Background

OCC’s mission focuses on the chartering and oversight of national banks to ensure their safety and soundness, on fair access to financial services, and on fair treatment of bank customers. As of March 2005, the assets of the banks that OCC supervises accounted for approximately 67 percent—about $5.8 trillion—of assets in all U.S. commercial banks. Among the more than 1,800 banks OCC supervises are 14 of the top-20 commercial banks in asset size.

OCC groups its regulatory responsibilities into three program areas: chartering, regulation, and supervision. Chartering includes not only reviewing and approving applications for charters but also reviewing and approving proposed mergers, acquisitions, and reorganizations. Regulation includes establishing written regulations, policies, operating guidance, interpretations, and examination policies and handbooks. Additionally, in its most recent strategic plan, OCC identified its regulatory approach as one that would ensure that national banks operated in a “flexible legal and regulatory framework” that enables them to provide a “full competitive array” of financial services.

According to OCC’s latest strategic plan, OCC’s supervision program consists of ongoing supervisory and enforcement activities undertaken to ensure that each national bank is operating in a safe and sound manner and is complying with applicable laws, rules, and regulations concerning the bank, customers, and communities it serves. OCC’s supervisory activities include examinations and enforcement actions, dispute resolution, ongoing monitoring of banks, and analysis of systemic risk and market trends. OCC policies establish a minimum level of activity that must occur during the supervisory cycle, during which time examiners assess the overall condition of the bank in the areas of capital adequacy, asset quality, management, earnings, liquidity, and sensitivity to market risks.
Such examinations are generally referred to as “safety and soundness” examinations. In large banks, much of this work is conducted throughout the year by examiners assessing specific aspects of a bank’s management and operations, while in the smaller banks, the on-site examination generally occurs at one time during a 12- or 18-month period. OCC has a team of full-time, on-site examiners who are located at large banks throughout the year and who conduct ongoing monitoring and examinations. In addition to the safety and soundness examinations, OCC conducts compliance examinations that assess the bank’s compliance with laws intended to protect or assist consumers, such as laws related to disclosure of loan terms, fair lending, equal credit opportunity, and others. Consumer compliance examinations are conducted on a continuous 3-year cycle in large banks and at least every 36 months at small banks. OCC traditionally has issued opinions on a case-by-case basis, rather than rules or regulations, on whether the National Bank Act preempts state laws that impose standards or restrictions on the business of national banks. In contrast, on January 13, 2004, OCC issued the two preemption rules on the extent to which the National Bank Act preempts the application of state and local laws to national banks and their operating subsidiaries. The rules and the manner in which OCC promulgated them generated considerable controversy and debate, including questions about OCC’s authority to issue the rules. According to OCC, the two rules “codified” judicial decisions and OCC opinions on preemption under the National Bank Act by making them generally applicable and clarified certain issues. 
The visitorial powers rule, as stated by OCC, clarifies that (1) federal law commits the supervision of national banks’ banking activities exclusively to OCC (except where federal law provides otherwise) and that (2) states may not use judicial actions as an indirect means of regulating those activities. The banking activities rule preempts categories of state laws that relate to bank activities and operations, describes the test for preemption that OCC will apply to state laws that do not fall within the identified categories, and lists certain types of state laws that are not preempted. In proposing the banking activities rule, OCC stated that it needed to provide timely and more comprehensive standards about the applicability of state laws to lending, deposit taking, and other authorized activities of national banks because of the number and significance of questions banks were posing about preemption in those areas. However, opponents such as consumer groups and state legislators feared that the preemption of state law, particularly concerning predatory lending practices, would weaken consumer protections. They noted, in commenting on the preemption rules, that the rules would prevent states from regulating operating subsidiaries of national banks and would diminish the states’ ability to protect their citizens. Prior to OCC’s issuance of the rules, consumers who had complaints with national banks or their operating subsidiaries sometimes filed complaints with state officials who tried to resolve them, although consumers could have filed such complaints with OCC, and many did. Since OCC issued the rules, some state officials refer all complaints involving national banks to OCC while others, through informal arrangements, still try to assist consumers. It is too soon to assess the practical effect of the rules on a consumer who has a complaint with a national bank, given the short time frame and legal questions raised by opponents to the rules. 
We address some facets of the rules’ practical effect on consumers in this report and will address others in our subsequent report on the impact of the rules on the dual banking system and consumer protection. One of OCC’s strategic goals is to ensure all customers have fair access to financial services and are treated fairly. The agency’s strategic plan lists objectives and strategies to achieve this goal, including fostering fair treatment through OCC guidance and supervisory enforcement actions where appropriate, and providing an avenue for customers of national banks to resolve complaints.

The main division within OCC tasked with handling consumer complaints is the Customer Assistance Group (CAG). This group is part of OCC’s Office of the Ombudsman, a distinct division of OCC that operates independently of the agency’s bank supervision function. In addition to CAG, the Office of the Ombudsman oversees (1) the national bank appeals process—a forum by which banks may appeal the results of OCC’s supervisory examinations and ratings—and (2) a postexamination questionnaire to obtain feedback from banks. The Ombudsman reports directly to the Comptroller and is a member of OCC’s senior management team (the Executive Committee), which includes the Chief Counsel, the Chief National Bank Examiner, and the Senior Deputy Comptrollers for Large Bank and Mid-size/Community Bank Supervision. CAG’s mission is to ensure that bank customers receive fair treatment in resolving their complaints with national banks. According to the 2004 Report of the Ombudsman, CAG carries out its mission by providing services to three constituent groups: (1) customers of national banks—by providing a venue to resolve complaints, (2) OCC bank supervisors—by alerting supervisory staff to emerging problems that may result in the development of policy guidance or enforcement action, and (3) national bank managers—by providing a comprehensive analysis of complaint volumes and trends.
The Deputy Ombudsman manages and directs CAG operations. Since 1999, CAG has employed about 40 full- and part-time staff, and it had 49 staff in 2005. The annual operating and personnel budget attributable to CAG operations more than doubled from $2.6 million to $5.4 million between 1999 and 2005. According to our analysis of CAG budget and staffing data, the budget’s growth has outpaced that of staff due to the design and implementation of CAG’s computer network.

OCC’s Handling of Consumer Complaints Is Similar to That of Other Regulators

OCC’s process for handling and resolving consumer complaints is similar to that of the other three federal bank regulators. We identified six distinct steps that all of the federal regulators follow when processing consumer complaints. Unlike two of the federal regulators, OCC lacks a process for collecting feedback from consumers it assists. OCC and the other federal regulators also resolve complaints in a similar fashion, with the outcomes generally falling into the same categories. While the most common resolution of complaints was that of the regulator providing the consumer additional information, regulators also consider a complaint resolved if it is withdrawn or tabled due to litigation, or if the regulator determines that the bank did, or did not, make an error. The volume of complaints OCC handles is generally in proportion to the assets of the national banks it supervises. From 2000 through 2004, OCC handled on average more than twice as many complaints as the other regulators combined. OCC and the other federal regulators have similar goals for responding to consumer complaints in a timely fashion. However, by combining consumer inquiries and consumer complaints in determining whether it met its timeliness goals, OCC overstated its performance on these goals.
OCC and Other Federal Regulators Follow the Same General Process in Resolving Consumer Complaints

All four federal regulators we reviewed take similar approaches in processing consumer complaints about banks they supervise. When processing complaints, the regulators define their role as that of a neutral arbiter between consumers and the banks they regulate. For instance, the 2004 Report of the Ombudsman states that CAG’s role is to be neutral in answering questions and offering guidance on applicable banking laws, regulations, and practices and that it should not be an advocate for either the bank or consumers. As illustrated in figure 1, each regulator generally follows six distinct steps in processing a complaint:

1. The consumer submits the complaint.
2. The regulator determines if the bank is under its supervision.
3. The regulator forwards the complaint to the bank.
4. The bank sends a response to the regulator.
5. The regulator examines the response to see if it completely addresses the consumer’s complaint.
6. The regulator notifies the consumer of the complaint’s outcome.

Although consumers may initially contact OCC or other regulators about their complaints via various methods, such as telephone, mail, fax, or, in some cases, e-mail, regulators normally do not formally accept a complaint until they have received a signed complaint form or letter. After a regulator receives a formal complaint, it must then determine if the bank involved is under its jurisdiction. If not, the regulator determines who the appropriate regulator is and provides the consumer with contact information or forwards the complaint. Once the appropriate regulator receives the complaint, it forwards the complaint to the bank. OCC uses a secure Web-enabled application—CAGNet—that permits it and participating national banks to send and receive documents and images electronically. Banks have a set period of time to respond to a complaint, though the period varies among regulators.
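The jurisdiction check that gates this process can be sketched in code. This is a purely illustrative model; the step descriptions, bank names, and function below are hypothetical and are not OCC or agency terminology.

```python
# Illustrative model of the six-step complaint process shared by the
# four federal regulators; all names here are hypothetical.

COMPLAINT_STEPS = [
    "consumer submits complaint",
    "regulator checks jurisdiction",
    "regulator forwards complaint to bank",
    "bank responds to regulator",
    "regulator reviews bank response",
    "regulator notifies consumer of outcome",
]

def route_complaint(bank, supervised_banks):
    """Step 2: the complaint proceeds only if the bank is under this
    regulator's supervision; otherwise the consumer is referred on."""
    if bank in supervised_banks:
        return "process"  # continue through steps 3-6
    return "refer to appropriate regulator"

supervised = {"First National Bank", "Second National Bank"}
print(route_complaint("First National Bank", supervised))  # process
print(route_complaint("Hometown Savings", supervised))     # refer to appropriate regulator
```

The sketch captures the report's point that the regulators act as routers as much as investigators: a substantial share of contacts are simply redirected to another agency at step 2.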
Among the four federal regulators, the time allowed for an initial response ranges from 10 to 20 days, with OCC requesting a response within 10 days. All of the regulators permit the banks to request additional time to review the complaint or compile necessary information. After completing its review of the complaint, the bank sends a response to the regulator. Often, the bank responds concurrently to the consumer, since the consumer is the bank’s customer. After receiving the bank’s response, each regulator examines it to determine if the consumer’s complaint has been completely and appropriately addressed. At this step, the regulator examines the complaint and response to determine if any additional follow-up by its supervisory or legal staff is necessary. If it is not satisfied with the bank’s response, the regulator requests additional information or clarification from the bank. Once satisfied with the bank’s response, the regulator notifies the consumer about the outcome of the complaint.

OCC Does Not Seek Feedback from Consumers on Services Provided

Of the four federal regulators, two offer consumers a method for providing feedback on the complaint process once the regulator has notified the consumer of the outcome. The Federal Reserve and FDIC offer consumers a feedback survey once their complaints have been resolved. The Federal Reserve mails a satisfaction survey, while FDIC directs consumers to a Web-based survey. Federal Reserve officials explained that the Federal Reserve has surveyed consumers since the mid-1980s and can link individual surveys back to original complaints, but the agency has not analyzed the aggregate data or used any findings from the surveys to modify its complaint-handling process. However, Federal Reserve officials explained that sometimes specific survey results are shared with staff who worked on the complaint or with management to better target staff training.
Neither OCC nor OTS has any formal mechanism to measure satisfaction with the consumer complaint process (though officials from both agencies explained that they receive many letters expressing both satisfaction and disappointment with their services). OTS officials explained that the small number of complaints they receive does not warrant the resources necessary to implement a customer satisfaction survey. Like other federal agencies, OCC measures and reports on certain aspects of its performance in accordance with the Government Performance and Results Act of 1993 (GPRA). According to the 2004 Report of the Ombudsman, OCC measures the effectiveness of its supervisory process through an examination questionnaire, which is provided to all national banks at the conclusion of their supervisory cycles. The questionnaire is designed to gather direct and timely feedback from banks on OCC’s supervisory efforts. While the questionnaire is a useful step to help OCC assess its performance regarding its national bank clients, OCC does not have a comparable tool to gather information regarding its performance in assisting the consumers of national banks. Collecting information about how individual consumers assess the assistance CAG provides in answering their questions or helping resolve complaints with their banks could be equally helpful for OCC in measuring its performance in ensuring fair treatment of bank customers. OCC officials stated that they understand the value of measuring the satisfaction of consumers whom they assist and are evaluating several different options for obtaining consumer feedback.

Outcomes of Complaints Handled by All of the Federal Regulators Fall into the Same General Categories

OCC and the other three federal regulators offer consumers similar resolutions in their final responses to complaints.
In analyzing the complaint data across the four federal regulators, we found that the regulators, after investigating complaints, generally resolved them in one of four ways, shown in order of decreasing frequency: (1) providing the consumer with additional information without any determination of error, (2) treating the complaint as withdrawn or tabling complaints already in litigation, (3) finding that the bank had not made an error, and (4) finding that the bank had made an error.

Regulators Provided Consumers Additional Information

Between 2000 and 2004, the most common resolution of complaints handled by all federal regulators was that consumers were provided more in-depth or specific information about their complaints (see fig. 2). In these cases, the regulator’s investigation revealed that the consumer required additional information to understand his or her situation, and the regulator made no determination of whether the bank or the consumer had made any error. For example, the regulator might explain to the consumer that the complaint involves a contractual dispute that is better handled by a court. For instance, in one case, OCC advised a consumer to consider seeking legal counsel since the matter between the bank and the consumer involved a factual dispute concerning the interest rate on a credit card. The bank, based on its review of credit information, raised the interest rate on the consumer’s credit card after providing the consumer adequate notice about the impending change to the terms of credit, which included information on how to opt out of the credit card if the consumer did not agree to the new terms. The consumer complained that the bank failed to provide adequate notice and, thus, improperly raised the interest rate.
After reviewing the relevant documentation from both the consumer and bank, OCC informed the consumer that since the bank claimed to have sent the proper notice to the consumer and the consumer denied receiving the notice, the agency could not judge which party was correct. Therefore, OCC counseled the consumer to consider taking legal action should the consumer want to pursue the matter further.

The regulator may determine that rather than wrongdoing, there was a miscommunication between the bank and its customer. For example, in one case involving a checking account, a bank charged a maintenance fee to an account with a zero balance. The checking account had a minimum monthly maintenance fee, which the bank deducted automatically from the checking account. When the bank charged the monthly maintenance fee and the balance became negative, the bank charged an overdraft fee. The consumer understood that overdraft protection should cover the maintenance fee but did not recognize that overdraft protection would result in an additional fee. After OCC forwarded the complaint to the bank, the bank decided to no longer hold the consumer liable for the delinquent monthly maintenance and overdraft fees that had accumulated. OCC viewed the matter as a miscommunication between the bank and consumer.

The regulator may determine that the complaint should be forwarded to a different regulator. When appropriate, all four federal regulators directly refer consumers, or forward their complaints, to other federal and state agencies. We found that three federal regulators—the Federal Reserve, FDIC, and OCC—referred a considerable number of consumers who contacted them to another federal agency to have their complaints or inquiries addressed. For example, from 2000 through 2004, FDIC referred about 40 percent of the consumers who contacted it with a complaint to another federal agency; the Federal Reserve and OTS referred about 53 percent and 3 percent, respectively.
OCC, during this same period, referred approximately 38 percent of its callers to another federal agency.

Complaint Is Considered Withdrawn or Tabled Due to Litigation

For OCC, the second most frequent type of complaint resolution was “withdrawn” or “complaint in litigation,” while it was the least common for the Federal Reserve and OTS and the third most common for FDIC. These are complaints that, by and large, the regulator is not able to address. None of the federal regulators address complaints that they find are already involved in a legal proceeding at the time the consumer contacts them. In the case of OCC, one of the major reasons for complaints being withdrawn, according to OCC officials, is that the consumer does not send in the requested information, such as the signed complaint form or letter OCC requires before it begins any complaint investigation. As shown in figure 2, in 2000, OCC closed about 17 percent of complaint cases because it did not receive requested information or the complaint was in litigation, while in 2004, OCC closed nearly 37 percent of cases for the same reasons. One reason for this increase, OCC officials explained, is that in mid-2000 they made changes to the database that tracks complaints. In particular, after the changes, the database coded complaints as “withdrawn” when the regulator did not receive information it requested from a consumer within 30 days. Previously, this type of complaint remained open indefinitely or until the consumer provided the information. OCC’s policy is to reopen a complaint case if the consumer sends in the requested information more than 60 days after OCC requested it. Since OCC does not open a new case in such instances, this policy negatively affects OCC’s performance against its timeliness goals for resolving complaints.
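Because a reopened complaint is not given a new case number, the resolution clock runs from the original filing date and includes the period the case sat withdrawn. That age calculation can be sketched as follows; the dates are hypothetical, chosen only to illustrate the effect.

```python
from datetime import date

# A reopened complaint keeps its original filing date, so its measured
# age includes the months it sat withdrawn awaiting the consumer's
# information (all dates hypothetical).

filed = date(2004, 1, 5)     # consumer first contacts OCC
reopened = date(2004, 4, 1)  # signed complaint form finally arrives
closed = date(2004, 4, 20)   # investigation completed

measured_days = (closed - filed).days   # clock runs from original filing
active_days = (closed - reopened).days  # time actually spent investigating

print(measured_days)  # 106 -> counted against the 60-day goal
print(active_days)    # 19
```

Under this accounting, even a complaint investigated promptly after reopening can register as a miss against the 60-day target, which is the adverse effect the officials describe.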
According to OCC officials, another reason for this increase is OCC’s policy of encouraging consumers to contact the bank prior to filing a complaint with OCC. It is typical for the staff to provide a case number and complaint form to the consumer to use if he or she is unsuccessful in resolving the problem with the bank. OCC officials explained that in many instances they assume that the bank and the consumer have worked the problem out, since the consumer never sends in a completed complaint form. In these instances, OCC codes the complaint as withdrawn because the consumer did not submit a completed complaint form. OCC officials explained that this coding procedure has an advantage: although the complaint has been withdrawn, the information that the consumer provided through the initial contact is available to examination staff as well as to the bank, and it provides insight into potential issues at the bank.

Regulators Determine That Bank Was Not in Error

This category of complaint resolution was third for OCC in terms of frequency, while it was second for the other regulators. The regulators frequently resolve cases by finding that the bank did nothing wrong and that the consumer did not have a legitimate complaint; that is, the bank was correct. For example, in one case, OCC informed the consumer that an incorrectly completed deposit slip led the consumer to believe the bank improperly deducted funds from the consumer’s checking account. OCC had the bank provide the consumer copies of the deposit slip and checks recorded on the slip, which showed that the consumer had inaccurately transcribed the amounts from the checks to the deposit slip.

Regulators Determine That Bank Was in Error

“Bank Made an Error” was the least common outcome for complaints resolved by OCC and FDIC and next-to-least common for the other two regulators. The bank error category includes both regulatory violations and problems consumers had with the bank’s customer service.
In these instances, the regulators determine that the bank did make an error in how it provided its products and services to the consumer. For example, in one case, OCC determined that a bank did not properly respond when fraudulent charges were identified on a consumer’s credit card account and did not reverse them. The complaint was resolved when the bank reimbursed the consumer’s credit card account.

OCC Handles a Greater Volume of Complaints Than the Other Bank Regulators

Likely reflecting the greater volume of bank assets under its supervision, OCC handled more complaints from 2000 through 2004 than FDIC, OTS, and the Federal Reserve combined. During this period, OCC processed, on average, 10 complaints for every billion dollars in assets under its supervision, while FDIC averaged 6 complaints, the Federal Reserve 3 complaints, and OTS 5 complaints (see fig. 3). From 2000 through 2004, credit cards were the most common product involved in complaints addressed by OCC, FDIC, and the Federal Reserve. According to officials from OCC and FDIC, complaints about credit cards will continue to remain high because consumers have multiple credit cards and use them frequently. During this same period, the assets of banks under OCC’s supervision that issued credit cards averaged $221 billion, while the total assets of such banks under the supervision of the other three regulators averaged $87 billion. Given these numbers, it would appear that the volume of complaints OCC handles is not out of proportion to the bank assets under its supervision, especially given that OCC supervises several banks that specialize in issuing credit cards. Although OTS also receives complaints about credit cards, during the same period it received the most complaints about home mortgage loans. This is not surprising, given that mortgage lending is a leading activity of the thrifts and savings banks OTS supervises.
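The normalization behind the complaints-per-billion comparison is simple arithmetic: complaints handled divided by supervised assets in billions of dollars. The figures below are illustrative round numbers consistent with the averages cited above, not actual agency totals.

```python
# Complaints handled per billion dollars of supervised assets
# (illustrative figures, not actual agency totals).

def complaints_per_billion(complaints, assets_billions):
    return complaints / assets_billions

# A regulator handling 58,000 complaints across $5,800 billion in
# supervised assets averages 10 complaints per billion dollars.
print(complaints_per_billion(58_000, 5_800))  # 10.0
```

Normalizing by assets rather than comparing raw counts is what supports the report's conclusion that OCC's larger complaint volume is roughly proportional to its larger supervisory footprint.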
Federal Regulators Have Similar Timeliness Goals, but OCC Overstated Its Timeliness in Resolving Complaints by Including Inquiries in Its Calculation

Consistent with GPRA and its implementing guidance, OCC provides information in its annual report that includes performance measures, workload indicators, customer service standards, and the results achieved during the fiscal year. OCC, like the Federal Reserve, FDIC, and OTS, has a goal of resolving complaints within 60 days. In fiscal years 2003 and 2004, OCC’s target was to close 80 percent of all complaints within 60 calendar days of receipt. According to its 2003 annual report, OCC exceeded its target by closing 87 percent of complaints within 60 days. However, our analysis of calendar year data that OCC provided to us shows that only about 66 percent of complaints were closed within 60 days. Similarly, the 2004 annual report states that OCC closed 74 percent of complaints within the established time frame, while our analysis of OCC’s data shows that in calendar year 2004, the figure was approximately 55 percent. The discrepancy between the percentages reported in the annual reports and our analysis cannot be entirely explained by the fact that we reviewed calendar year data while the annual reports include fiscal year data. OCC officials explained that the differences between its reported figures and our analyses are the result of differences in the consumer complaint data on which each is based. The annual reports stated that the agency closed 69,044 complaints in 2003 and 68,104 complaints in 2004. However, these totals include inquiries that the agency handled, not just complaints. Inquiries—which may be questions or comments subject to an immediate, simple answer—can typically be handled at the initial contact between the consumer and OCC, while some complaints can take well over the 60-day time frame to investigate and resolve.
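The inflation that results from pooling quick inquiries with slower complaints can be seen with simple arithmetic; the counts below are hypothetical, not OCC’s actual figures.

```python
# Illustrative arithmetic: inquiries are typically resolved at first
# contact, so pooling them with complaints inflates the share of
# cases closed within the 60-day target (all counts hypothetical).

inquiries_closed_fast = 30_000    # resolved at initial contact
complaints_total = 40_000
complaints_closed_in_60 = 26_000  # 65% of complaints meet the target

pooled_rate = (inquiries_closed_fast + complaints_closed_in_60) / (
    inquiries_closed_fast + complaints_total)
complaints_only_rate = complaints_closed_in_60 / complaints_total

print(round(pooled_rate * 100))           # 80
print(round(complaints_only_rate * 100))  # 65
```

In this hypothetical, pooling lifts a 65 percent complaints-only rate to 80 percent, mirroring the gap between OCC’s reported figures and our analysis of complaints alone.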
Therefore, by including both inquiries and complaints in determining whether it met its timeliness goals, OCC overstated its performance, as measured by the percentage of complaints resolved within the target time frame. OCC officials explained that the data in the annual reports were presented using the generic term “complaints” to simplify the amount of information given to the reader. As OCC officials explained, some complaints involve more complex products, such as mortgages. Also, depending on the nature of the complaint, such as allegations of fair lending abuse, some investigations take more time. All four regulators have a percentage of complaints that they cannot resolve within their established time frames. OCC officials also explained that the time used in resolving complaints is a result of how OCC handles consumer appeals. Since OCC considers an appeal a reopened complaint, the start date for calculating the number of days it takes to resolve a complaint reverts back to the date the complaint was originally filed with the agency. This practice adversely affects the measure of OCC’s performance against its timeliness goals.

CAG’s Consumer Complaint Data Inform OCC’s Bank Supervisory Activities

According to the 2004 Report of the Ombudsman, CAG’s role includes providing information to OCC examiners and the banks to “elevate” the issues raised by consumers and make them visible to OCC staff involved in supervision. The complaint data CAG collects, summarizes, and disseminates to OCC’s examiners help the examiners identify banks, activities, and products that require further review or investigation. OCC supervision guidance requires examiners to consider consumer complaint information when assessing a bank’s overall compliance risk and ratings and when scoping and conducting their examinations. OCC guidance also requires that banks have processes in place to monitor and address consumer complaints.
According to compliance examiners we interviewed, the examiners learn about complaints primarily through a Web-based application called CAG Wizard. The application allows examiners to access near real-time consumer complaint data. Examiners can review specific complaints, generate standard reports, or conduct customized searches of the data. The information available to examiners includes data on all of the banks OCC supervises, not just those to which an examiner is currently assigned. With this capability, examiners can also generate comparable reports on similar institutions. Examiners with whom we spoke said CAG Wizard is a useful tool. They reported using the application to prepare for an examination or when developing the annual risk assessment of a bank. Often, the examiners compare the complaint data that banks maintain with the data CAG provides through CAG Wizard. OCC examiners and CAG staff also collaborate on other activities. For example, CAG staff may alert examiners if there are certain types of complaints that warrant further attention or if patterns emerge in the overall complaint volume about a bank. CAG officials and OCC examiners told us that there is an open line of communication between their respective staffs. For example, examination staff at one national bank undertook a specific investigation based on a complaint forwarded from CAG. Examination staff specifically requested and reviewed information from the bank concerning the advertising of a product and the bank’s associated fees. Examiners can also forewarn CAG staff about any impending bank actions related to products, services, or policy that may cause consumers to complain. For instance, a bank might be changing the terms on a credit card product and, as a result, sending a notification to customers.
Such mailings typically lead to an increase in calls to CAG, but with forewarning from the examiners, CAG can have more accurate information on hand to use in assisting bank customers who call with questions. OCC also uses consumer complaint data collected by CAG to formulate guidance for national banks. These guidelines cover various aspects of banking, including risks involved with using third-party vendor partners (e.g., when a bank partners with another business to provide a service to bank customers), predatory lending, and credit card practices. For example, CAG received a significant number of consumer complaints about aggressive marketing tactics and inadequate disclosures related to credit repair products offered through third parties. In response to the complaints received, OCC issued guidance in 2000 warning banks about the risks posed by engaging third-party vendors for products and services linked to the credit cards that banks issue. CAG provides the largest national banks with aggregate information on the complaints about them. Also, CAG staff meets annually with officials of at least the 10 banks that received the most complaints during the previous calendar year. In 2004, the 10 banks with the most complaints accounted for 81 percent of all the complaints that OCC received. At these meetings, CAG officials discuss significant issues, such as data on complaint volume and trends, comparable data for the bank’s peers and the industry, and current issues the bank should address. Prior to these meetings, CAG officials consult with examiners on which specific issues warrant additional analysis or attention by bank officials. According to examiners, they attend the meetings and offer input on any specific topics CAG should highlight. Most bank officials with whom we spoke also said that the meetings with CAG were useful in helping them address customer satisfaction.
Despite OCC Efforts, State Officials and Consumer Advocates Still Have Concerns About OCC’s Commitment and Capacity to Address Consumer Complaints

Many of the state officials and advocates with whom we spoke continue to be concerned that OCC does not have the necessary commitment or capacity to provide consumers with sufficient protection against violations of laws. Unlike consumer advocates and state attorneys general, OCC defines itself as a neutral arbiter in assisting consumers. Yet state officials and consumer advocates perceive OCC as being pro-bank, not neutral, and as such, they may hesitate to forward complaints on behalf of their citizens or clients. Some officials were unaware of CAG’s process for handling consumer complaints; however, OCC recently took steps to publicize its customer assistance function. State officials were concerned about a perceived unwillingness by OCC to share information about the outcomes of complaints. Other groups with whom we spoke viewed CAG’s centralized location as a shortcoming because, they said, CAG staff could not be familiar with current lending practices that pose high risk to consumers or with problematic institutions in local areas. OCC has taken some steps to provide flexibility in its operations to meet any upcoming increases in demand for its services.

While State Officials and Advocates We Contacted Remain Concerned About OCC’s Commitment to Consumer Protection, Some Were Unaware of Its Consumer Protection Efforts

As we previously reported, OCC received close to 3,000 letters commenting on the banking activities rule, with the majority of commenters opposed to the rule and citing concerns about weakened consumer protections. Comments from state officials argued that a lack of state regulation would create “an enormous vacuum of consumer protection without adequate federal regulation to fill the gap.” Many of these commenters suggested that OCC needed to do more, not less, to protect consumers.
These views were echoed by those with whom we spoke in preparing this report, as were concerns that the visitorial powers rule severely limits the advocates’ and state officials’ abilities to assist their constituents and clients, thereby exposing those constituents and clients to potential consumer protection violations. The rule, according to OCC, clarifies that federal law commits the supervision of national banks exclusively to OCC. Because advocates work to advance the interests of their clients, they do not see their role being adequately filled by OCC, or CAG, which defines itself as a neutral arbiter. Although part of OCC’s mission is to ensure fair access to financial services and fair treatment of bank customers, the perception remains, among the groups with whom we spoke, that OCC is “on the side” of the banks. Some of the state officials and advocates with whom we spoke were unaware of CAG, its process for responding to consumer inquiries and complaints, or the help it can provide, and some said that they are reluctant to refer clients to the agency, given their level of mistrust of OCC and their lack of knowledge about its customer assistance function. However, CAG data from November 2001 to September 2005 show referrals from all 50 state banking departments and the offices of 49 state attorneys general. OCC officials said that they have several ongoing initiatives aimed at better informing the public about their services. For example, OCC recently revised its consumer complaint brochure. The brochure includes “frequently asked questions” about OCC and the role it, and CAG specifically, play in resolving consumer complaints. The new version will be printed in Spanish and English. As of November 2005, OCC said it had distributed a small number of brochures to each national bank.
In addition, a “camera-ready” version will be made available to banks so that they can print more copies if they choose. However, OCC officials said they will not require the banks to display or distribute the brochures. In addition, officials said they do not have a plan to distribute the brochure directly to the general public, although they did give a small supply to the Better Business Bureaus. We note that the information in the brochure is available on the OCC Web site. OCC also informs the public about CAG services and performance through the annual Ombudsman report. This report is available on OCC’s Web site and contains information on the total case volume handled in the previous year, as well as a general discussion of complaint volumes and trends. Also, in 2004, OCC redesigned its Web site to enhance consumers’ ability to access information and learn more about its services. The redesigned Web site provides a searchable list of national bank operating subsidiaries that do business directly with consumers, which allows individual consumers to determine whether an entity is associated with a bank supervised by OCC. However, some of the consumer groups with whom we spoke said that one limitation of this list is that it does not include dates indicating when the operating subsidiaries became associated with a bank, which can be important in trying to identify the parties involved in a transaction at a particular time. OCC officials said that they will address any complaint brought against a national bank and its operating subsidiaries, regardless of when the transaction took place. OCC officials also said CAG staff are engaging in a series of outreach meetings with state government organizations and Better Business Bureaus. For example, in November 2004, senior CAG officials met with one state attorney general’s office to demonstrate how OCC handles consumer complaints.
That state attorney general told us that it was clear from the meeting that the CAG officials seemed earnest in wanting to cooperate, even though the two sides might still disagree on the appropriate roles for OCC and the states in protecting consumers. OCC officials said they intend to hold similar meetings with other state attorneys general and state banking departments, although none were planned as of November 2005. OCC staff is also engaged in outreach efforts with the Better Business Bureaus, including conference presentations and meetings with several bureaus, in order to educate them about OCC’s customer assistance services and to enable OCC to better understand the nature and volume of complaints the Better Business Bureaus receive involving national banks. In addition, OCC officials are requesting that Better Business Bureaus update their Web sites to include a link to OCC. Also, during fiscal year 2005, representatives from CAG and OCC’s Community Affairs office held outreach meetings with national consumer group organizations, such as the Consumer Federation of America, the American Association of Retired Persons, and the National Association of Consumer Agency Administrators.

State Officials View OCC’s Efforts to Share Information About Complaint Outcomes as Unsatisfactory

Among some of the state attorneys general with whom we spoke, there is the perception that OCC is not willing to cooperate in protecting citizens, as evidenced, in part, by their perception of OCC’s unwillingness to share information on consumer complaint outcomes. Most state attorneys general staff with whom we spoke said they are willing to forward complaints to OCC, but they have not been receiving what they perceive to be adequate information on the outcome of referrals. According to OCC officials, it is agency policy to send the consumer a letter acknowledging receipt of a complaint submitted to OCC.
If a complaint is forwarded to OCC from another agency, it is OCC’s policy to send a copy of the acknowledgment letter to the forwarding agency. Nonetheless, some state attorneys general and other state officials said that, in their experience, OCC does not provide any information about the resolution of the complaints, which is what state officials want. However, in commenting on a draft of this report, OCC officials told us that if state officials request information on the resolution of an individual complaint, OCC will notify them of the outcome. Specifically, they said that an attorney from OCC’s Community and Consumer Law Division will contact the state official once a case is closed and will discuss the case. Although it is not a written policy, OCC officials told us these contacts are common practice. OCC has also proposed a memorandum of understanding (MOU) with state officials concerning the referral of consumer complaints about national banks. The proposed MOU stated: “Where you believe there is a broader issue, such as the applicability of a particular State law to national banks generally, or if you have information that a specific national bank is engaged in a particular practice affecting multiple customers that is predatory, unfair or deceptive, this information should be communicated to the OCC’s Office of Chief Counsel for coordination.” The MOU was sent to all state attorneys general as well as the National Association of Attorneys General (NAAG) and the Conference of State Bank Supervisors (CSBS). Some of the officials from banking departments and the offices of attorneys general that we interviewed, as well as representatives of CSBS, said they viewed OCC’s proposed MOU as unsatisfactory because, in their view, it essentially favored OCC.
In a written response to the OCC Comptroller declining to sign the MOU, one state’s attorney general described the proposal as one where “states send complaints to OCC with the idea that, at some later date, we would have the right to inquire about the results of the ’resolution’ of the matter obtained by OCC.” In addition, some of the state officials with whom we spoke believed that signing the proposed MOU would amount to a tacit agreement to the principles of the banking activities and the visitorial powers rules. The MOU included a provision intended to address this concern: “Nothing in this MOU is intended to or shall be construed to affect, modify, or imply any conclusion regarding the jurisdiction or authority of either of the agencies or affect the rights or obligations of the agencies under existing law concerning the scope of the respective jurisdiction of each of the agencies to supervise, examine or regulate the regulated institutions covered by this MOU.” Only one state official signed the original 2003 memorandum, and according to OCC, to date, no additional state officials have signed the 2004 version.

Others Raise Concerns About CAG’s Centralized Operations, Although OCC Cites Advantages

Consumer groups also expressed misgivings about forwarding complaints to OCC. Many of the groups with whom we spoke viewed CAG’s centralized location as a shortcoming because they believe that the CAG staff thus could not be familiar with current lending practices that pose high risk to consumers or with problematic institutions in local areas. The consumer advocates we interviewed said an in-depth understanding of local real estate conditions was necessary to prevent predatory lending abuses. Furthermore, they said that OCC’s 60-day time frame is too long to effectively address many of their clients’ acute needs, such as when immediate action is needed to stop a foreclosure proceeding.
We note, however, that the other federal bank regulators and three of the six state regulators with whom we spoke all have a 60-day goal for resolving complaints. According to OCC officials, the agency centralized its consumer operations in Houston because doing so offers efficiency advantages. FDIC officials said they are consolidating their complaint handling operations for the same reasons. OCC examiners we interviewed also pointed out that a central facility makes sense, given that national banks operate across state lines and have many customers in multiple markets. According to CAG and bank supervision staff, funneling data to, and analyzing it in, one location provides more potential for seeing national trends and potential problems. However, there are also potential drawbacks to having only one operational facility available for any such customer function, as it increases the likelihood of disruptions in service. For example, during Hurricane Rita in September 2005, telephones at CAG were not staffed for 4 days, due to the evacuation of Houston. However, consumers were able to submit complaints by either E-mail or fax. During that period, OCC received 14 faxes opening new cases, as well as 184 E-mails—34 from bankers and 150 from consumers. Of those complaints from consumers, 16 were from Members of Congress. OCC staff said these numbers are in line with normal activity levels. When we asked about the closure, the Ombudsman replied that he had decided to obey the evacuation notice issued by Houston-area officials, and while this may have resulted in some backlog of cases, his first priority was ensuring the safety of the Houston OCC employees. In December 2005, OCC began seeking private-sector support for the CAG facility in order to expand its telephone service hours.
This arrangement will allow OCC to quickly expand CAG’s telephone operating hours in the event of an emergency, and because the third-party vendor will be located outside of Houston, the vendor’s staff will be able to help OCC continue to serve consumers even if the Houston office is unable to operate.

Some Groups and Officials Have Concerns About Complaint Handling Capacity, and OCC Plans to Increase Capacity

Some consumer groups and state officials stated that the recent banking activities and visitorial powers rules could potentially increase the number of complaints OCC receives, since OCC will now be more likely to handle all complaints pertaining to national banks and their operating subsidiaries. These groups and officials argued that OCC did not have the capacity to adequately handle any new volume. Furthermore, they contend that OCC could not match the resources (i.e., personnel and hours of operation) of the state banking departments, consumer credit divisions, and offices of state attorneys general that currently work to resolve complaints and, more broadly, to identify fraudulent and abusive practices. However, we note that state banking departments and state attorneys general also handle other types of consumer complaints, such as complaints about automobile dealers, mortgage brokers, and check cashers. Since OCC issued the preemption rules in January 2004, the volume of complaints, according to CAG data, has remained fairly steady. In fact, between 2000 and 2004, complaints received by OCC decreased 37 percent. According to OCC staff, complaint volume was high around 2000 due to a settlement with a large national bank on credit card disclosure issues. CAG data for 2005, while available only through June at the time of our review, indicate a potential increase in the volume of complaints when compared with 2004. CAG officials believe that the conversion of two large banks from state charters to federal charters in 2004 accounts for the increase.
That is, customers of those banks who had complaints previously contacted the appropriate state regulator and either the Federal Reserve or FDIC, which jointly regulate state-chartered banks. After the banks converted to federal charters, customers contacted OCC concerning any complaints. These data suggest that an increase to the levels of complaints experienced before the 2004 preemption rules could be absorbed by current OCC resources. Further, CAG data show that the total number of complaints received in any given year from state offices, including banking departments and state attorneys general, is a relatively small percentage of the total number of complaints; therefore, any increase in referrals to OCC from those offices might not have a dramatic effect on total overall volume. Nevertheless, concerns that OCC resources were not equivalent to those of a state attorney general or state banking department were still prevalent among some of those with whom we spoke at the state level. However, OCC officials said that they have staff—beyond CAG—that work on consumer protection issues, including bank examiners in compliance supervision and attorneys in the Community and Consumer Law and Enforcement and Compliance divisions. Until 2004, OCC staffed the CAG’s toll-free telephone line 4 days a week, 8 hours a day, but it now provides service 5 days a week. One measure OCC uses to gauge how effectively it is serving customers is the wait time for callers to speak with a CAG representative. OCC officials told us their goal is to answer 80 percent of CAG calls within 3 minutes or less. According to OCC data, between June 2004 and November 2005, CAG met this goal, although wait times generally were longer for Spanish-speaking services. In addition, to accommodate the expected increase in call volume due to recent charter conversions, OCC recently hired more CAG specialists.
Lastly, in December 2005, OCC began seeking private-sector support for the CAG facility in order to expand its telephone service hours. A third-party vendor will handle routine matters, such as providing materials to satisfy noncomplex questions, obtaining information from callers that is necessary to open a case file, routing the caller to the appropriate OCC specialist, and providing the status of an open case. In addition, the vendor’s employees will be able to direct the many callers who have concerns that pertain to institutions not regulated by OCC to the appropriate regulator. OCC plans to begin expanding the CAG’s telephone hours of operation after vendor selection and training are completed.

Conclusions

Overall, OCC’s consumer complaint handling operations appear to be in line with the practices of other regulators, with OCC handling a larger volume of complaints than the other bank regulators, likely reflecting its position as the supervisor of banks that account for the majority of the nation’s bank assets. A significant portion of OCC’s and other regulators’ work involves providing or clarifying information for bank customers who have questions or have misunderstood a bank product or service. Officials from all four regulators said that assisting consumers through the complaint process is an important part of their efforts to educate consumers about financial products and services. Two of the federal bank regulators collect some feedback from consumers who make complaints or inquiries; OCC does not. In contrast, OCC does seek feedback from banks after every examination, through a survey. Given that part of OCC’s mission is to ensure that consumers of national banks’ products and services are treated fairly and have fair access to financial services, obtaining feedback from bank customers who contact CAG should be useful both in improving OCC’s service to customers and in helping banks to do likewise.
Moreover, federal standards reflected in GPRA require that government agencies measure their progress toward goals, including those related to serving the public. OCC measures its timeliness in serving consumers with complaints and inquiries as one indicator of its performance and discloses the results in reports that are publicly available. However, because those reports combine data on complaints and data on inquiries—which are questions or comments that are subject to an immediate, simple answer and typically require less time to handle—they overstate OCC’s performance in meeting its timeliness goal for resolving actual complaints. OCC appears to make appropriate use of the data CAG collects and analyzes by informing banks about their performance in relation to consumer complaints and by using the data to inform its examination and supervisory activities. CAG’s analysis of complaint data is presented to bank officials annually and used to identify any concerns. OCC examiners reported CAG data as useful tools in scoping examinations and in assessing areas of risk. We documented instances when examiners’ audit plans were influenced by information from CAG. We also identified instances when information gathered from CAG complaints and additional research by supervisory staff contributed to the development of supervision policies and guidance. The concerns expressed by a broad range of consumer advocates and state officials indicate some uneven understanding of OCC’s process for handling consumer complaints, possibly contributing to the lack of trust that the agency will be aggressive in protecting consumers’ interests. Because these concerns may inhibit state officials or consumer advocates from sharing information with or referring consumer complaints to OCC, they could adversely affect the agency’s effectiveness in regulating banks or assisting bank customers who have complaints. 
Consumer advocates and others are concerned about CAG’s centralized location and its capacity to handle complaints, particularly if the volume of complaints should increase. Recent efforts, such as outreach to the Better Business Bureaus and development of a revised brochure for consumers regarding CAG, are appropriate steps designed to better inform the public of OCC’s process and services. However, the distribution plans for the brochure focus on the banks and rely on them to share the brochure with bank customers, if the banks wish. Given that the former Comptroller has acknowledged that OCC and state officials “have a mutual interest in ensuring that consumers are protected from illegal, predatory, unfair, or deceptive practices,” it is essential that OCC undertake outreach to key state partners—regulators and consumer advocates—in a manner that effectively and efficiently informs the public, and especially customers of national banks, about what CAG does and how state officials and OCC can work together to protect consumers. Such efforts can not only raise awareness among the states about OCC’s efforts and capabilities to assist consumers, but might also help allay the suspicion and mistrust we identified and construct a path for better cooperation among OCC, state officials, and consumer advocates in the future.

Recommendations

To identify ways to improve its process for handling consumer complaints and inquiries and its efforts to better inform, educate, and serve bank customers, we recommend that the Comptroller of the Currency take the following three actions:
- Develop and implement a feedback mechanism to receive input and measure satisfaction of bank customers who have used CAG services.
- Revise the data publicly reported on timeliness to reflect complaints resolved within the 60-day goal separately from data reported on inquiries resolved within the time frame.
- Develop and implement a comprehensive plan to inform bank customers, consumer advocates, state attorneys general, and other appropriate entities of OCC’s role in handling consumer inquiries or complaints about national banks. The plan could include such steps as directly distributing an informational brochure to some bank customers and meeting with state and local consumer advocates and appropriate state officials to describe OCC’s role and processes for assisting bank customers and others who raise consumer protection concerns.

Agency Comments and Our Evaluation

We obtained written comments on a draft of this report from the Comptroller of the Currency; they are presented in appendix II. OCC generally concurred with the report and agreed with our three recommendations. OCC stated that a broader comparison of consumer protection activities, including those of state agencies, would have provided a clearer picture of the protections available to consumers, but it acknowledged that such a comparison was beyond the scope of our report. Regarding the recommendations, OCC said it will develop and implement a customer feedback mechanism to receive input and measure satisfaction of those who have used CAG services. OCC also agreed to revise the data that it publicly reports on timeliness to reflect complaints resolved within the 60-day goal separately from data reported on inquiries. Finally, OCC acknowledged that state officials may not be aware that it has some practices currently in place to inform state officials of the outcome of consumer complaints, and it therefore will undertake additional outreach to state agencies to make them aware of those options. Accordingly, OCC agreed with our recommendation that it develop and implement a comprehensive plan to inform bank customers, consumer advocates, state attorneys general, and other appropriate entities of its role in handling consumer inquiries or complaints about national banks.
OCC also provided technical comments that we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will provide copies of this report to the Comptroller of the Currency and interested congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are acknowledged in appendix III.

Scope and Methodology

To describe how the Office of the Comptroller of the Currency (OCC) handles consumer complaints and how its process compares with those of other bank regulators, we interviewed officials in OCC’s Customer Assistance Group (CAG), as well as their relevant counterparts at the Federal Reserve, Federal Deposit Insurance Corporation (FDIC), and Office of Thrift Supervision (OTS). We visited the CAG office in Houston, Texas, and observed its work, including a review of 18 closed cases to learn what information CAG collects from complaints. In addition, we reviewed CAG’s policies and procedures relating to consumer complaint processing. To describe how the four regulators resolve the complaints they handle, we requested complaint data for calendar years 2000 through 2004. Specifically, we obtained information about the source and resolution (outcomes) of complaints, the banking products or services involved, and the amount of time the regulators took to resolve them.
The data came from four different databases: (1) OCC’s REMEDY database, (2) the Federal Reserve’s Complaint Analysis Evaluation System and Reports (CAESAR), (3) FDIC’s Specialized Tracking and Reporting System (STARS), and (4) OTS’ Consumer Complaint System (CCS). We obtained data from OCC, the Federal Reserve, FDIC, and OTS in September 2005 that covered calendar years 2000 through 2004. For purposes of this report, we sought to use REMEDY, CAESAR, STARS, and CCS data to describe the number of cases each regulator handled, what products consumers complained about, how the regulators disposed of complaints, the number of complaints and inquiries the regulators forwarded to other federal agencies, and how long it took the regulators to resolve complaints. To assess the reliability of data from the four databases, we reviewed relevant documentation and interviewed agency officials. We also had the agencies produce the queries or data extracts they used to generate the data we requested. Also, we reviewed the related queries, data extracts, and the output for logical consistency. We determined these data to be sufficiently reliable for use in our report. To make general comparisons about the source and resolution of complaints between the four regulators, we created categories that include all of the codes each regulator used to describe the sources and resolutions of complaints. Officials of the Federal Reserve, FDIC, OTS and OCC agreed with our categorization of their respective source and resolution codes. The source categories were “consumer,” “federal,” “state,” and “other.” The resolution categories consisted of (1) regulators provide consumers additional information, (2) complaint is withdrawn or tabled due to litigation, (3) regulators determine that bank was not in error, and (4) regulators determine that bank was in error. Using the codes, we sorted each of the regulators’ complaints and tallied the number of complaints that fell into each category. 
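The code-mapping and tallying approach described above can be sketched in a few lines. The sketch below is purely illustrative: the regulators' actual database code values are not listed in this report, so the source codes and the mapping used here are hypothetical stand-ins for the report's four source categories and 60-day timeliness goal.

```python
# Illustrative sketch only: map each regulator's complaint codes to common
# categories, then tally complaints per category and per timeliness bucket.
# The code values ("CONS", "FED", ...) and record layout are hypothetical.
from collections import Counter

# Hypothetical mapping from one regulator's source codes to the report's
# four common source categories.
SOURCE_CATEGORIES = {
    "CONS": "consumer",
    "FED": "federal",
    "STATE": "state",
    "OTHER": "other",
}

def tally_by_category(records, code_field, category_map):
    """Count records per common category after translating each record's code."""
    counts = Counter()
    for record in records:
        counts[category_map.get(record[code_field], "other")] += 1
    return counts

def resolution_time_buckets(records, goal_days=60):
    """Frequency count of complaints resolved within vs. over the goal."""
    within = sum(1 for r in records if r["days_to_resolve"] <= goal_days)
    return {"within_goal": within, "over_goal": len(records) - within}

# Hypothetical complaint records.
complaints = [
    {"source_code": "CONS", "days_to_resolve": 45},
    {"source_code": "STATE", "days_to_resolve": 75},
    {"source_code": "CONS", "days_to_resolve": 30},
]

print(tally_by_category(complaints, "source_code", SOURCE_CATEGORIES))
print(resolution_time_buckets(complaints))  # {'within_goal': 2, 'over_goal': 1}
```

In practice, each of the four regulators would supply its own code-to-category mapping (agreed with agency officials, as described above), so the same tally functions can be reused across all four databases.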
We also sorted the complaints by codes indicating the type of bank product or service and confirmed for certain products, such as credit cards, that the codes represented the entire universe of complaints about the product. To describe how long it takes to resolve a complaint, we requested from each regulator a frequency count of how many complaints were resolved within and over 60 days. To describe how CAG’s efforts related to OCC’s supervision of national banks, we interviewed OCC officials and reviewed related documents about how consumer complaint data influence bank examinations and guidance. We interviewed CAG officials and examiners at six national banks concerning how CAG shares consumer complaint information and how information is used by bank examiners. In addition, we interviewed bank officials to learn what information CAG provides the banks and how banks use the information. To identify issues raised by consumer advocates and state officials, we conducted site visits in four states: California, Georgia, New York, and North Carolina. The site visits included interviews of state attorneys general, banking regulators, banking officials and local consumer advocate groups, as well as analysis of relevant documents. We also interviewed state officials in two additional states, Iowa and Idaho. We selected these locations, in part, based on their experience with state consumer protection laws. In addition, we interviewed representatives of national consumer groups, including the Center for Responsible Lending, Consumer Federation of America, National Community Reinvestment Coalition, National Consumer Law Center, and Association of Community Organizations for Reform Now. Also, we interviewed representatives of national trade groups for state officials in Washington, D.C., including the Conference of State Bank Supervisors and the National Association of Attorneys General. 
We conducted our work in California, Georgia, New York, North Carolina, Texas, and Washington, D.C., from October 2004 through December 2005 in accordance with generally accepted government auditing standards.

Comments from the Office of the Comptroller of the Currency

GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to those named above, Katie Harris (Assistant Director), Nancy Eibeck, Jamila Jones, Landis Lindsey, James McDermott, Kristeen McLain, Suen-Yi Meng, Marc Molino, David Pittman, Barbara Roesmann, Paul Thompson, and Mijo Vodopic made key contributions to this report.
In January 2004, the Office of the Comptroller of the Currency (OCC)--the federal regulator of national banks--issued rules concerning the extent to which federal law preempts state and local banking laws. Some state officials and consumer groups expressed concerns about a perceived loss of consumer protection. GAO identified (1) how OCC's complaint process compares with that of other federal bank regulators, (2) how complaint information informs OCC's supervision of national banks, and (3) issues that consumer advocates and state officials have raised about OCC's consumer protection efforts and OCC's responses to the issues. Overall, OCC's process for handling consumer complaints--carried out primarily by its Customer Assistance Group (CAG)--is similar to that of the other three federal bank regulators. However, unlike two of them, OCC lacks a mechanism to gather feedback from consumers it assists that could help it and the banks improve service to consumers. All of the regulators resolve the majority of complaints by providing or clarifying information for bank customers; less frequently, the regulators investigate and determine that a bank or customer erred. OCC annually handles more complaints than the other regulators, likely reflecting its position as the supervisor of banks with the majority of the nation's bank assets. OCC's complaint volume has not increased appreciably since it issued the preemption rules. OCC, in accordance with federal requirements for agencies to measure how they are fulfilling goals related to serving the public, measures the percentage of complaints it resolves within 60 days, a target other federal bank regulators also use. In reporting its performance, however, OCC includes data on its response to consumers' inquiries, which typically take less time, thereby overstating its performance on timeliness of responses to complaints. OCC's bank examiners use consumer complaint information collected by CAG to plan or adjust examinations. 
CAG staff and examiners communicate regularly regarding specific complaints or complaint volume and coordinate these efforts to provide consistent messages when discussing consumer-related issues with bank officials. In addition, complaint data inform OCC policy guidance to banks, often addressing potential compliance and safety and soundness risks banks face. CAG also provides feedback to banks, focusing on complaint trends and potential risks that may impact the banks' compliance with consumer protection laws or other issues.

Many of the state officials and consumer advocates GAO contacted during visits to four states, as well as some representatives of national organizations, nevertheless remain concerned about OCC's commitment and capacity to address consumer complaints--especially given their perception that the rules effectively ended protections provided by state laws and processes. Specific concerns these officials cited include an inability to obtain information on complaint outcomes, the fact that OCC handles complaints from a single location, and the adequacy of CAG's resources. OCC has taken actions addressing some of these concerns. The agency views itself as a neutral arbiter and continues to provide an avenue for consumers to file complaints related to national banks. OCC recently hired additional CAG staff and has begun working with a third-party vendor to expand telephone service from 7 to 12 hours a day. GAO noted that some officials and advocates contacted were unaware of OCC's process for handling consumer complaints and the assistance it can provide.
Background

The Federal Agriculture Improvement and Reform Act of 1996 (1996 Farm Bill) authorized USDA to issue guidelines for the regulation of the commercial transportation of horses and other equines for slaughter by persons regularly engaged in that activity within the United States. The statute gives USDA authority to regulate the commercial transportation of equines to slaughtering facilities, which the statute indicates include assembly points, feedlots, or stockyards. The authority to carry out this statute was delegated to USDA’s Animal and Plant Health Inspection Service (APHIS). Pursuant to this authority, APHIS issued a regulation, “Commercial Transportation of Equines to Slaughter” (transport regulation), in 2001. In 2001, APHIS also established the transport program. This program seeks to ensure that horses being shipped for slaughter are transported safely and humanely. In addition, USDA’s Food Safety Inspection Service (FSIS) carries out the Humane Methods of Slaughter Act and related regulations, which require the humane handling of livestock, including horses, in connection with slaughter. APHIS’s transport regulation establishes a number of requirements that owners/shippers (shippers) must meet for horses transported to slaughter. The regulation states that shippers must (1) provide horses with food, water, and rest for at least 6 hours prior to loading; (2) provide horses adequate floor space in whatever conveyance (e.g., a trailer) is being used; (3) segregate all stallions and other aggressive equines; and (4) ensure that trailers are free of sharp protrusions, are not double-decked, and have adequate ventilation. If a trip is longer than 28 hours, horses must be unloaded and provided at least 6 hours of food, water, and rest before being reloaded.
Horses cannot be shipped to slaughter unless they are accompanied by an “Owner/Shipper Certificate—Fitness to Travel to a Slaughter Facility” (owner/shipper certificate) certifying that the horses are fit for travel. The certificate must state that horses are over 6 months of age, are not blind in both eyes, can bear weight on all four limbs, are able to walk unassisted, and are not likely to foal (i.e., give birth) during transport. Figure 1 provides an example of this certificate. Shippers found to be in violation of the transport regulation can face penalties of $5,000 per horse, per violation. As of fall 2007, the last three horse slaughtering facilities in the United States were closed following unsuccessful challenges to state laws banning the practice. According to USDA data, those facilities, two in Texas and one in Illinois, slaughtered almost 105,000 horses in 2006—the last full year of operations—and exported more than 17,000 metric tons of horsemeat, which was valued at about $65 million at that time. Regarding the Texas facilities, in January 2007, the U.S. Court of Appeals for the Fifth Circuit ruled that a 1949 Texas law banning the sale or possession of horsemeat applied to them. They ceased operations in May 2007. Regarding the Illinois facility, the state enacted a law in May 2007 making it illegal to slaughter horses for human consumption. In September 2007, the U.S. Court of Appeals for the Seventh Circuit upheld this slaughter ban, and the Illinois facility ceased operations that month. Since fiscal year 2006, Congress also has taken annual actions in appropriations legislation that have effectively prevented the operation of horse slaughtering facilities in the United States by prohibiting USDA’s use of federal funds to (1) inspect horses being transported for slaughter and (2) inspect horses intended for human consumption at slaughtering facilities. 
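The owner/shipper certificate described above amounts to a short fitness checklist. As an illustration only (this is hypothetical code, not anything USDA or APHIS uses; every field and function name below is an assumption), the certificate's criteria can be encoded as a simple validation routine:

```python
from dataclasses import dataclass

@dataclass
class Horse:
    """Hypothetical record of the attributes the owner/shipper certificate covers."""
    age_months: int
    blind_in_both_eyes: bool
    bears_weight_on_all_four_limbs: bool
    walks_unassisted: bool
    likely_to_foal: bool

def fit_to_travel(horse: Horse) -> list[str]:
    """Return the certificate criteria the horse fails; an empty list means fit to travel."""
    failures = []
    if horse.age_months <= 6:
        failures.append("must be over 6 months of age")
    if horse.blind_in_both_eyes:
        failures.append("must not be blind in both eyes")
    if not horse.bears_weight_on_all_four_limbs:
        failures.append("must bear weight on all four limbs")
    if not horse.walks_unassisted:
        failures.append("must be able to walk unassisted")
    if horse.likely_to_foal:
        failures.append("must not be likely to foal during transport")
    return failures

# A healthy adult horse passes all five checks; a 4-month-old in-foal mare fails two.
print(fit_to_travel(Horse(12, False, True, True, False)))
print(fit_to_travel(Horse(4, False, True, True, True)))
```

The point of the sketch is simply that each certificate criterion is an independent yes/no check, which is why shippers self-certify it on a one-page form.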
The 1996 Farm Bill authorized the issuance of guidelines for the regulation of the commercial transportation of equines for slaughter as well as the conduct of any inspections considered necessary to determine compliance. The Federal Meat Inspection Act requires inspection of certain animals, including cattle, sheep, swine, goats, and horses, before they are slaughtered and processed into products for human food to ensure that meat and meat products from those animals are unadulterated, wholesome, and properly labeled. However, Congress prohibited USDA from using appropriated funds to pay for these inspections, effective 120 days after enactment of the fiscal year 2006 appropriations legislation on November 10, 2005. Following the prohibitions, the three domestic slaughtering facilities open at that time petitioned USDA to create a voluntary fee-for-service inspection program for horses prior to slaughter, and USDA created such a program in early 2006, allowing required inspections, and, thus, domestic slaughtering, to continue. The congressional prohibition on use of appropriated funds continued in fiscal year 2007, but, as previously discussed, the plants had already been shut down by state law that year. In fiscal year 2008, Congress renewed the prohibition on the use of appropriated funds for inspections on horses being transported to slaughter and at slaughtering facilities, and it added a new prohibition on the use of appropriated funds for implementation or enforcement of the fee-for-service program. These prohibitions were continued in fiscal years 2009 through 2011. These prohibitions notwithstanding, U.S. horses intended for slaughter are still allowed to be transported within the United States under the oversight of USDA’s transport program and exported to slaughtering facilities in Canada and Mexico. In September 2010, USDA’s Office of Inspector General (OIG) reported, in part, on the operations of the transport program. 
The OIG found that APHIS needs to improve its controls for ensuring that horses being shipped to foreign facilities for slaughter are treated humanely. For example, APHIS does not prevent shippers with a record of inhumanely transporting horses intended for slaughter from shipping other loads of horses, even if unpaid fines are pending for previous violations. The OIG also found deficiencies in how APHIS tags horses that have been inspected and approved for shipment to foreign slaughtering facilities. For example, the agency requires shippers to mark such horses with backtags, which are intended to allow APHIS to trace horses back to their owner and also to verify that horses have passed inspection by an accredited veterinarian. However, APHIS lacked an appropriate control to track individual horses by backtag number on approved shipping documents so that it could perform reconciliations, investigate violations, and initiate enforcement actions, as appropriate. In addition, the OIG noted that APHIS needs to obtain the resources necessary to adequately oversee the transport program and to finalize a proposed rule that would broaden the scope of the agency’s regulation of horses being shipped to foreign slaughtering facilities. In its official response to the OIG report, APHIS concurred with the OIG’s findings and recommendations related to the transport program, and APHIS proposed specific actions and time frames for implementing the recommendations. For example, APHIS agreed to work with USDA’s Office of General Counsel and complete by May 31, 2011, an evaluation of “the best options to revise regulations necessary that will establish an agencywide policy that those who have violated the humane handling regulations and failed to pay the associated penalties shall not receive endorsement of any subsequently requested shipping documents.”

U.S. Slaughter Horse Market Has Changed Since Domestic Slaughter Ceased in 2007

The U.S. slaughter horse market has changed since domestic slaughter for food ceased in 2007, particularly in terms of increased exports to Canada and Mexico and lower domestic sales and prices, especially for lower-value horses, according to our analysis of available trade data and horse auction sales data.

Horse Exports to Canada and Mexico Have Increased with the Cessation of Domestic Slaughter

The number of horses slaughtered in the United States decreased from 1990 (345,900 horses) through 2002 (42,312 horses), according to available data from USDA’s National Agricultural Statistics Service. At the same time, the reported number of slaughtering facilities dropped from at least 16 U.S. facilities that operated in the 1980s to 7 facilities in 1994 to as few as 2 in 2002. Beginning in 2003, however, the number of horses slaughtered began rising through 2006, the last full year of domestic slaughtering operations, when nearly 105,000 horses were slaughtered in the United States. According to USDA officials, this increase can be explained, in part, by the reopening of a horse slaughtering facility in DeKalb, Illinois, in 2004 that increased domestic slaughtering capacity. This facility had been closed for 2 years following a fire set by anti-slaughter arsonists. Because all domestic slaughtering facilities closed by September 2007, however, the number of horses being slaughtered in the United States dropped to zero by the end of that year. Figure 2 shows the changes in the number of horses slaughtered in the United States from 1990 through 2007. Before 2007, horses were slaughtered in domestic slaughtering facilities only when the horsemeat was destined for consumption by humans or zoo animals. Currently, pet food and other products, including glue, may still be obtained from the carcasses of horses that are hauled to rendering plants for disposal.
The production of these products is not covered by the requirements of the Federal Meat Inspection Act and is therefore not affected by the current ban on the use of appropriated funds for the ante-mortem inspection of horses destined for human consumption. According to a transport program official, USDA is not aware of any domestic facility slaughtering horses for any purpose, including for zoos, as of the end of 2010. USDA identified at least three establishments—in Colorado, Nebraska, and New Jersey—that import horsemeat for repackaging and distribution to purchasers in the United States who feed the meat to animals at zoos and circuses. With the cessation of domestic slaughter, U.S. exports of horses intended for slaughter increased to Canada and Mexico, the current locations of all North American horse slaughtering facilities. As of the end of 2010, Canada had four such facilities, and Mexico three, that were the principal destinations of U.S. horses exported for slaughter. According to USDA officials, this increase in exports began, in part, because shippers were anticipating the closure of the three horse slaughtering facilities in the United States at that time. From 2006 through 2010, Canadian and Mexican imports increased by 148 percent and 660 percent, respectively, with the total number of horses imported from the United States for slaughter increasing from about 33,000 in 2006 to about 138,000 in 2010. In addition, the total number of horses exported for all purposes, including breeding and showing, also increased from 2006 through 2010, as shown in figure 3. According to USDA officials, some horses exported for purposes other than slaughter were likely “feeder” horses that were ultimately sent to slaughtering facilities at a later time. For example, feeder horses may be sent to a Canadian or Mexican feedlot for fattening before subsequently being sent to a slaughtering facility in that country.
The extent to which horses are exported as feeder horses is unknown, according to USDA officials. The total number of U.S. horses sent to slaughter in 2006, the last full year of domestic slaughter, comprised horses slaughtered domestically (i.e., 104,899, as shown in fig. 2) and those sent for slaughter in Canada or Mexico (i.e., 32,789, as shown in fig. 3)—for a total of 137,688 horses. The 137,984 U.S. horses that were sent to slaughter in Canada or Mexico in 2010 is thus approximately equal to the total number of horses sent to slaughter in 2006. Additional certification requirements may affect Canadian and Mexican exports of horsemeat to Europe and, in turn, may affect the future export of horses intended for slaughter from the United States to these countries. In 2010, the European Union began prohibiting the importation of horsemeat from horses treated with certain drugs and requiring countries to document withdrawal periods for horses treated with other drugs before meat from such horses could be imported to the European Union. Those regulations precipitated similar regulations in Canada and Mexico. For example, Canadian requirements went into effect on July 31, 2010, banning specific medications, such as phenylbutazone—the most common anti-inflammatory medication given to horses—and requiring a 180-day withdrawal period for other medications, such as fentanyl, an analgesic. Also, since November 30, 2009, Mexico has required an affidavit by transporters that horses have been free from certain medications for 180 days prior to shipment. Furthermore, effective July 31, 2013, the European Union will require lifetime medication records for all horses slaughtered in non-European Union countries before accepting imports of horsemeat from those countries.
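The arithmetic behind the 2006 and 2010 totals cited above is straightforward to verify; a quick sketch using only the figures the report cites:

```python
# Figures cited in the report (from figs. 2 and 3 and USDA export data).
domestic_2006 = 104_899   # horses slaughtered in the United States in 2006
exported_2006 = 32_789    # horses exported to Canada/Mexico for slaughter in 2006
exported_2010 = 137_984   # horses exported to Canada/Mexico for slaughter in 2010

total_2006 = domestic_2006 + exported_2006
print(total_2006)  # 137688, the combined 2006 total

# The 2010 export figure is within about 0.2 percent of the 2006 combined total,
# i.e., exports alone now roughly match the former domestic-plus-export volume.
print(abs(exported_2010 - total_2006) / total_2006)
```

In other words, the export market to Canada and Mexico absorbed roughly the entire volume that domestic facilities and exports together handled before the 2007 closures.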
According to APHIS and horse industry sources, these requirements could result in shippers certifying that their horses are free of medication residues without having first-hand knowledge or documentation of the horses’ status for the previous 180 days.

Horse Sales and Prices Have Declined Since 2007, Especially for Lower-Valued Horses

With regard to sales, many of the State Veterinarians said that fewer horse sales have occurred and fewer auctions have operated within their states since 2007, in part, because of lower horse prices and sale commissions since the cessation of domestic slaughter. As a result, they said, horse owners have fewer options for getting rid of horses they no longer want. There also has been a reduction in the number of commercial shippers doing business since the cessation of slaughter. In reviewing USDA documentation, we found that more than 110 shippers operated from 2005 through 2006—the 2 years prior to the cessation of domestic slaughter in 2007—and fewer than 50 shippers operated from 2008 through 2009. Some in the horse industry, as well as the State Veterinarians, generally attributed this decrease to the closing of horse auctions around the country, reflecting a smaller market and the lower profit margins resulting from the increased costs of transporting horses intended for slaughter to Canada and Mexico. Horse industry representatives also stated that the closing of domestic slaughtering facilities has dramatically affected the prices of horses. National data on horse prices do not exist, but data from individual auctions are available. For example, the Billings, Montana, horse auction, one of the nation’s largest, which also sells horses purchased for slaughter, reported a large increase in the percentage of lower-priced horses sold—the type of horse that typically ends up at slaughter—and a general decrease in sale prices.
In May 2005, approximately 25 percent of “loose” horses—less expensive horses that are run through the auction ring without a rider or saddle—sold for less than $200 at that auction, whereas in May 2010, about 50 percent of loose horses sold for less than that amount. The economic downturn in the United States that started in December 2007 also likely affected horse prices, according to the academic experts and industry representatives we consulted. Since many U.S. horses are used for recreational purposes, they are generally thought to be luxury goods, and their ownership is sensitive to upturns and downturns in the general economy. Furthermore, some horse sellers could no longer afford to keep their horses, and potential buyers also were not able to offer as much to buy horses or were not in the market to purchase horses at all, according to some industry observers. In particular, a considerable number of horse owners are from lower-to-moderate income households and are less able to withstand the effects of a recession, according to academic experts. For example, one study estimated that up to 45 percent of horse owners have an annual household income of between $25,000 and $75,000. According to several State Veterinarians, those owners are more likely to have problems affording the care of their horses during an economic downturn. To estimate the impact of the cessation of domestic slaughter on horse prices, we collected price data on more than 12,000 sale transactions from spring 2004 through spring 2010 from three large horse auctions located in the western, southern, and eastern United States. Our analysis of these data controlled for the economic downturn and other factors that are auction- and horse-specific, such as a horse’s breed/type, age, and gender, which may also affect prices. Horse sale prices ranged from a minimum of $4 to a maximum of $48,500, with most of these sales clustered at the lower end of the price range. 
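The kind of controlled comparison described above can be illustrated with a minimal hedonic-regression sketch on simulated data. This is not GAO's actual model or its auction transactions; the variable names, coefficients, and sample are all assumptions chosen only to show how a cessation effect can be estimated net of other factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # simulated sale transactions

# Simulated stand-ins for the kinds of controls the report describes.
post_2007 = rng.integers(0, 2, n)  # 1 if the sale occurred after the cessation
recession = rng.integers(0, 2, n)  # 1 if the sale occurred during the downturn
age = rng.uniform(1, 20, n)        # horse age in years
draft = rng.integers(0, 2, n)      # a breed/type indicator

# Simulated log prices: the cessation lowers prices, as the report found.
log_price = (6.0 - 0.30 * post_2007 - 0.20 * recession
             - 0.02 * age + 0.5 * draft + rng.normal(0, 0.5, n))

# Hedonic OLS: regress log price on the cessation dummy plus controls, so the
# cessation coefficient is estimated net of the downturn and horse attributes.
X = np.column_stack([np.ones(n), post_2007, recession, age, draft])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"estimated cessation effect on log price: {beta[1]:.2f}")  # near the simulated -0.30
```

GAO's own analysis additionally estimated effects by price category (20th through 80th percentiles), which a quantile regression would capture; the sketch shows only the average-effect version of the idea.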
Figure 4 shows the distribution of these sales prices, including the median and average price per head. Our analysis also shows a statistically significant reduction in average sale price after the cessation of slaughter in 2007 for all but the highest price category, as shown in figure 5. For example, the average sale price for horses in the lowest price category (20th percentile) dropped by about $110 per head (from $433 to $323), and the average price for the highest price category (80th percentile) dropped by about $140 per head (from $2,380 to $2,241), although the effect for that category was not statistically significant. The other variables that we considered included season of year of the auction, auction location, and percentage of “no sales” (horses that did not receive a bid acceptable to the seller) for each auction. These estimates suggest that the closing of domestic horse slaughtering facilities had a significant and negative impact on horse prices at the low-to-mid price levels at these auctions, while relatively higher-priced horses appear not to have lost their value due to the cessation of slaughter. Appendix II provides further details on the results of our analysis.

Horse Welfare Has Reportedly Declined, Although the Extent Is Unknown, Straining the Resources of State and Local Governments, Tribes, and Animal Welfare Organizations

Horse welfare in the United States has generally declined since 2007, as evidenced by a reported increase in horse abandonments and an increase in investigations for horse abuse and neglect. The extent of the decline is unknown due to a lack of comprehensive, national data, but state officials attributed the decline to many factors, primarily the cessation of domestic slaughter and the U.S. economic downturn. Abandoned, abused, and neglected horses present challenges for state and local governments, tribes, and animal welfare organizations.
In response, some states and tribes have taken several actions to address these challenges and the demand on their resources.

Cases of Horse Abandonments, Abuse, and Neglect Have Reportedly Increased Since 2007

In interviewing the 17 State Veterinarians, we asked whether the states had data for cases of horse abandonments, abuse, and neglect. Most veterinarians from these states, including some with the largest horse populations—California, Florida, and Texas—said they do not routinely collect such data because, in part, their resources are limited and jurisdiction of animal welfare is usually a local (e.g., county) responsibility. Nearly all the State Veterinarians, however, reported anecdotes indicating that cases of abandonments and abuse or neglect have increased in recent years. For example, several State Veterinarians, including those from California, Florida, and Texas, reported an increase in horses abandoned on private or state park land since 2007, although specific data quantifying those abandonments were not available. In addition, states that do collect some data reported increases in abandonments or investigations of abuse and neglect since the cessation of domestic slaughter. For example, data from Colorado showed a 50-percent increase in investigations for abuse and neglect from 1,067 in 2005 to 1,588 in 2009. Similarly, data from Indiana indicated that horse abuse and neglect investigations more than doubled from 20 in 2006 to 55 in 2009. In addition, organizations representing localities, especially counties and sheriffs, have reported an increasing problem. For example, the Montana Association of Counties reported that the number of horses being abandoned by their owners has rapidly increased since horse slaughter for human consumption was halted in the United States, but the association did not have specific data.
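The state figures above imply the growth rates the report cites; a quick arithmetic check:

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Colorado abuse/neglect investigations, 2005 -> 2009 (figures cited above).
print(round(pct_change(1067, 1588)))  # 49, consistent with the ~50-percent increase cited

# Indiana investigations, 2006 -> 2009: 20 -> 55, i.e., more than doubled.
print(55 / 20)  # 2.75
```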
In addition, the National Association of Counties reported that the increasing abandonment problem is not exclusive to Montana or the West but is happening nationwide.

State Veterinarians Attributed Decline in Horse Welfare Primarily to Cessation of Slaughter and Economic Downturn, but Representatives of Animal Welfare Organizations Question Cessation’s Impact

We also asked the 17 State Veterinarians whether horse welfare, in general, had improved, declined, or remained about the same in their states over the last 5 years. Without exception, these officials reported that horse welfare had generally declined, as evidenced by a reported increase in cases of horse abandonment and neglect. They most frequently cited two factors that contributed to the decline in horse welfare—the cessation of domestic slaughter in 2007 and the economic downturn—although they generally were careful not to pin the decline on any single factor. Other factors that they generally cited include poor weather conditions (e.g., drought in western states); the cost of horse disposal methods (e.g., veterinarian-assisted euthanasia); the increasing costs of feeding and caring for horses; and the lack of auction markets to sell horses. Among the factors affecting horse owners, the State Veterinarians said a horse owner’s decision to abandon a horse generally related to (1) cessation of domestic slaughter, (2) poor economic conditions, and (3) low horse prices or lack of sale opportunities. They also said the factors most often related to a horse owner’s neglect of a horse were (1) poor economic conditions, (2) the cost of horse care and maintenance, and (3) lower horse prices. Several State Veterinarians pointed out that, in their professional experience, very few owners directly physically abuse their horses, which would be a crime. More common, however, were owners who neglected the feeding and proper care—such as providing farrier services (i.e., hoof care) and vaccinations—of their horses.
Thus, based on the information these officials provided, the primary drivers for the increase in abandonment and neglect cases are the cessation of domestic slaughter, causing lower horse prices and difficulty in selling horses, and the economic downturn, affecting horse owners’ ability to properly care for their animals. As discussed, our analysis also showed that the cessation of slaughter and the economic downturn generally reduced horse prices at our selected auctions; in particular, the cessation affected prices for the low-to-mid range priced horses that are more frequently abandoned and neglected. Furthermore, regarding neglect, some State Veterinarians, noting that people are more inclined to take care of that which has value, said that the drop in horse prices affected some owners’ interest in caring for their animals, especially if their financial situation had declined. With regard to the entities most affected by the increase in abandoned and neglected horses, the State Veterinarians generally said that counties, including sheriffs, bear the responsibility for investigating potential cases affecting horse welfare. Many State Veterinarians, particularly from western states, indicated that their offices did not have the resources to support the counties beyond providing expert veterinary advice regarding conditions of abandoned and neglected horses, such as opining on a horse’s nutritional status (known as “body scoring”).

State and Local Governments, Tribes, and Animal Welfare Organizations Are Affected by Neglected and Abandoned Horses, as Is the Federal Government

State and local governments, tribes, and animal welfare organizations, especially horse rescues, are facing growing pressures to care for abandoned and neglected horses at a time of economic recession and tight budgets.
According to the State Veterinarians, counties and animal welfare organizations bear the costs of collecting and caring for abandoned horses, while county governments generally bear the costs of investigating reports of neglect. These officials said horse rescue operations in their states are at, or near, maximum capacity, with some taking on more horses than they can properly care for since the cessation of domestic slaughter. One State Veterinarian added that his office is reluctant to pressure horse rescues in his state to take on additional animals because of this problem, even though alternatives are lacking. Some State Veterinarians also described situations in which counties and sheriff departments were reluctant to investigate reports of abandoned or neglected horses because these jurisdictions lacked resources to deal with the consequences of finding such animals. In some cases, these officials said local jurisdictions may lack the resources even to initiate such investigations, let alone to take possession of and care for these animals. And in cases where an investigation results in horse seizures, local jurisdictions may have to appeal for the public’s help in caring for the animals. For example, the Montana State Veterinarian and his staff described a recent situation in their state involving the seizure of hundreds of neglected horses, many of which had low body scores and would not have survived the winter without intervention. In January 2011, these horses were seized from a ranch owner near Billings, Montana, who was no longer able to afford their care. Because of the strain placed on state and county resources to care for so many animals, these jurisdictions had to seek private donations of hay to feed these horses. Figure 7 shows some of the horses seized in this case. Tribes also reported increases in abandonments on their land, exacerbating the overpopulation of horse herds on tribal lands.
According to 2009 data from the Northwest Tribal Horse Coalition (now the National Tribal Horse Coalition), the number of horses on its tribal lands exceeded 30,000 horses. When we met with representatives of tribes in the western United States, they showed us significant degradation of their lands as a result of the over-grazing by large populations of wild horses, as shown in figure 8. They explained that the increase in abandoned horses on their lands has compounded the challenge of restoring native and religiously significant species of plants to their land—an effort often paid for, in part, by the federal government. Moreover, domesticated horses abandoned on public lands generally have poor survival prospects, according to officials from the Department of the Interior’s Bureau of Land Management (BLM). These horses are unfamiliar with which wild plants are edible and are likely to be shunned or hurt by wild horses. These abandoned horses may also introduce diseases to wild herds. The effects of the increasing number of abandoned or neglected horses have been felt by local animal welfare organizations as well—in particular, the horse rescues and local societies for the prevention of cruelty to animals that work with local officials to place such horses, according to the State Veterinarians. The total number of rescues and their capacities are unknown because there is no national registry or association for horse rescues. However, both the National Association of Counties and the Unwanted Horse Coalition estimated that the nationwide capacity of rescue facilities is about 6,000 horses. They also reported that the vast majority of these facilities are already full. Some State Veterinarians told us that some rescue organizations have taken on more horses than they can properly care for, especially in an economic environment in which donations have declined; as a result, horses at some of these organizations’ facilities have been seized.
For example, it has been reported that horse rescues in California, Florida, New York, and West Virginia have recently had their animals seized by local authorities because they were not properly caring for them, and others in New Hampshire and Pennsylvania closed due to financial difficulties. In addition, the increase in unwanted domesticated horses available for sale or being abandoned on public lands is affecting the federal government’s ability to manage wild horse and burro populations. Most of these wild animals are found on lands managed by BLM and USDA’s Forest Service in the western United States. From 1971 through 2007, BLM removed over 267,000 wild horses and burros from these lands, and during the same period, approximately 235,700 of these animals were adopted by the public under a BLM program that promotes these adoptions. As we reported in 2008, BLM has, however, experienced a steady decline in adoptions in recent years, which agency officials attributed, in part, to the large number of domesticated horses flooding the market. More recently, BLM officials said that annual adoptions had fallen from about 8,000 in 2005 to about 3,000 in 2010. In an October 2010 Web message, the BLM Director estimated that the number of horses and burros on lands the agency manages exceeds by about 12,000 the number that would allow these lands to remain sustainable for other uses and species. According to BLM officials, in addition to natural reproduction in wild horse and burro herds, the increasing number of domesticated horses being abandoned on public lands has contributed to this overpopulation problem. Other officials, including those from animal welfare organizations, questioned the relevance of the cessation of domestic slaughter to the rise in abandoned and neglected horses, which they attributed more to the economic downturn. 
For example, in March 2010, Animal Welfare Institute representatives said that since a 1998 California ban on dealing in horses intended for slaughter, their organization has offered a $1,000 reward for notification of abandoned horses but has never received a tip. In addition, the Humane Society of the United States and the United Animal Nations reported that there has been no documented rise in abuse and neglect cases in California since the 1998 ban. United Animal Nations also reported there was no documented rise in abuse and neglect cases in Illinois following the 2-year closure of the horse slaughtering facility in that state in 2002. Furthermore, Humane Society of the United States officials said that owners who abandon horses are going to abandon them regardless of having the option for domestic slaughter, adding that there were instances of horse abandonment near domestic horse slaughtering facilities before they closed. These officials acknowledged that there are no good data on horse abandonments but noted an increase in abandonments of all kinds of domesticated animals as the economy worsened.

States and Tribes Have Taken a Variety of Actions Related to Horse Welfare and Slaughter

Some states took actions related to horse welfare and slaughter even before the cessation of domestic slaughter in 2007. For example, in 1998, California made it illegal to export horses for the purpose of having them slaughtered for human consumption outside the state. Specifically, California law makes it unlawful for any person to possess; to import into or export from the state; or to sell, buy, give away, hold, or accept any horse with the intent of killing or having another kill that horse, if that person knows or should have known that any part of that horse will be used for human consumption.
Several state officials told us that this ban is difficult to enforce because it can be hard to show that an owner knew or should have known that a buyer intended the animal for slaughter. For example, if an owner transports a horse to an auction in another state (e.g., Montana or Texas), it may be difficult to prove that the owner specifically intended to sell the horse for slaughter or should have known that the buyer of the horse intended to sell the horse for slaughter. In addition, since 2007, states and tribes have taken a variety of legislative or other actions related to horse welfare or slaughter. For example, in 2009 Montana passed a law that allows horse owners to surrender horses that they cannot afford to maintain to the state at a licensed livestock market without being charged with animal cruelty. Also, Colorado authorized the inclusion of a checkbox on state income tax return forms allowing taxpayers to make a contribution to the Colorado Unwanted Horse Alliance. In authorizing the program, the Colorado legislature found that the number of unwanted horses is increasing; most horse rescue facilities are operating at capacity and have limited ability to care for additional horses; and incidents of horse abuse and neglect are rising. In addition, Kentucky passed a law in the spring of 2010 creating the Kentucky Equine Health and Welfare Council and charged it with developing regional centers of care for unwanted, abused, neglected, or confiscated equines; creating a system of voluntary certification of equine rescue and retirement operations; and suggesting statutory changes affecting equine health, welfare, abuse, and neglect issues. Also, in 2009, the National Congress of American Indians and the Northwest Tribal Horse Coalition passed resolutions supporting domestic slaughter to manage overpopulated horse herds.
A number of the 17 states that we examined have also enacted laws related to horse welfare and slaughter since the cessation of domestic slaughter. For example:

- Arkansas, Oklahoma, Utah, and Wyoming passed resolutions urging Congress to facilitate the resumption of horse slaughtering in the United States and oppose federal legislation that would ban domestic slaughter.
- North Dakota and South Dakota passed similar resolutions urging Congress to reinstate and fund federal inspection programs for horse slaughter and processing.
- Montana passed a law that would make it easier to establish a horse slaughtering facility by making it harder for those opposing such a plant to get an injunction against it while challenging various permits that the plant would need to operate. In his 2009 testimony in support of the bill, the chair of Montana’s Farm Bureau cited rising numbers of unwanted horses and associated costs.
- Wyoming amended its existing law to provide that strays, livestock, and feral livestock, including horses, may be sent to slaughter as an alternative to auction or destruction. The legislative changes also provided that the state could enter into agreements with meat processing plants whereby meat from livestock disposed of by slaughter could be sold to state institutions or nonprofits at cost or to for-profit entities at market rate.

Several states are seeking to reopen domestic horse slaughter facilities under a provision of the Food, Conservation, and Energy Act of 2008, which authorized USDA to establish a new voluntary cooperative program under which small state-inspected establishments would be eligible to ship meat and poultry products in interstate commerce. USDA recently finalized a rule to implement the program, but USDA officials said that the rule does not include horsemeat, because recent appropriations legislation has prohibited the use of federal funds for inspecting horses prior to slaughter.
And although, under the proposed program, the inspections would be done by state officials, federal law requires USDA to reimburse the state for at least 60 percent of the associated costs. However, as noted by USDA officials, the prohibition in appropriations legislation against using federal funds for inspecting horses at slaughter would preclude these reimbursements. USDA officials said the same issue would preclude tribal slaughtering facilities from shipping horsemeat in interstate or international commerce as well.

USDA’s Oversight of the Welfare of Horses Transported for Slaughter Is Complicated by Three Challenges

USDA faces three challenges in its oversight of the welfare of horses during their transport for slaughter. First, APHIS faces several specific management challenges in implementing the transport program. Second, legislative prohibitions on using federal funds for inspecting horses prior to slaughter impede USDA’s ability to ensure horse welfare. Third, the cessation of domestic slaughter has diminished APHIS’s effectiveness in overseeing the transport and welfare of horses intended for slaughter.

Management Challenges Affect APHIS’s Implementation of the Slaughter Horse Transport Program

Several management challenges are affecting APHIS’s implementation of the transport program. These challenges include (1) delays in issuing a final rule to give the agency greater oversight over horses transported for slaughter to protect their welfare; (2) limited staff and funding that complicate the agency’s ability to ensure the completion, return, and evaluation of owner/shipper certificates; and (3) a lack of current, formal agreements with Canadian, Mexican, and state officials whose cooperation is needed for program implementation.
APHIS Has Not Issued a Final Rule to Better Protect Horses Transported for Slaughter

APHIS’s transport regulation sets minimum care standards to protect horse welfare, but it applies only when the horses are being moved directly to slaughtering facilities, that is, once shippers have designated the horses as “for slaughter” on an owner/shipper certificate. Consequently, the regulation does not apply to horses that are moved first to an assembly point, feedlot, or stockyard before going to slaughter. For example, a horse’s journey to slaughter may have covered several states, from point-of-purchase at an auction to an assembly point, such as a farm; from the assembly point to a feedlot or stockyard; and from the feedlot or stockyard to a point near a slaughtering facility or a border crossing where the slaughter designation was first made. In reviewing a generalizable sample of nearly 400 owner/shipper certificates from before and after cessation of domestic slaughter in 2007, we found that shippers usually designated horses as “for slaughter” on the final leg of their journey to a slaughtering facility, as allowed under the current regulation. For example, prior to cessation in 2007, shippers sometimes designated horses near the U.S. facility in which they would be slaughtered. Specifically, we found cases in which horses shipped to the slaughtering facility in DeKalb, Illinois, were designated for slaughter at a point just a few miles from the plant. Similarly, since cessation in 2007, shippers sometimes made this designation near border crossings with Canada or Mexico. For example, since cessation, we found shipments of horses being designated for slaughter in Shelby, Montana, about 36 miles from the border crossing into Canada and in El Paso, Texas, about 10 miles from where they cross the border into Mexico.
According to APHIS officials, in virtually all of these cases, the horses likely had already traveled long distances within the United States before reaching these designation points and, lacking a “for slaughter” designation, were without the protection of the APHIS transport regulation to ensure their humane treatment. For example, some of the horses may have been transported in double-deck trailers intended for smaller livestock animals; as discussed, the APHIS transport regulation prohibits the use of this type of trailer after the designation for slaughter is made. To address this issue, APHIS proposed, in November 2007, to amend the existing transport regulation to extend APHIS’s oversight of horses transported for slaughter to more of the transportation chain that these horses pass through. The proposed rule defines equine for slaughter as an equine transported to intermediate assembly points, feedlots, and stockyards, as well as directly to slaughtering facilities. The current regulation does not define equine for slaughter and only applies to those equines being transported directly to slaughtering facilities. APHIS has experienced repeated delays in issuing a final rule that would extend APHIS’s oversight of horses being transported for slaughter. According to USDA officials, the delay is the result of a number of factors, including competing priorities and the need to address substantive public comments on the proposed rule that resulted in reclassifying it as significant under Executive Order 12866. As of June 2011, USDA officials said they anticipate issuing the final rule by the end of calendar year 2011. APHIS officials noted that this change to the transport regulation could help address another issue as well. Specifically, the regulation currently does not apply to shippers transporting horses to Canada as feeder horses. As discussed, some U.S.
horses exported for purposes other than slaughter (i.e., not designated for slaughter on an owner/shipper certificate) may be feeder horses that are ultimately sent to slaughtering facilities at a later time. According to APHIS officials, the number of feeder horses has likely grown with the increase in total horse exports to Canada since 2007. Because feeder horses are not designated for slaughter before crossing the border, they are not covered by the transport regulation at any point in their journey. If the transport regulation is amended, however, as APHIS has proposed, the designation “equine for slaughter” would apply to these animals during the leg of their trip from the U.S. auction where they were purchased to the border crossing, including any intermediate stops within the United States at assembly points, feedlots, and stockyards. Such a designation would place those animals under the protection afforded by APHIS’s oversight. APHIS officials also noted that the provision of the 1996 Farm Bill authorizing the transport regulation is the only federal statute that regulates the transportation of horses, and they commented on the irony that horses designated for slaughter are provided greater protection, under current federal law and the transport regulation, than other horses in commercial transit.

Limited Staff and Funding Complicate Program Implementation

Over the past 6 fiscal years, the transport program’s annual funding has varied, generally declining from a high of over $306,000 in fiscal year 2005 to about $204,000 in fiscal year 2010. This funding primarily provides for the salaries and expenses of two staff, one of whom is the national compliance officer, who inspects conveyances and owner/shipper certificates for compliance with the transport regulation, with the remainder going to travel costs.
The two program officials stated that the program’s limited funding, particularly for travel, has significantly curtailed their ability to provide coverage at border crossings and to work with shippers and inspectors in foreign slaughtering facilities to ensure compliance with the transport regulation. For example, with one compliance officer, the program cannot adequately cover the numerous border crossings on the Canadian and Mexican borders through which shipments of horses intended for slaughter move. In April 2011, transport program officials said they recently had begun training inspectors in APHIS’s Western region and Texas area office to assist the program at southern border crossings by, in part, collecting owner/shipper certificates and returning them to APHIS headquarters. However, these officials said they did not have a written plan or other document that describes this initiative, including the number of staff to be involved, their anticipated duties to support the transport program, and the time frames for implementing the initiative. Hence, while this appears to be a positive step, we were unable to evaluate the potential usefulness of this initiative. Figure 9 provides information on the transport program’s funding for fiscal years 2005 through 2010. According to program officials, the reduction in funds in 2009 was the result of a cut in travel funds that were allocated to other APHIS programs. The program officials added that the seesaw nature of the program’s funding, as well as the fact that the program has just two staff, has affected their ability to ensure compliance with, and enforce, the transport regulation and contributed to year-to-year variations in the number of violations found. In addition, because of limited staff and funding, APHIS stopped entering information from owner/shipper certificates into an automated database in 2005.
Agency officials said that the database was used in the early years of the transport program to document demographic information, such as the identity of shippers and origin of horses they shipped. However, after several years, this information was well established, and there was no need to continue to collect data for this purpose. They also said that the database did not provide beneficial information for protecting horse welfare that justified the cost of maintaining the database. Nonetheless, automating the certificate data would make it easier for the agency to analyze them to, for example, identify potential problem areas for management attention and possible enforcement action, such as patterns of violations or other problems associated with particular shippers, border crossings, or slaughtering facilities. It would also allow the agency to easily identify buying trends and common shipping routes. Furthermore, automating data from the certificates on the number of horses in each shipment could potentially provide USDA a more accurate count of the number of U.S. horses exported for slaughter. At present, to estimate the number of horses exported for this purpose, USDA’s Foreign Agricultural Service pieces together Canadian and Mexican data on horses imported for slaughter and makes certain extrapolations to arrive at an approximate number since no official U.S. trade data exist on horses exported for slaughter. Federal internal control standards call for agencies to obtain, maintain, and use relevant, reliable, and timely information for program oversight and decision making, as well as for measuring progress toward meeting agency performance goals. Furthermore, the Office of Management and Budget’s implementing guidance directs agency managers to take timely and effective action to correct internal control deficiencies. 
APHIS’s lack of a reliable means of collecting, tracking, and analyzing owner/shipper certificates constitutes an internal control weakness and leaves the agency without key information and an important management tool for enforcement of the transport regulation.

Uneven Cooperation with Canadian, Mexican, and State Officials Impedes Oversight

With the cessation of domestic slaughter and the transport program’s limited staff and funding, APHIS relies on the cooperation of officials from Canada and Mexico working at border crossings and in their countries’ slaughtering facilities to help the agency implement the transport regulation. APHIS has sought similar cooperation from officials working for the Texas Department of Agriculture regarding horses exported through Texas border crossings. The effectiveness of these cooperative arrangements has been uneven, in part because APHIS lacks current, formal written agreements with its foreign and state counterparts to better define the parameters of this cooperation and ensure continuity over time as the personnel involved change. We have previously reported that by using informal coordination mechanisms, agencies may rely on relationships with individual officials to ensure effective collaboration and that these informal relationships could end once personnel move to their next assignments. Regarding Canada, representatives of APHIS and the Canadian Food Inspection Agency (CFIA) signed a letter of intent in October 2002 outlining their shared responsibilities for enforcement of the transport regulation. Each country pledged to help the other enforce its regulations. For example, to assist APHIS, CFIA agreed to ensure, either at points of entry or slaughtering facilities, the following regarding shipments of U.S.
horses to Canada for slaughter:

- health certificates for the horses are endorsed by USDA-accredited veterinarians within the 30 days prior to export;
- horses are clinically healthy, fit for travel, and transported humanely to the points of entry;
- owner/shipper certificates are properly completed, including the date, time, and location the horses were loaded;
- horses are listed correctly on the owner/shipper certificate, so that, for example, the backtags on the horses match the backtags listed on the certificate;
- an ante-mortem inspection of each horse is performed;
- the date and time the shipment arrived at the facility is noted on the owner/shipper certificate; and
- copies of all relevant documents (e.g., owner/shipper certificates) are returned to APHIS each month.

APHIS officials said they rely on owner/shipper certificates, properly completed by shippers and CFIA officials, as appropriate, and returned by CFIA to APHIS for compliance and enforcement purposes. For example, APHIS needs information on the timing of the loading and off-loading of a shipment of horses to assess whether a shipper complied with regulatory requirements related to the amount of time a shipment is in transit. Figure 10 highlights sections of the owner/shipper certificate that are to be completed by shippers or Canadian or Mexican officials. In reviewing a generalizable sample of certificates returned by CFIA from 2005 through 2009, however, we found instances in which certificates were not properly completed by either the shipper or CFIA officials. Based on the results of our review, we estimate that about 52 percent of certificates were missing key information that should have been filled in by either the shipper (e.g., loading date and time, or certification that the horses were fit for transport) or CFIA (e.g., arrival date and time, or slaughtering facility identification).
In addition, we estimate that about 29 percent of certificates returned to APHIS were missing some or all of the information to be provided by CFIA officials at the slaughtering facility. Moreover, in our review of these certificates we noted that the extent to which they were returned incomplete from CFIA to APHIS increased over time. For example, from 2005 through 2006, the 2 years prior to the cessation of domestic slaughter in the United States, we estimate that about 48 percent of certificates were missing key information that should have been completed by either the shippers or CFIA officials. However, from 2008 through 2009, the 2 years after the cessation, we estimate that about 60 percent of certificates were missing key information. This increase suggests that the growth in U.S. horse exports for slaughter since the cessation has been accompanied by an increase in problems with owner/shipper certificates needed by APHIS for enforcement purposes. However, APHIS and CFIA have not revisited this agreement since 2002 to reflect changes since the cessation of slaughter in 2007, when the volume of horses exported to Canada increased significantly and APHIS became more dependent upon cooperation from Canadian border officials and CFIA inspectors in slaughtering facilities. Regarding Mexico, APHIS lacks a written agreement with its relevant counterpart, Mexico’s Secretaría de Agricultura, Ganadería, Desarrollo Rural, Pesca y Alimentación (SAGARPA), to promote cross-border cooperation. APHIS officials said that they drafted an agreement in 2002, similar to the one with CFIA, and that APHIS had contacts with SAGARPA about finalizing it during 2002 and 2003. However, according to APHIS officials, the Mexican agency did not provide a response consenting to the agreement, and APHIS has not renewed the effort to get an agreement since 2003. 
Thus, these officials said, enforcing the transport regulation along the southern border is more difficult than along the northern border with Canada. Moreover, while shippers on the northern border can drive their conveyances directly into Canada, U.S. shippers generally are not insured to travel into Mexico. As a result, shippers unload their horses before crossing the border, where SAGARPA officials inspect the horses. The horses are subsequently loaded onto a Mexican conveyance for transport to a Mexican slaughtering facility. In the absence of a formal, written agreement between APHIS and SAGARPA or the Texas Department of Agriculture, APHIS does not receive official cooperation from Mexican or Texas officials. As a consequence, owner/shipper certificates may not be correctly filled out by the shippers and collected, completed, and returned to APHIS from either the border crossing or the Mexican slaughtering facility with information about shipment dates and times and horse conditions. In some cases, APHIS had an informal understanding with SAGARPA officials at a border crossing that they would collect and return the certificates to APHIS. In other cases, at Texas border crossings, employees of the Texas Department of Agriculture informally cooperated with APHIS by collecting and returning the certificates to the agency and alerting it to possible violations of the transport regulation. However, these informal arrangements have not been sustained over time and have not been sufficient to ensure the return of certificates to APHIS. For example, as of March 2011, APHIS transport program officials said they have not received any owner/shipper certificates from Texas border crossings in more than a year. Although some U.S. horses intended for slaughter are exported through a border crossing in New Mexico, the majority of horses bound for Mexico pass through the Texas crossings.
Thus, program officials said their ability to enforce the transport regulation for shipments of horses exported through these border crossings has been severely hampered. In addition to the more recent problem with certificates not being returned from the Texas border crossings, we reviewed a generalizable sample of owner/shipper certificates returned from the southern border from 2005 through 2009 to determine the extent to which they were correctly completed by shippers and SAGARPA officials. Based on the results of our review, we estimate that about 48 percent of these certificates from 2005 through 2009 were missing key information to be provided by either shippers or SAGARPA officials. Moreover, about 54 percent of certificates from 2008 through 2009 were missing such information, suggesting an increase in problems associated with the recent increase in exports to Mexico of horses intended for slaughter. In addition, we estimate that about 39 percent of certificates returned to APHIS were missing some or all information, including the date and time the horses were unloaded at the border, to be provided by SAGARPA officials.

Legislative Prohibitions Impede USDA’s Ability to Ensure Horse Welfare

Legislative prohibitions have impeded USDA’s ability to protect horse welfare since fiscal year 2006. First, as discussed, appropriations bills for fiscal years 2006 through 2010 have prohibited APHIS from using federal funds to inspect horses being transported for slaughter. As a result, according to agency officials, the transport program’s compliance officer may only inspect the owner/shipper certificates associated with the shipment of horses and the conveyance on which the horses are transported.
That is, because of the annual prohibition on the expenditure of federal funds on inspecting horses, the officer may observe potential violations of the transport regulation regarding the physical condition of the horses only incidentally, while inspecting these items. The compliance officer said this makes it difficult to ensure that horses are transported humanely to slaughter and to collect information on potential violations that is needed for APHIS to pursue enforcement actions. For example, while inspecting a conveyance being used to transport horses intended for slaughter in 2010, the compliance officer found that a mare in the shipment had given birth to a foal. Because the transport regulation requires shippers to verify that horses are not likely to give birth during shipment, the birth of a foal in transit represented a potential violation. However, because of the prohibition on using funds to inspect horses, the officer was unable to inspect the horses to determine which mare had given birth. Thus, the opportunity was lost to document a potential violation of the regulation by the shipper. Moreover, according to the officer, compliance probably has suffered because shippers are aware that transport program officials cannot inspect horses in transit to substantiate potential violations. According to APHIS officials, another impediment to their investigations of potential violations of the transport regulation is USDA’s lack of subpoena authority to access the records of alleged violators or to compel persons to testify in administrative hearings and to produce documentary evidence for such hearings. Specifically, although USDA has such authority under several other APHIS-administered statutes (e.g., Animal Health Protection Act, Horse Protection Act, and Plant Protection Act), it does not have this authority under the authorizing legislation for the transport regulation—the 1996 Farm Bill.
According to APHIS officials, the agency would welcome the addition of subpoena authority to promote enforcement of the slaughter horse transport regulation. Second, USDA also has been prohibited from using federal funds to inspect horses prior to slaughter for human consumption at slaughtering facilities. As discussed, the Federal Meat Inspection Act requires inspection of all cattle, sheep, swine, goats, and horses before they are slaughtered and processed into products for human food, to ensure that meat and meat products from these animals are unadulterated, wholesome, and properly labeled. Prior to the appropriations prohibition, and before the cessation of domestic slaughter, FSIS officials in U.S. slaughtering facilities inspected the condition of horses before slaughter as well as the horsemeat after slaughter. The prohibition on the use of funds for required inspections has, in effect, banned the slaughter of horses for food in the United States, and, as a consequence, moved this slaughter to other countries where USDA lacks jurisdiction and where the Humane Methods of Slaughter Act does not apply. Therefore, USDA is less able to ensure the welfare of horses at slaughter. And, as was the case with horses in transit to slaughter, APHIS officials speculated that compliance with the transport regulation has suffered because shippers are aware that the program can no longer leverage the assistance of USDA personnel in slaughtering facilities to ensure the completion of shipping paperwork or note the condition of individual horses in a shipment. This view seems consistent with our analysis of shipping certificates, which found, as discussed, a statistically significant increase in incomplete certificates after the cessation of domestic slaughter.
In addition, these officials noted that the loss of FSIS’s assistance in slaughtering facilities, as well as the prohibition on APHIS’s inspections of horses in transit, has led to a general decline in investigation cases since 2007. Figure 11 shows the number of investigation cases and alleged violators for fiscal years 2005 through 2010.

Cessation of Domestic Slaughter Has Diminished APHIS’s Ability to Implement the Transport Regulation to Protect Horse Welfare

According to APHIS and animal protection officials, horse welfare is likely to suffer as a consequence of horses traveling significantly farther to slaughter since the cessation of domestic slaughter, including an increased possibility of injuries when horses are confined in a conveyance with other horses over longer transport distances and travel times. As these officials explained, horses are by nature fight-or-flight animals, and when grouped in confinement, they tend to sort out dominance. In the tight quarters of a conveyance, weaker horses are unable to escape from more dominant and aggressive animals and, thus, are more prone to sustaining injuries from kicks, bites, or bumping into other horses or the walls of the conveyance. Moreover, once a shipment of U.S. horses has crossed the border into Canada or Mexico, APHIS no longer has authority to oversee their welfare, and these animals may be in transit for long distances in these countries before reaching a slaughtering facility. For example, the slaughtering facilities in Mexico that process U.S. horses are located near Mexico City, well within the interior of the country. In addition, the conveyances that horses are transferred to for travel in Mexico are not subject to the requirements of the transport regulation. Our analysis of a sample of owner/shipper certificates for 2005 through 2009 showed that, in 2005 and 2006, before domestic slaughter ceased, horses traveled an average of 550 miles after being designated for slaughter.
In contrast, in 2008 and 2009, after domestic slaughter ceased, our analysis showed horses intended for slaughter traveled an average of 753 miles—an increase of about 203 miles. (The actual distances that the horses traveled, on average, before and after the cessation are likely to be greater than what our analysis showed because some shippers were prone to designate horses intended for slaughter close to the slaughtering facility before cessation, or near the border after cessation.) Over the longer distances horses now travel to Canadian and Mexican slaughtering facilities, APHIS is less able to effectively implement the transport regulation to protect horse welfare. Figure 12 provides an example of contrasting shipping routes and relative travel distances from before and after domestic slaughter ceased. In addition, since the cessation of domestic slaughter, USDA has been less able to help BLM prevent the slaughter of wild horses and burros. Wild horses and burros may be adopted, but title does not pass to the adopter until 1 year after the adoption, upon a determination that the adopter has provided humane conditions, treatment, and care for the animal over that period. Upon transfer of title, the animals lose their status as wild free-roaming horses and burros. As we reported in 2008, from 2002 through the end of domestic slaughter in September 2007, about 2,000 former BLM horses were slaughtered by owners to whom title to the horses had passed. When horses were slaughtered domestically, FSIS inspectors in slaughtering facilities watched for horses bearing the BLM freeze mark indicative of the wild horse and burro program. They would then alert BLM officials so that the title status of these animals could be checked to ensure that BLM horses were not slaughtered.
As a result of FSIS’s assistance during the same time period, at least 90 adopted wild horses that were still owned by the government were retrieved from slaughtering facilities before they could be slaughtered. However, now that the slaughter of U.S. horses occurs in Canada and Mexico, FSIS can no longer provide this assistance. Furthermore, shippers are not required to identify BLM horses on owner/shipper certificates, but in reviewing nearly 400 owner/shipper certificates, we found indications that six adopted BLM horses had been shipped across the border for slaughter. Because inspection officials in foreign slaughtering facilities have no obligation to check with BLM or other U.S. authorities before slaughtering these animals, it is unknown whether title for those animals had passed to the adopter or how many more BLM horses may have been shipped across the border for slaughter.

Conclusions

The slaughter of horses for any purpose, especially for human consumption, is a controversial issue in the United States that stems largely from how horses are viewed, whether from a historic, work, show, recreation, or commodity point of view. As a result, there is tension between federal law mandating the inspection of horses and certain other animals at slaughter (i.e., the Federal Meat Inspection Act) and annual appropriations acts prohibiting the use of funds to inspect horses at, or being transported to, slaughtering facilities. What may be agreed upon, however, is that the number of U.S. horses that are purchased for slaughter has not decreased since domestic slaughter ceased in 2007. Furthermore, an unintended consequence of the cessation of domestic slaughter is that those horses are traveling farther to meet the same end in foreign slaughtering facilities where U.S. humane slaughtering protections do not apply.
Their journey from point-of-purchase to slaughtering facilities in other countries, with multiple potential stops in- between at assembly points, feedlots, and stockyards, includes the possibility of being shipped in conveyances designed for smaller animals or confined in these conveyances for excessive time periods. The current transport regulation, the Commercial Transportation of Equines to Slaughter regulation, does not apply until a shipment is designated for slaughter, which can be the last leg of a longer journey. A 2007 proposed rule to amend the regulation, which would define “equines for slaughter” and extend APHIS’s oversight and the regulation’s protections to more of the transportation chain, has not been issued as final as of June 2011. To adequately implement the transport regulation and oversee the welfare of horses intended for slaughter, the horse transport program must ensure that owner/shipper certificates are completed, returned, and evaluated for enforcement purposes. Many certificates are not now returned, and others are returned incomplete. Furthermore, because of limited staff and funding and these missing and incomplete certificates, the program is less able to identify potential violations of the transport regulation. The program also stopped automating certificate data. Even with the present limitations of incomplete and missing certificates, automating these data is important for management oversight of compliance with the regulation and to direct scarce program resources to the most serious problem areas. Moreover, in time, as corrective actions are taken, these data will likely become even more useful for oversight purposes. If the proposed rule to extend APHIS’s authority to more of the transportation chain is issued as final, the program’s credibility will be further challenged unless APHIS identifies ways to leverage other agency resources to ensure compliance with the transport regulation. With U.S. 
horses now being shipped to Canada and Mexico for slaughter, APHIS depends upon cooperation with these countries, or state officials at the borders, to help it implement the transport regulation, but it does not have effective agreements that make clear each party’s obligations and that help ensure cooperation will continue as personnel change. APHIS developed an agreement with Canadian officials in 2002, but recently the agency has been receiving incomplete owner/shipper certificates from them, raising questions about the current agreement’s effectiveness and whether both APHIS and Canadian officials have the same understanding about the assistance APHIS seeks. Furthermore, APHIS does not have formal cooperative agreements with its Mexican counterpart and the Texas Department of Agriculture—the entities that oversee most U.S. horses exported to Mexico for slaughter. APHIS has not received any owner/shipper certificates from either of these entities in more than a year. Recent, annual congressional actions to prohibit the use of federal funds to inspect horses in transit or at slaughtering facilities have complicated APHIS’s ability to implement the transport regulation; as a result, horses now travel longer distances to foreign slaughtering facilities. APHIS lacks jurisdiction in these countries, and it can no longer depend on the help it once received from other USDA officials present in domestic slaughtering facilities to catch potential violations of the transport regulation. Even after the recent economic downturn is taken into account, horse abandonment and neglect cases are reportedly up and appear to be straining state, local, tribal, and animal rescue resources. Clearly, the cessation of domestic slaughter has had unintended consequences, most importantly, perhaps, the decline in horse welfare in the United States.
Matters for Congressional Consideration

In light of the unintended consequences on horse welfare from the cessation of domestic horse slaughter, Congress may wish to reconsider the annual restrictions first instituted in fiscal year 2006 on USDA’s use of appropriated funds to inspect horses in transit to, and at, domestic slaughtering facilities. Specifically, to allow USDA to better ensure horse welfare and identify potential violations of the Commercial Transportation of Equines to Slaughter regulation, Congress may wish to consider allowing USDA to again use appropriated funds to inspect U.S. horses being transported to slaughter. Also, Congress may wish to consider allowing USDA to again use appropriated funds to inspect horses at domestic slaughtering facilities, as authorized by the Federal Meat Inspection Act. Alternatively, Congress may wish to consider instituting an explicit ban on the domestic slaughter of horses and export of U.S. horses intended for slaughter in foreign countries.

Recommendations for Executive Action

To better protect the welfare of horses transported to slaughter, we recommend that the Secretary of Agriculture direct the Administrator of APHIS to take the following four actions:

Issue as final a proposed rule to amend the Commercial Transportation of Equines to Slaughter regulation to define “equines for slaughter” so that USDA’s oversight and the regulation’s protections extend to more of the transportation chain.

In light of the transport program’s limited staff and funding, consider and implement options to leverage other agency resources to assist the program to better ensure the completion, return, and evaluation of owner/shipper certificates needed for enforcement purposes, such as using other APHIS staff to assist with compliance activities and to automate certificate data to identify potential problems requiring management attention.

Revisit, as appropriate, the formal cooperative agreement between APHIS and CFIA to better ensure that the agencies have a mutual understanding of the assistance APHIS seeks from CFIA on the inspection of U.S. horses intended for slaughter at Canadian slaughtering facilities, including the completion and return of owner/shipper certificates from these facilities.

Seek a formal cooperative agreement with SAGARPA that describes the agencies’ mutual understanding of the assistance APHIS seeks from SAGARPA on the inspection of U.S. horses intended for slaughter at Mexican border crossings and slaughtering facilities and the completion and return of owner/shipper certificates from these facilities. In the event that SAGARPA declines to enter into a formal cooperative agreement, seek such an agreement with the Texas Department of Agriculture to ensure that this agency will cooperate with the completion, collection, and return of owner/shipper certificates from Texas border crossings through which most shipments of U.S. horses intended for slaughter in Mexico pass.

Agency Comments and Our Evaluation

We provided a draft of this report to USDA for review and comment. In written comments, which are included in appendix III, USDA agreed with the report’s recommendations. Regarding the first recommendation, USDA said it will move as quickly as possible to issue a final rule, but first it must formally consult with the Tribal Nations that are experiencing particularly serious impacts from abandoned horses. USDA said that if it can successfully conclude these consultations in the next 2 months, it would publish the final rule by the end of calendar year 2011. However, USDA also said that it needs time to thoughtfully consider those consultations with regard to the regulation’s implementation.
Regarding the second recommendation, USDA noted it is training additional APHIS port personnel in Slaughter Horse Transport Program enforcement activities at Texas ports of embarkation and plans to expand this effort in fiscal year 2012 within the allocated budget. USDA also stated it is training administrative personnel to evaluate owner/shipper certificates for enforcement purposes, and it will explore whether new technologies have made entering information from those certificates into a database less costly, so that it can do so within existing funding. Regarding the third recommendation, USDA said it would consult with CFIA and propose revisions to the current cooperative agreement. Regarding the fourth recommendation, USDA indicated it will consult with SAGARPA and the Texas Department of Agriculture and propose the development of formal agreements with one or both. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our report objectives were to examine (1) the effect on the U.S. horse market, if any, since domestic slaughter of horses for food ceased in 2007; (2) the impact, if any, of these changes on horse welfare and on states, local governments, tribes, and animal welfare organizations; and (3) challenges, if any, to the U.S. Department of Agriculture’s (USDA) oversight of the transport and welfare of U.S. horses exported for slaughter.
In general, to address these objectives, we reviewed documents and/or interviewed officials from USDA, including the Animal and Plant Health Inspection Service (APHIS), Food Safety and Inspection Service, Foreign Agricultural Service, National Agricultural Statistics Service, and the Office of Inspector General; other federal agencies such as the Department of the Interior’s Bureau of Land Management, Department of Commerce, Department of Labor’s Bureau of Labor Statistics, and Congressional Research Service; state and local governments, including the National Association of State Departments of Agriculture, Montana Association of Counties, National Association of Counties, National Sheriffs Association, and Western State Sheriffs Association; and Native American tribes, including several Great Plains Tribes, the Northwest Tribal Horse Coalition, and several Southwestern Tribes. We also reviewed documents and/or interviewed representatives from livestock industry organizations, including the American Association of Equine Practitioners, American Horse Council, American Veterinary Medical Association, Florida Animal Industry Technical Council, Maryland Horse Industry Board, Livestock Marketing Association, United Horsemen’s Front, United Organizations of the Horse, Unwanted Horse Coalition, and commercial horse auctions located in various states, including Alabama, Arkansas, Montana, Oklahoma, Pennsylvania, and Virginia; and animal welfare organizations, including the American Society for the Prevention of Cruelty to Animals, Animal Law Coalition, Animal Welfare Institute, Equine Welfare Alliance, and Humane Society of the United States. In addition, we reviewed published literature related to the horse industry and livestock slaughter, and we interviewed academic experts who have researched and written about these issues.
Furthermore, we reviewed relevant federal and state legislation regarding horse inspection, slaughter, transport, and/or welfare, including bills proposed but not enacted in the 111th U.S. Congress and by state legislatures, and related federal regulations, including USDA’s Commercial Transportation of Equines to Slaughter regulation and related guidance. To determine the extent to which slaughter for non-food purposes occurs in the United States, we identified facilities that had been reported to slaughter horses for other purposes (e.g., food for animals at zoos and circuses) and interviewed the Slaughter Horse Transport Program’s compliance officer about the officer’s examinations into these facilities’ operations. We also visited border crossings in New Mexico and Texas, horse auctions in Montana and Pennsylvania, and tribal lands in the northwest United States to observe the handling of horse shipments at the border, horse sale procedures, and wild and abandoned horse management challenges, respectively. To further examine the effect on the U.S. horse market, if any, since the cessation of domestic slaughter, we used an econometric analysis and regression methods to estimate the effect of the cessation on horse prices, while considering the effects of the U.S. economic downturn (i.e., recession) and horse- and auction-specific variables. We did this analysis because we found few current studies addressing the effect of the cessation on horse prices in the economic literature. In undertaking this work, we collaborated with Dr. Mykel Taylor, Assistant Professor and Extension Economist in the School of Economic Sciences at Washington State University, who was studying this issue at the time we began our work and previously had modeled and written about the determinants of horse prices. We obtained data for our analysis from multiple sources. 
Regarding horse prices, we obtained sale price and horse characteristic data on 12,003 sale transactions from spring 2004 through spring 2010 at three large horse auctions located in Montana, Oklahoma, and Virginia. Specifically, we extracted data from price sheets and catalogue information published or otherwise provided by the owners of these auctions. We chose these auctions because they were located in geographically diverse parts of the country. In addition, these auctions regularly sell lower-value horses, as well as more expensive horses valued for leisure, work, or show purposes. Some, but not all, of the lower-valued horses in the data are bought for slaughter, including some referred to as “grade” or “loose” horses. We assumed that if there was an effect from the cessation of domestic horse slaughter, prices for lower-valued horses would be most impacted. Consequently, we did not include data in our analysis from auctions catering to very high-priced racing and show horses. We also obtained data from the Department of Labor’s Bureau of Labor Statistics on changes in unemployment in each of the regions in which the horse auctions we selected are located. We used these unemployment data as a proxy for the economic downturn experienced in recent years. We performed quality tests and interviewed knowledgeable agency officials and auction representatives about the sources of the data and the controls in place to maintain the data’s integrity, and we found the data to be sufficiently reliable for the purposes of this report. Using these data, we analyzed whether there was a significant reduction in average sale price per head after the cessation of domestic slaughter. For purposes of our analysis, the period prior to cessation included spring 2004 through 2006, and the period after cessation included 2007 through spring 2010 (because most domestic slaughtering facilities were closed by early 2007). 
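A pre/post comparison of average sale prices of the kind described here can be sketched in a few lines of pandas. The figures below are invented for illustration only; they are not the report's auction data, and the report's analysis worked with individual sale transactions rather than annual averages.

```python
import pandas as pd

# Hypothetical annual average sale prices (dollars); illustrative values only.
sales = pd.DataFrame({
    "year": [2004, 2005, 2006, 2007, 2008, 2009, 2010],
    "avg_price": [820, 860, 840, 610, 540, 500, 520],
})

# Split the data at 2007, when most domestic slaughtering facilities closed.
sales["post_cessation"] = sales["year"] >= 2007

# Compare mean price before and after the cessation.
print(sales.groupby("post_cessation")["avg_price"].mean())
```

A real version of this comparison would also test whether the difference in means is statistically significant, which is why the report pairs it with a regression that controls for horse and auction characteristics.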
To evaluate the potential reasons for this reduction in price, we also developed a hedonic model, which allows one to describe the price of a good (e.g., a horse) as a function of the value of intrinsic characteristics of that good (e.g., a horse’s breed, age, and gender). Thus, we specified a horse’s sale price as a function of variables that describe its physical attributes, such as breed, age, and gender; auction-specific variables, such as region of the country and season of the year; and other variables, such as the cessation of domestic slaughter and economic downturn. We used the quantile regression technique to derive coefficients to explain the impact on horse prices for each variable in the model. Quantile regression is a statistical method that provides information about the relationship between an outcome variable (e.g., horse prices) and explanatory variables (e.g., cessation of slaughter) at different points in the distribution of the outcome variable. This type of regression is more appropriate than standard linear regression for several reasons. For example, we wanted to determine the estimated effects of the cessation at various points across the entire distribution of sales prices in our data, instead of on just the average value (i.e., mean), as in linear regression. Also, the approach is more appropriate when using data from separate sources, such as the three auctions in different parts of the country. In addition, because our price data were highly skewed (i.e., included mostly lower- and mid-priced horses), we transformed prices to a natural logarithmic scale in the regression in order to obtain a better statistical fit for our model. There are several potential limitations to this type of modeling. For example, all of the variables influencing an outcome may not be known, and there are likely to be limitations in the data available for the analysis. 
For example, the price of a horse may also be related to other attributes such as quality of pedigree and performance characteristics (e.g., championships or titles won), but information on these variables was not available for all horses in our analysis. In addition, other characteristics of a horse, such as health, demeanor, and general appearance may also affect the price buyers are willing to pay, but those characteristics are difficult to measure and, therefore, were not available for our analysis. Nevertheless, despite these limitations, this type of regression is useful for developing estimates of the impacts from, and an indication of the relative importance of, various variables to an outcome. In our analysis, we estimated the impact of the cessation on horse prices, while considering other relevant variables, on horse sale price for five price quantiles (20th, 40th, 50th, 60th, and 80th percentiles). As discussed, the other variables in our analysis included a horse’s physical characteristics, such as breed/type, age, and gender. Regarding breed, the data contained a total of 27 horse breeds, but for purposes of our analysis, we categorized horses into one of seven variables—Quarter horses, Paint horses, Appaloosas, ponies and miniature horses, Thoroughbreds, combined “other,” and “grade.” Grade horses are sold without breed designation, are often sold in groups, and are usually the lowest-priced horses available at an auction. Regarding age, horses in our data ranged from 1 to 32 years old, and we included age as a continuous variable in our analysis. We also used a related variable, the square of a horse’s age, to account for changes in a buyer’s willingness to purchase a horse as its age increases. Regarding gender, we used “indicator” variables for mare, stallion, and gelding (a neutered male horse). 
In addition, we used two interactive variables to explain how the gender and age of a horse could interact to affect its sale price—(1) interacting mare with age and (2) interacting gelding with age. For example, the price of a mare may increase early in her life as she is able to produce foals but may decline when she becomes too old to breed consistently. To capture information that was auction-specific, we included several additional variables in our analysis. First, we measured the percentage of “no-sale” horses at each auction. In general, these horses were not sold by their owners because they did not receive high enough final bids for these horses at auction. We also included a variable denoting whether an auction was in the western, southern, or eastern region of the United States. In addition, we included variables to delineate whether an auction was held in the spring or fall seasons. Industry experts we contacted said spring auctions generally are larger and bring higher prices than fall auctions, when owners may be more anxious to sell their horses rather than have to feed them through the winter. We included the cessation of slaughter as an indicator variable in our analysis, with “0” indicating the period prior to the cessation of domestic slaughter in 2007, and “1” for the period after. For purposes of our analysis, the period prior to cessation included spring 2004 through 2006, and the period after cessation included 2007 through spring 2010 (because most domestic slaughtering facilities were closed by early 2007). To measure the effect of the economic downturn, we used a variable based on average monthly unemployment rates from the Bureau of Labor Statistics for the 12-month period prior to the date of each auction. These data are compiled by Census Divisions or by geographic region; we used the data for those Census Divisions or regions that correspond to the locations of the three auctions. 
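The 12-month trailing unemployment average described here might be computed as follows. The monthly rates below are hypothetical, and the window logic (averaging the 12 full months preceding the auction month) is one plausible reading of the approach, not the report's actual code.

```python
import pandas as pd

# Hypothetical monthly regional unemployment rates (percent), June 2008-June 2009.
rates = pd.Series(
    [5.0, 5.1, 5.3, 5.6, 6.0, 6.4, 6.9, 7.3, 7.8, 8.2, 8.5, 8.8, 9.0],
    index=pd.date_range("2008-06-01", periods=13, freq="MS"),
)

def trailing_avg(auction_date, monthly_rates, months=12):
    """Average the rates for the full months preceding the auction's month."""
    end = pd.Timestamp(auction_date).to_period("M").to_timestamp()
    window = monthly_rates[monthly_rates.index < end].tail(months)
    return window.mean()

print(round(trailing_avg("2009-06-15", rates), 2))
```

Using a trailing average rather than the rate on the auction date smooths month-to-month noise and reflects the conditions buyers and sellers experienced in the period leading up to the sale.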
More specifically, we averaged the unemployment rate data for the 12-month period prior to the date of each auction because we assumed that buyers and sellers would make transaction decisions based on economic conditions for a period before the date of the auction, not just on conditions at the time of the auction. In order to review the soundness of our methodology and results, we asked five academic experts in agricultural economics to review a draft of our model specifications and discussion of results for fatal flaws. We chose these experts because they have published articles related to the horse industry and livestock slaughter issues. These experts generally found the model specifications and results credible. Several offered specific technical comments related to the presentation of the model results, which we incorporated, as appropriate. Additional information about the results of our analysis is in appendix II. To further examine the impact, if any, of horse market changes on horse welfare and states, local governments, tribes, and animal welfare organizations, we used semi-structured interviews to systematically collect the views of the State Veterinarian (an appointed position) in 17 states. These states included the 10 with the largest horse populations, and the 10 with the largest horse economies—a total of 14 states. In addition, we added Montana, New Mexico, and Wyoming at the suggestion of representatives of the horse industry and animal welfare organizations, who indicated that these states had unique perspectives on border or tribal issues related to horses. In some cases, the State Veterinarian was joined by other state officials, such as members of the state livestock board, for these interviews. The results of the interviews are not generalizable to all State Veterinarians but provide information about the situations faced by these 17 states. 
Semi-structured interviews follow a standard structure to systematically gather information from the target audience. In our case, we wanted to systematically collect information from these 17 states on (1) horse sales and prices; (2) export, trade, and transport of horses; (3) abandoned and adopted horses; (4) horse abuse and neglect cases; (5) legislation related to horse slaughter and welfare; and (6) other factors generally affecting horse welfare. Using software called NVivo, we then performed a qualitative content analysis of the results of these interviews to identify common themes and the frequency with which certain issues were raised. Content analysis is a methodology for structuring and analyzing written material. Specifically, we developed a coding and analysis scheme to capture information on factors that may explain changes in the horse industry in these states. Such factors included the cessation of domestic slaughter; economic conditions; restrictions on the use of certain drugs in horses slaughtered for human consumption; and changes in horse breeding, disposal, care and maintenance, prices, sales, and such inputs as the cost of feed. We also developed a coding and analysis scheme to capture information on factors related to horse owners’ potential responses to those changes, including abandoning, neglecting, abusing, and hoarding horses, as well as factors related to horse welfare such as being harmed by unfamiliar herds and traveling farther to slaughter. In addition, we developed a coding and analysis scheme to identify state and local responses to changes in the horse industry, including impacts on resources, costs, investigations, and legislation. The content analysis was conducted by two GAO analysts with the assistance of a GAO methodologist. Discrepancies in coding were generally discussed and resolved between the analysts; on occasion, the methodologist weighed in to resolve a discrepancy. 
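The frequency-tally step of a content analysis like the one described above can be sketched with a simple counter. The coded excerpts and theme labels below are hypothetical stand-ins for the NVivo coding scheme, shown only to illustrate how theme frequencies are derived from coded interview material.

```python
from collections import Counter

# Hypothetical coded interview excerpts; each inner list holds the theme codes
# applied to one excerpt. Labels are illustrative, not the report's codebook.
coded_excerpts = [
    ["cessation", "abandonment"],
    ["economic_conditions", "abandonment"],
    ["cessation", "neglect", "feed_costs"],
    ["abandonment", "investigations"],
]

# Tally how often each theme was raised across all excerpts.
theme_counts = Counter(code for excerpt in coded_excerpts for code in excerpt)
for theme, n in theme_counts.most_common():
    print(theme, n)
```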
To further examine challenges, if any, to USDA’s oversight of the transport and welfare of U.S. horses exported for slaughter, we identified and analyzed a generalizable sample of about 400 horse shipping forms, known as owner/shipper certificates, for the period 2005 through 2009, to determine whether (1) the certificates were properly completed and (2) horses were traveling farther to slaughter since the cessation of domestic slaughter in 2007 than they were traveling prior to the cessation. Each owner/shipper certificate represents one load or shipment of horses. APHIS maintains these forms at its headquarters offices in Riverdale, Maryland, in hardcopy, sorted by year and shipper. As there were no electronic records of the sample frame (i.e., the universe of certificates) from which we could randomly sample and we initially did not know the total number of certificates on file, we selected a stratified, systematic random sample from the hardcopy certificates for the period. We chose to stratify the sample frame into three strata (i.e., time periods) so we would be able to compare estimates of certificate completeness and the distances horses traveled before and after 2007. Specifically, we systematically selected 396 certificates, including 192 for 2005 through 2006, the 2 years prior to the cessation of domestic slaughter; 84 for 2007; and 120 for 2008 through 2009, the 2 years after the cessation. In the course of selecting this sample, we determined that there were nearly 16,000 certificates on file for these years, including 7,671 certificates for 2005 through 2006, 3,378 certificates for 2007, and 4,787 certificates for 2008 through 2009. Because we followed a probability procedure based on random selections of our starting points (e.g., first select the 25th certificate in the 2005 through 2006 stratum and every 40th certificate thereafter), our sample is only one of a large number of samples that we might have drawn.
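The systematic sampling procedure described above, paired with a standard normal-approximation confidence interval for a sample proportion, can be sketched as follows. The strata sizes come from the report, but the completeness figure (300 of 396 certificates) is hypothetical, and the simple interval below ignores the finite-population correction and stratum weighting that a full survey estimate would apply.

```python
import math
import random

def systematic_sample(population_size, sample_size, seed=None):
    """Pick a random start, then take every k-th item (systematic sampling)."""
    rng = random.Random(seed)
    k = population_size // sample_size   # sampling interval
    start = rng.randrange(k)             # random start within the first interval
    return [start + i * k for i in range(sample_size)]

def proportion_ci(successes, sample_size, z=1.96):
    """Normal-approximation 95 percent confidence interval for a proportion."""
    p = successes / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)
    return p - z * se, p + z * se

# Strata sized as in the report: (certificates on file, certificates sampled).
strata = {"2005-2006": (7671, 192), "2007": (3378, 84), "2008-2009": (4787, 120)}
for name, (on_file, drawn) in strata.items():
    picks = systematic_sample(on_file, drawn, seed=7)
    print(name, "interval:", on_file // drawn, "first pick:", picks[0])

# Hypothetical: 300 of 396 sampled certificates judged properly completed.
low, high = proportion_ci(300, 396)
print(f"completeness: {low:.1%} to {high:.1%}")
```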
Since each sample could have provided different estimates, we expressed our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. To estimate the degree to which owner/shipper certificates were properly completed by the shipper and by Canadian and Mexican officials, we extracted information from the certificates that APHIS uses to help determine compliance with the Commercial Transportation of Equines to Slaughter regulation, such as the loading date, time, and location; certification that the horses were fit for transport; the identity of the receiving slaughtering facility; and the date and time the shipment arrived. Using our sample of certificates, we calculated estimates of the degree of completeness of all certificates returned to APHIS from slaughtering facilities or border crossings from 2005 through 2009 and tested the change over time for statistical significance. In order to estimate the distance that horses traveled, on average, we extracted information on each shipment’s origination (i.e., loading) point and destination (i.e., off-loading) point from the certificates. Regarding shipments that went to former U.S. slaughtering facilities, we used the Transportation Routing Analysis Geographic Information System (TRAGIS) model developed by the Department of Energy to estimate driving miles between the origination point, such as an auction, farm, feedlot, or stockyard, and the slaughtering facility. Because TRAGIS includes only U.S. roads, we used a different approach for calculating distances beyond the U.S. border to foreign slaughtering facilities. First, based on USDA information on the border crossings most often used to export shipments of horses intended for slaughter, we used TRAGIS to calculate the distance from an origination point to several border crossings. 
Then, for each border crossing, we used commercial software available on the Web to estimate the distance from these crossings to a foreign slaughtering facility. We then combined the results and selected the combination that resulted in the shortest potential distance traveled from the origination point to the slaughtering facility. As a result, our estimates of the total distance traveled to foreign slaughtering facilities are likely to be underestimates. We conducted this performance audit from April 2010 through June 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Results of the Econometric Analysis of Horse Sale Prices

For our econometric analysis of horse sale prices from three large, geographically dispersed horse auctions, we conducted a hedonic quantile regression to estimate the impact of a number of explanatory variables, including the cessation of domestic horse slaughter; the economic downturn (i.e., recession); horse attributes such as breed, age, and gender; and the location and timing of horse auctions, on the full range of values of the outcome variable—horse sale prices. We were particularly interested in the impact of the cessation and economic downturn, as these factors have been cited as reasons for recent changes in the horse industry. Appendix I includes a detailed explanation of our methodology for this analysis. A discussion of the results for the separate variables in the model follows: Age of horse. The results show that age is an important variable in explaining horse prices in these auctions.
The positive sign for a horse's age and the negative sign for age squared indicate that young horses increase in price as they age, but older horses begin to decline in price as they age. Moreover, the positive effect of age becomes zero for mares and geldings between 11 and 12 years of age, while stallions continue to increase in price for approximately 5 more years. Gender of horse. The results indicate that the value of horses varies both by their gender and by the interaction of their gender and age. Specifically, the results show that the price of geldings is initially higher than that of both stallions and mares. This premium holds until approximately age 12, when the premium relative to stallions has gone to zero. Mares do not sell at a premium relative to stallions at any point in the age distribution. Location and timing of auction. The results indicate that a horse sold at either the eastern or southern auctions would fetch a higher price than an identical horse sold at the western auction. The premium for horses sold at the eastern auction is greater than the premium for horses sold at the southern auction. The timing of an auction—spring versus fall—was also statistically significant and suggests that horses sold in the fall tend to sell at a discount, although this effect diminishes for the higher price categories. This may be because owners are more anxious to sell their horses in the fall rather than feed them through the winter. Auction no-sales percentage. The results suggest that for every 1 percent increase in an auction's "no-sales" percentage, price decreased by about 2 percent across quantiles. That result was highly statistically significant and consistent across all horse price quantiles. This phenomenon may result from sellers having certain expectations of acceptable bid prices, and, if those expectations are not met, they may be willing to wait for a later auction date to try selling the horse again.
Horse buyers may have expectations, as well, that prices will fall even lower and may wait until the next auction. This may be especially true during a period of economic slowdown, according to experts. Horse breed/type. The results suggest that Quarter horses sold at a premium relative to grade horses, which do not have a declared breed registry. Ponies also tend to sell at a premium relative to grade horses, for those ponies sold in the higher categories (i.e., quantiles). An unexpected result was that other breed types (Paint horses, Appaloosas, and Thoroughbreds) either sold at a discount or showed no statistically significant difference in price relative to grade horses. This could be due to the smaller number of observations for these breeds or, for certain breeds such as Appaloosas, a lack of buyers for these types of horses. Economic downturn. The results show that the recession or downturn in the general economy had a consistently negative effect on horse prices across the range of price categories. This effect was greater, in dollar terms, for the higher price categories. Across the five price categories, we estimate that for each percentage point increase in average unemployment in the relevant regions, horse prices decreased by 5.2, 5.2, 4.8, 4.7, and 4.8 percentage points, respectively. Cessation of domestic slaughter. The results show that the cessation was related to declines in prices for lower- to middle-value horses, an effect that diminished for higher-value horses (i.e., horses in the higher price categories in the table). For example, in the first three price categories, horse prices declined by 21, 10, and 8 percentage points, respectively. Table 2 lists the results, expressed as semi-log coefficients, of the hedonic quantile regression for five categories of horse sale prices—the 20th, 40th, 50th (median), 60th, and 80th percentiles.
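Reading semi-log coefficients of the kind reported in Table 2, and locating the price peak implied by linear and squared age terms, follows standard formulas that can be illustrated with a short sketch. The coefficient values below are hypothetical stand-ins, not the report's estimates:

```python
import math

# Hypothetical semi-log coefficients (placeholders, not the report's values).
beta_cessation = -0.235   # cessation-of-domestic-slaughter term, lowest quantile
beta_age = 0.115          # linear age term
beta_age_sq = -0.005      # squared age term

# A semi-log coefficient b retransforms to a percentage change in price
# of 100 * (e^b - 1).
pct_change = 100 * (math.exp(beta_cessation) - 1)
print(round(pct_change, 1))  # -20.9, i.e., roughly a 21 percent price decline

# With linear and squared age terms, price peaks where the marginal effect
# of age is zero: age* = -beta_age / (2 * beta_age_sq).
peak_age = -beta_age / (2 * beta_age_sq)
print(round(peak_age, 1))  # 11.5, consistent with the reported 11-12 year peak
```

The same retransformation underlies the report's conversion of Table 2's semi-log form back to dollar and percentage changes.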
From the table, we see that most of the regression estimates for the model have the expected directional signs and are statistically significant. The retransformed results, from the semi-log form back to dollar and percentage changes, are presented for our two variables of interest—cessation of domestic slaughter and economic downturn—in table 1 of this report.

Appendix III: Comments from the U.S. Department of Agriculture

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, James R. Jones, Jr., Assistant Director; Jim Ashley; Mark Braza; Antoinette Capaccio; Barbara El Osta; Emily Gunn; Terrance N. Horner, Jr.; Armetha Liles; Kimberly Lloyd; Jeff Malcolm; John Mingus; Kim Raheb; and Carol Herrnstadt Shulman made key contributions to this report.

Related GAO Products

Live Animal Imports: Agencies Need Better Collaboration to Reduce the Risk of Animal-Related Diseases. GAO-11-9. November 8, 2010.

Humane Methods of Slaughter Act: Weaknesses in USDA Enforcement. GAO-10-487T. March 4, 2010.

Humane Methods of Slaughter Act: Actions Are Needed to Strengthen Enforcement. GAO-10-203. February 19, 2010.

Humane Methods of Slaughter Act: USDA Inspectors' Views on Enforcement. GAO-10-244SP. February 19, 2010.

Veterinarian Workforce: The Federal Government Lacks a Comprehensive Understanding of Its Capacity to Protect Animal and Public Health. GAO-09-424T. February 26, 2009.

Veterinary Workforce: Actions Are Needed to Ensure Sufficient Capacity for Protecting Public and Animal Health. GAO-09-178. February 4, 2009.

Bureau of Land Management: Effective Long-Term Options Needed to Manage Unadoptable Wild Horses. GAO-09-77. October 9, 2008.

Humane Methods of Handling and Slaughter: Public Reporting on Violations Can Identify Enforcement Challenges and Enhance Transparency. GAO-08-686T. April 17, 2008.

USDA: Information on Classical Plant and Animal Breeding Activities. GAO-07-1171R. September 13, 2007.

National Animal Identification System: USDA Needs to Resolve Several Key Implementation Issues to Achieve Rapid and Effective Disease Traceback. GAO-07-592. July 6, 2007.

Workplace Safety and Health: Safety in the Meat and Poultry Industry, While Improving, Could Be Further Strengthened. GAO-05-96. January 12, 2005.

Humane Methods of Slaughter Act: USDA Has Addressed Some Problems but Still Faces Enforcement Challenges. GAO-04-247. January 30, 2004.
Since fiscal year 2006, Congress has annually prohibited the use of federal funds to inspect horses destined for food, effectively prohibiting domestic slaughter. The U.S. Department of Agriculture (USDA) is responsible for overseeing the welfare of horses transported for slaughter. Congress directed GAO to examine horse welfare since the cessation of domestic slaughter in 2007. GAO examined (1) the effect on the U.S. horse market, if any, since cessation; (2) any impact of these market changes on horse welfare and on states, local governments, tribes, and animal welfare organizations; and (3) challenges, if any, to USDA's oversight of the transport and welfare of U.S. horses exported for slaughter. GAO analyzed horse price and shipping data; interviewed officials from USDA, state and local governments, tribes, the livestock industry, and animal welfare organizations; and reviewed documents they provided.

Since domestic horse slaughter ceased in 2007, the slaughter horse market has shifted to Canada and Mexico. From 2006 through 2010, U.S. horse exports for slaughter increased by 148 and 660 percent to Canada and Mexico, respectively. As a result, nearly the same number of U.S. horses was transported to Canada and Mexico for slaughter in 2010—nearly 138,000—as was slaughtered before domestic slaughter ceased. Available data show that horse prices have declined since 2007, mainly for the lower-priced horses that are more likely to be bought for slaughter. GAO analysis of horse sale data estimates that the closing of domestic horse slaughtering facilities significantly and negatively affected prices for lower- to medium-priced horses, reducing them by 8 to 21 percent; higher-priced horses appear not to have lost value for that reason. Also, GAO estimates the economic downturn reduced prices for all horses by 4 to 5 percent.
Comprehensive, national data are lacking, but state, local government, and animal welfare organizations report a rise in investigations for horse neglect and more abandoned horses since 2007. For example, Colorado data showed that investigations for horse neglect and abuse increased more than 60 percent from 975 in 2005 to 1,588 in 2009. Also, California, Texas, and Florida reported more horses abandoned on private or state land since 2007. These changes have strained resources, according to state data and officials that GAO interviewed. State, local, tribal, and horse industry officials generally attributed these increases in neglect and abandonments to cessation of domestic slaughter and the economic downturn. Others, including representatives from some animal welfare organizations, questioned the relevance of cessation of slaughter to these problems. USDA faces three broad challenges in overseeing the welfare of horses during transport to slaughter. First, among other management challenges, the current transport regulation only applies to horses transported directly to slaughtering facilities. A 2007 proposed rule would more broadly include horses moved first to stockyards, assembly points, and feedlots before being transported to Canada and Mexico, but delays in issuing a final rule have prevented USDA from protecting horses during much of their transit to slaughtering facilities. In addition, GAO found that many owner/shipper certificates, which document compliance with the regulation, are being returned to USDA without key information, if they are returned at all. Second, annual legislative prohibitions on USDA's use of federal funds for inspecting horses impede USDA's ability to improve compliance with, and enforcement of, the transport regulation. Third, GAO analysis shows that U.S. horses intended for slaughter are now traveling significantly greater distances to reach their final destination, where they are not covered by U.S. 
humane slaughter protections. With cessation of domestic slaughter, USDA lacks staff and resources at the borders and foreign slaughtering facilities that it once had in domestic facilities to help identify problems with shipping paperwork or the condition of horses before they are slaughtered. GAO suggests that Congress may wish to reconsider restrictions on the use of federal funds to inspect horses for slaughter or, instead, consider a permanent ban on horse slaughter.
Leadership Commitment, Improved Timeliness, and Development of Metrics Were Key to Removal of DOD's Security Clearance Program from GAO's High-Risk List

Since we identified DOD's Personnel Security Clearance program as a high-risk area, DOD, in conjunction with Congress and other executive agency leadership, took actions that resulted in significant progress toward resolving problems we identified with the security clearance program. In 2011, we removed DOD's personnel security clearance program from our high-risk list because of the agency's progress in improving timeliness and the development of tools and metrics to assess quality, as well as DOD's commitment to sustaining progress. Importantly, congressional oversight and the committed leadership of the Suitability and Security Clearance Performance Accountability Council (Performance Accountability Council), which has been responsible for overseeing security clearance reform efforts since 2008, greatly contributed to the progress of DOD and of governmentwide security clearance reform.

Top Leadership Demonstrated Commitment and Collaboration in Reforming Security Clearance Process

Leadership in Congress and the executive branch demonstrated commitment to reforming the security clearance process to address longstanding problems associated with the personnel security clearance program. As we have previously noted, top leadership must be committed to organizational transformation. Specifically, leadership must set the direction, pace, and tone and provide a clear, consistent rationale that brings everyone together behind a single mission. Figure 1 illustrates key events related to the Suitability and Personnel Security Clearance Reform Effort. Congressional legislation and oversight have helped focus attention and sustain momentum to improve the processing of security clearances not only for DOD but governmentwide.
The Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) established, among other things, milestones for reducing the time to complete initial clearances. We previously identified best practices for agencies to successfully transform their cultures, including, among other things, setting implementation goals and a timeline to build momentum and show progress from day one. IRTPA established an interim objective to be met by December 2006 under which DOD and other agencies that adjudicate security clearances were to make a decision on at least 80 percent of initial clearance applications within 120 days, on average. Further, IRTPA called for the executive branch to implement a plan by December 17, 2009, under which, to the extent practical, at least 90 percent of decisions are made on applications for an initial personnel security clearance within 60 days, on average. Additionally, IRTPA required the executive branch to begin providing annual reports to Congress in 2006 on the progress made the preceding year toward meeting IRTPA's objectives for security clearances, including the length of time agencies took to complete the investigations and adjudications—the decision as to whether an individual should be granted eligibility for a clearance. GAO has testified on security clearance reform before this committee as well as before the (1) Subcommittee on Intelligence Community Management, House Permanent Select Committee on Intelligence; (2) Subcommittee on Government Management, Organization, and Procurement, House Committee on Oversight and Government Reform; and (3) Subcommittee on Readiness, House Committee on Armed Services.
After that meeting, OMB, ODNI, DOD, OPM, and GAO provided a memorandum on May 31, 2010, to Chairman Akaka containing a matrix with 15 metrics for assessing the timeliness and quality of investigations, adjudications, reciprocity (an agency's acceptance of a background investigation or clearance determination completed by any authorized investigative or adjudicative agency), and automation. The development of these metrics played a key role in GAO's decision to remove DOD's Personnel Security Clearance program from the high-risk list. Furthermore, we have noted for many years the central role that the Government Performance and Results Act (GPRA) could play in identifying and fostering improved coordination across related federal program efforts. The GPRA Modernization Act of 2010 (GPRAMA), Pub. L. No. 111-352, 124 Stat. 3886 (2011), which amended the Government Performance and Results Act of 1993, Pub. L. No. 103-62, 107 Stat. 285 (1993), calls for a more coordinated and crosscutting approach to achieving meaningful results. GPRAMA provides an opportunity for agencies to collect and report more timely and useful performance information on crosscutting programs. This performance information can play an important role in congressional decision making. In fact, Mr. Chairman, we conducted work for you focusing on how Congress can use such information to address challenges facing the government. DOD's personnel security clearance program was one of three case studies we used to illustrate how Congress has used agency performance information in its decision making. After we placed the program on our high-risk list, top executive branch leadership put in place an effort to reform the security clearance process. For example, in 2007, DOD and ODNI formed the Joint Security Clearance Process Reform Team, known as the Joint Reform Team, to improve the security clearance process governmentwide.
Specifically, they tasked the Joint Reform Team with executing joint reform efforts to achieve IRTPA timeliness goals and improve the processes related to granting security clearances. In 2008, the President in a memorandum called for a reform of the security clearance program and subsequently issued an executive order establishing the Performance Accountability Council. Under the executive order, this council is accountable to the President for leading the implementation of reform, including aligning security and suitability processes, holding agencies accountable for implementation, and establishing goals and metrics for progress. DOD worked with the Joint Reform Team and the Performance Accountability Council to develop a corrective action plan to improve timeliness and demonstrate progress toward reforming the security clearance process. For example, DOD's leadership, in conjunction with the Joint Reform Team, developed a plan for reform that continuously evolved to incorporate new goals and address identified issues. To communicate these plans, the Joint Reform Team issued an initial reform plan in April 2008 that presented a new seven-step design intended to streamline the security clearance process, including the use of a more sophisticated electronic application, a more flexible investigation process, and the establishment of ongoing evaluation procedures between formal clearance investigations. The report was updated in December 2008 to include an outline of reform progress and further plans, and in March 2009 the Joint Reform Team issued its Enterprise Information Technology Strategy for the security clearance and suitability reform program. Then, in line with GAO recommendations, DOD worked with the Performance Accountability Council to issue a strategic framework that the council included in its 2010 report to the President.
The strategic framework identified key governmentwide reform goals and the root causes of timeliness delays and of delays in agencies' honoring reciprocity. It also set forth a governmentwide mission, performance measures, a communications strategy, roles and responsibilities, and metrics to measure the quality of security clearance investigations and adjudications. DOD continues to work with the Performance Accountability Council to sustain clearance reform efforts and enhance transparency and accountability through annual reporting to Congress. DOD issued guidance on adjudication standards. In May 2009, we found that although DOD asserted that adjudicators follow a risk-management approach for granting security clearances, DOD had not issued formal guidance clarifying if and under what circumstances adjudicators can adjudicate incomplete investigative reports—such as those missing information relevant to residences, employment, or education. As a result, we recommended that DOD issue guidance that clarifies when adjudicators may use incomplete investigative reports as the basis for granting clearances. Subsequently, on November 8, 2009, the Under Secretary of Defense for Intelligence issued guidance on adjudication standards that outlines the minimum documentation requirements adjudicators must adhere to when documenting personnel security clearance determinations for cases with potentially damaging information. On March 10, 2010, the Under Secretary of Defense for Intelligence issued additional guidance that clarifies when adjudicators may use incomplete investigative reports as the basis for granting clearances. This guidance provides standards that can be used for the sufficient explanation of incomplete investigative reports.
Further, according to DOD officials, in 2010, DOD created a Performance Accountability Directorate within the Directorate of Security to provide oversight and accountability for the DOD Central Adjudication Facilities that process DOD adjudicative decisions.

DOD Developed Assessment Tools and Performance Metrics and Improved Timeliness to Demonstrate Progress

One of DOD's key actions that led to the removal of its personnel security clearance program from our high-risk list was that DOD was able to demonstrate its progress in having implemented corrective measures. Longstanding backlogs and delays in the clearance process led to our initial designation of this area as high risk. For example, in 2004, we testified that from fiscal year 2001 through fiscal year 2003, the average time for DOD to determine clearance eligibility for industry personnel increased by 56 days to over 1 year. In 2005, we reported that DOD could not estimate the full size of its backlog, but we identified over 350,000 cases exceeding established timeframes for determining eligibility. Moreover, in 2007 and 2009, we reported that clearances continued to take longer than the timeliness goals prescribed in IRTPA. In 2011, we reported that DOD processed 90 percent of initial clearances in an average of 49 days for federal civilians, military, and industry personnel and met the 60-day statutory timeliness objective for processing all initial clearances in fiscal year 2010. Also, we found that DOD completed 90 percent of initial clearances for industry personnel in an average of 63 days for all the data we reviewed in fiscal year 2010, demonstrating an improvement from what we found in 2004, when the average processing time for industry personnel was over a year. Our high-risk designation was based not only on problems with timeliness but also on incomplete documentation of investigations and adjudications.
We reported on missing documentation in investigative reports prepared by OPM that DOD adjudicators had used to make clearance eligibility decisions. In 2009, we estimated that 87 percent of about 3,500 OPM investigative reports provided to DOD in July 2008 were missing required documentation, which in most cases pertained to residences, employment, and education. DOD adjudicators granted clearance eligibility without requesting missing investigative information or fully documenting unresolved issues in 22 percent of DOD's adjudicative files. These findings led us to recommend that OPM and DOD, among other things, develop and report metrics on completeness and other measures of quality for investigations and adjudications that address the effectiveness of the new procedures. DOD agreed and implemented our recommendations regarding adjudication. OPM neither concurred nor nonconcurred with our recommendation; however, as noted earlier, OPM has taken steps to develop metrics. Subsequently, DOD developed two quality tools to evaluate the completeness of documentation used to determine clearance eligibility. First, the Rapid Assessment of Incomplete Security Evaluations (RAISE) tracks the quality of investigations conducted by OPM. Results of RAISE will be reported to the Director of National Intelligence, who, as the Security Executive Agent of the Performance Accountability Council, will arbitrate any potential disagreements between OPM and DOD and clarify policy questions. DOD deployed RAISE to four Central Adjudication Facilities from July to October 2010 and planned to complete deployment to the remaining Central Adjudication Facilities by calendar year 2011. According to DOD officials, as of June 2012 this tool has been deployed to all of DOD's non-intelligence agencies' adjudication facilities. Although the Joint Reform Team is considering using it in the future, it is not being used by other executive agencies.
Second, in 2008 DOD developed the Review of Adjudication Documentation Accuracy and Rationales (RADAR), which tracks the quality of clearance adjudications. In 2009, the Under Secretary of Defense for Intelligence directed DOD Central Adjudication Facilities to provide adjudication case records to the Defense Personnel Research Center for analysis. According to DOD officials, the department plans to use results of the RADAR assessments to monitor Central Adjudication Facilities' compliance with documentation policies, communicate performance to the Central Adjudication Facilities, identify potential weaknesses and training needs, increase compliance, and establish trend data. DOD completed a pilot program for the use of RADAR and began its implementation for the Army, Defense Industrial Security Clearance Office, and Navy Central Adjudication Facilities in September 2010. In addition to these assessment tools, in 2010 DOD, OMB, ODNI, and OPM developed 15 metrics that assess the timeliness and quality of investigations, adjudications, reciprocity, and automation. The quality metrics, in turn, can be used to gauge progress and assess the quality of the personnel security clearance process. These metrics represented positive developments that could contribute to greater visibility over the clearance process. Having assessment tools and performance metrics in place is a critical initial step toward instituting a program to monitor and independently validate the effectiveness and sustainability of corrective measures. The combination of congressional reporting requirements, the strategic framework, and the development of quality metrics will help ensure transparency throughout the reform effort. It is important not only to have metrics but to use them to guide implementation. By using metrics for timeliness, DOD was able to show progress over time that helped build momentum to reach the final goal.
Continuing Executive Branch Leadership and Management Attention May Enhance the Security Clearance Reform Efforts

DOD's security clearance reform effort aligned with our criteria for removal from the high-risk list in fiscal year 2011. However, security clearance reform extends beyond DOD throughout the executive branch. This is evidenced by the oversight structure, through the Performance Accountability Council, and broad executive branch participation in the reform effort. Building on the factors for reforming the security clearance process that we have reported on in the past, continued leadership and attention, such as continuing to monitor and update outcome-focused performance measures, seeking opportunities to enhance efficiency and manage costs, and ensuring a strong requirements determination process, may enhance the security clearance reform effort.

Implementing, Monitoring, and Updating Outcome-Focused Performance Measures

DOD has developed tools to monitor quality as well as participated in the development and tracking of quality metrics for OPM's investigations and DOD's adjudications through the Performance Accountability Council. We view the development of quality metrics as a positive step toward creating greater visibility over the quality of the clearance process and identifying specific quantifiable targets linked to goals that can be measured objectively. Moreover, leaders and others need to use these metrics to gauge progress toward improvements. Further, the development of performance measures related to the security clearance process by the Performance Accountability Council aligns with our previous recommendation to develop outcome-focused performance measures to continually evaluate the progress of the reform effort.
We have also previously reported on the importance of continually assessing and evaluating programs as a good business practice, including evaluating metrics to help ensure that they are effective and updated when necessary. As a result, it is important to sustain the momentum of the reform and that DOD and OPM complete implementation of the quality tools and metrics so that the executive branch can demonstrate progress in improving the quality of investigations and adjudications. Leaders of the reform effort have consistently stated that implementation of reform will be incremental, and therefore it is important that the information necessary to capture performance is up to date. The Performance Accountability Council quality metrics were developed subsequent to the issuance of the 2010 Strategic Framework, which articulates the goals of the security and suitability process reform. As a result, the 2010 Strategic Framework did not include a detailed plan or guidance for the implementation of the quality metrics. Further, the May 31, 2010, memorandum in which the Performance Accountability Council detailed its metrics did not discuss how often the metrics will be reexamined for continuous improvement. Moreover, according to DOD, the tools and metrics to assess quality have not been fully implemented. For example, while DOD has implemented its RAISE tool for investigation quality, it is not being used by other executive branch agencies, including OPM, which conducts the investigations and would be the appropriate agency to take actions to improve investigation quality, although the Joint Reform Team is considering using it in the future. Without these tools and metrics, the executive branch will be unable to demonstrate progress in improving quality. Emphasis on quality in clearance processes should promote positive outcomes, including more reciprocity among agencies in accepting each other's clearances.
Building quality throughout clearance processes is important, but government agencies have not paid the same attention to quality as they have to timeliness. The emphasis on timeliness is due in part to the requirements and objectives established in IRTPA regarding the speed with which clearances should be completed. Our work has repeatedly called for more emphasis on quality. As previously noted, IRTPA required an annual report of progress and key measurements as to the timeliness of initial security clearances in February of each year from 2006 through 2011. It specifically required those reports to include the periods of time required for conducting investigations, adjudicating cases, and granting clearances. IRTPA required the executive branch to implement a plan by December 2009 in which, to the extent practical, 90 percent of initial clearances were completed within 60 days, on average. In its initial reports, the executive branch reported only on the average of the fastest 90 percent of clearances and excluded the slowest 10 percent. We previously reported that full visibility was limited by the absence of comprehensive reporting of the timeliness of initial clearance decisions. Consistent with our recommendation, the executive branch began reporting on the remaining 10 percent in its 2010 and 2011 reports. However, the IRTPA requirement for the executive branch to annually report on its timeliness expired last year. More recently, the Intelligence Authorization Act of 2010 established a new requirement that the President annually report the total amount of time it takes to process certain security clearance determinations for the previous fiscal year for each element of the Intelligence Community.
The Intelligence Authorization Act of 2010 requires, among other things, annual reports from the President to Congress that include the total number of active security clearances throughout the United States government, including both government employees and contractors. Its timeliness reporting requirement, however, applies only to the elements of the Intelligence Community. Unlike the IRTPA reporting requirement, the requirement to submit these annual reports does not expire. Further, the Intelligence Authorization Act requires two additional one-time reports: first, a report to Congress by the President including metrics for adjudication quality, and second, a report to the congressional intelligence committees by the Inspector General of the Intelligence Community on reciprocity. The report containing metrics for adjudication quality summarizes prior information on developed tools and performance measures; however, it does not provide additional information on the implementation or update of the performance measures that were identified in the May 2010 memorandum on quality metrics. Additionally, according to an ODNI official, the report on reciprocity has not been provided, although both reports were due within 180 days after the law was enacted on October 7, 2010. The Intelligence Authorization Act of 2010 reporting requirement on reciprocity—an agency's acceptance of a background investigation or clearance determination completed by any authorized investigative or adjudicative agency—is the first time the executive branch has been required to report on this information since the reform effort began. Further, in 2010 we reported that although there are no governmentwide metrics to comprehensively track when and why reciprocity is granted or denied, agency officials stated that they routinely take steps to honor previously granted security clearances.
We found that agencies do not consistently document the additional steps they have taken prior to granting a reciprocal clearance. For example, the Navy keeps electronic documentation, the Department of Energy and the Department of the Treasury keep paper documentation, and the Army and the Air Force do not maintain any documentation on the additional steps taken to accept a previously granted security clearance. Consequently, there is no consistent tracking of the amount of staff time spent on the additional actions that are taken to honor a previously granted security clearance. In addition, agencies do not consistently and comprehensively track the extent to which reciprocity is granted. OPM has a metric to track reciprocity, but it captures only limited information, such as the numbers of requested and rejected investigations, and not the number of cases in which a previously granted security clearance was or was not honored. Similarly, the metrics proposed by the Performance Accountability Council, such as the number of duplicate requests for investigations, the percentage of applications submitted electronically, the number of electronic applications submitted by applicants but rejected by OPM as unacceptable because of missing information or forms, and the percentage of fingerprint submissions determined to be “unclassifiable” by the Federal Bureau of Investigation, provide useful information but do not track the extent to which reciprocity is or is not ultimately honored. Without comprehensive, standardized metrics to track reciprocity, and documentation of the process, decision makers lack a complete picture of the extent to which reciprocity is granted and the challenges to honoring previously granted security clearances. 
To further improve governmentwide reciprocity, in 2010 we recommended that the Deputy Director for Management, OMB, in the capacity of Chair of the Performance Accountability Council, develop comprehensive metrics to track reciprocity and then report the findings from the expanded tracking to Congress. OMB generally concurred with our recommendation, stating that the Performance Accountability Council is working to develop these additional metrics. According to a 2011 report on security clearance performance metrics, the executive branch is making progress toward developing metrics to track reciprocity, specifically with the intelligence community agencies. We are encouraged by the Performance Accountability Council’s development of quality metrics, which include some metrics for tracking reciprocity. These are positive steps that can contribute to greater visibility of the clearance process, but these measures have not yet been fully implemented, nor has their effectiveness been assessed.
Enhancing Efficiencies and Managing Costs
Our previous work has highlighted the importance of the executive branch enhancing efficiency and managing costs related to the reform effort. For example, in 2008, we noted that one of the key factors to consider in current and future reform efforts was long-term funding requirements. Further, in 2009, we found that reform-related reports did not detail which reform objectives require funding, how much they will cost, or where funding will come from. Furthermore, the reports did not estimate potential cost savings resulting from the streamlined process. At that time, senior reform leaders stated that cost estimates had not been completed by the Joint Reform Team or the agencies affected by reform because it was too early in the effort. Accordingly, we recommended that reform leaders issue a strategic framework that contained the long-term funding requirements of reform, among other things. 
Subsequently, in February 2010, the Performance Accountability Council issued a strategic framework that responded to our recommendation; however, that framework did not detail funding requirements. Instead, it noted that DOD and OPM would cover costs for major information technology acquisitions. As reform leaders, through the Performance Accountability Council, consider changes to the current clearance processes, they should ensure that Congress is provided with the long-term funding requirements necessary to implement any such reforms. Those funding requirements are necessary to enable the executive branch to compare and prioritize alternative proposals for reforming the clearance processes. For example, DOD officials told us that the department was unable to conduct quality assessments of adjudications during fiscal year 2011 due to a lack of funding. In addition, DOD officials noted that the department is using its tool to assess the quality of investigations. However, there is no evidence that this tool is being used by other agencies to assess the quality of investigations. Given current fiscal constraints, identifying long-term costs is critical for decision makers to compare and prioritize alternative proposals for completing the transformation of the security clearance process. Without information on the longer-term funding requirements necessary to implement the reform effort, Congress lacks the visibility it needs to fully assess appropriations requirements. We most recently reported on two areas of opportunity in which the executive branch may be able to identify efficiencies: information technology and investigation and adjudication case management and processes. In February 2012, we reported that information technology investments were one of the three main cost drivers of OPM’s background investigations program. 
While these investments represent less than 10 percent of OPM’s fiscal year 2011 reported costs, they have increased more than 682 percent over 6 years (in fiscal year 2011 dollars), from about $12 million in fiscal year 2005 to over $91 million in fiscal year 2011. Moreover, we reported that OPM’s investigation process reverts electronically submitted applications back into paper-based files. In November 2010, the Deputy Director for Management of the Office of Management and Budget testified that OPM now receives over 98 percent of investigation applications electronically, yet we observed that OPM continues to use a paper-based investigation processing system and converts electronically submitted applications to paper. OPM officials stated that the paper-based process is required because a small portion of their customer agencies do not have electronic capabilities. Furthermore, OPM’s process has not been studied to identify efficiencies. As a result, OPM may be simultaneously investing in process-streamlining technology while maintaining a less efficient and duplicative paper-based process. We recommended that OPM take actions to identify process efficiencies, including in its use of information technology to complete investigations, which could lead to cost savings within its background investigation processes. OPM concurred with our recommendation and commented that these actions also reinforce a Federal Investigative Services priority and that the agency will continue to map its process to achieve maximum process efficiencies and identify potential cost savings. In commenting on our final report, OPM stated in a May 25, 2012, letter to us that it is taking a number of actions that could lead to cost savings within its background investigation process. 
For example, OPM noted that it is conducting a study, to conclude by December 2013, to identify time savings and efficiencies in future Federal Investigative Services business processes. In February 2012, as part of our annual report on opportunities to reduce duplication, overlap, and fragmentation, we reported that multiple agencies have invested in or are beginning to invest in potentially duplicative electronic case management and adjudication systems, despite governmentwide reform effort goals that agencies leverage existing technologies to reduce duplication and enhance reciprocity. According to DOD officials, DOD began the development of its Case Adjudication Tracking System in 2006 and, as of 2011, had invested a total of $32 million to deploy the system. The system helped DOD achieve efficiencies with case management and, given the volume and types of adjudications performed, with an electronic adjudication module for secret-level cases that did not contain issues. According to DOD officials, after the department observed that the Case Adjudication Tracking System could easily be deployed to other agencies at a low cost, it intended to share the technology with interested entities across the federal government. For example, the Department of Energy is piloting the electronic adjudication module of DOD’s system, and, according to DOD officials, the Social Security Administration is also considering adopting the system. In addition to DOD, Department of Justice officials said they began developing a similar system in 2007 at a cost of approximately $15 million. In an effort to better manage the adjudication portion of the suitability and security clearance process, agencies have transitioned or plan to transition from a paper-based to an electronic adjudication case-management system. 
Although the investment in electronic case-management systems will likely lead to process efficiencies, agencies may not be leveraging adjudication technologies in place at other executive branch agencies to minimize duplication. (One of these other agencies, the National Reconnaissance Office, is itself a component of DOD.) An agency adopting DOD’s system would have to initially invest approximately $300,000 for implementation, plus any needed expenditures related to customizations and long-term support and maintenance, which could require approximately $100,000 per year. Officials from OPM, one of the five other agencies developing or seeking funds to develop similar systems, explained that OPM plans to develop an electronic case-management system that is synchronized with its governmentwide background investigations system and that would be available for its customer agencies to purchase. OPM released a request for information to evaluate the options for this system. DOD responded to OPM’s request for information by performing a comparative analysis of its own case-management system and said that it believes its system meets the needs set out in OPM’s request for information. However, OPM officials said that DOD’s system would cost too much for smaller agencies to adopt, so OPM plans to continue exploring other options that would allow customer agencies access to its electronic case-management system without the need to make an expensive initial investment. Additionally, OPM officials said that their effort is intended to promote process efficiency by further integrating OPM with its more than 100 customer agencies. However, some OPM customer agencies, including DOD, which makes up approximately 75 percent of OPM’s investigation workload, expressed concern that such a system would likely be redundant to currently available case-management technology. 
Further, any overhead costs related to the development of an OPM system would be incorporated into OPM’s operating costs, which could affect investigation prices. The investment in electronic case-management systems aligns with the reform effort’s goal to automate information technology capabilities to improve the timeliness, efficiency, and quality of existing security clearance and suitability determination systems. It also will likely lead to process efficiencies; however, agencies may be unclear how they might achieve cost savings by leveraging adjudication technologies in place at other executive branch agencies. In its March 2009 Enterprise Information Technology Strategy, the Joint Reform Team stated that agencies will leverage existing systems to reduce duplication and enhance reciprocity. Moreover, the Performance Accountability Council is positioned to promote coordination and standardization related to the suitability and security clearance process by issuing guidance to the agencies. The reform effort’s strategic framework includes cost savings in its mission statement, but the framework lacks specificity regarding how agencies might achieve cost savings. Without specific guidance, opportunities to minimize duplication and achieve cost savings may be lost. Therefore, in 2012 we recommended that OMB, as the Chair of the Performance Accountability Council, expand and specify reform-related guidance to help ensure that reform stakeholders identify opportunities for cost savings, such as preventing duplication in the development of electronic case management. OMB concurred with our recommendation.
A Sound Requirements Process for Determining Required Clearances and Level of Clearances May Reduce Costs
In February 2008 and in subsequent reports, we have noted the importance of having a sound requirements determination process for security clearances. 
Specifically, a sound requirements determination process may help ensure that workload and costs are not higher than necessary. Further, the Performance Accountability Council’s reformed security clearance process identified determining if a position requires a security clearance as the first step of the process. Specifically, the clearance process begins with establishing whether a position requires a clearance, and if so, at what level. The numbers of requests for initial and renewal clearances and the levels of such clearance requests are two ways to look at outcomes of requirements setting in the clearance process. As of October 2010, the Director of National Intelligence reported that 3.9 million federal employees (military and civilian) and contractors hold security clearances. Moreover, OPM reported that its cost to conduct background investigations for much of the executive branch outside the intelligence agencies increased about 79 percent from about $602 million in fiscal year 2005 to over $1.1 billion in fiscal year 2011. In our prior work, DOD personnel, investigations contractors, and industry officials told us that the large number of requests for investigations could be attributed to many factors. For example, they ascribed the large number of requests to the heightened security concerns that resulted from the September 11, 2001, terrorist attacks. They also attributed the large number of investigations to an increase in the operations and deployments of military personnel and to the increasingly sensitive technology that military personnel, government employees, and contractors come in contact with as part of their jobs. Having a large number of cleared personnel can give the military services, agencies, and industry a great deal of flexibility when assigning personnel, but the investigative and adjudicative workloads that are required to provide clearances and that flexibility further tax the clearance process. 
Requests for higher-level clearances also increase the investigative and adjudicative workloads. For example, top secret clearances must be renewed twice as often as secret clearances (i.e., every 5 years versus every 10 years). Moreover, the average investigative report for a top secret clearance takes about 10 times as many investigative staff hours as the average investigative report for a secret clearance; combined with the doubled renewal frequency, the investigative workload increases about 20-fold. Additionally, the adjudicative workload increases about 4-fold: in our previous work, DOD officials estimated that an investigative report for a top secret clearance took about twice as long to review as an investigative report for a secret clearance, and, again, the top secret clearance must be renewed twice as often. In August 2006, OPM estimated that approximately 60 total staff hours are needed for each investigation for an initial top secret clearance and 6 total staff hours are needed for the investigation to support a secret or confidential clearance. The doubling of the renewal frequency, along with the increased effort to investigate and adjudicate each top secret reinvestigation, adds costs and workload for the government. For fiscal year 2012, OPM’s standard base prices are $4,005 for an investigation for an initial top secret clearance, $2,711 for an investigation to renew a top secret clearance, and either $228 or $260 for an investigation for a secret clearance. As we reported in February 2012, these base prices can increase if triggered by the circumstances of a case, such as issues related to credit or criminal history checks. For example, in 2011, DOD officials stated that, as a result of these circumstances, the prices contained in OPM’s Federal Investigative Notices are not always reflective of the amount DOD actually pays for an investigation. 
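The workload multipliers and 10-year cost comparison described above can be checked with a short worked example. This is only an illustrative sketch: the dollar figures are OPM's fiscal year 2012 standard base prices quoted in this statement, the multipliers are the approximations reported by DOD and OPM, and the variable names are our own.

```python
# Illustrative arithmetic for the clearance workload and cost figures cited
# in this statement. Dollar amounts are OPM's fiscal year 2012 standard
# base prices; the multipliers are the reported approximations.

# OPM standard base prices, fiscal year 2012 (current-year dollars).
PRICE_TS_INITIAL = 4_005   # initial top secret investigation
PRICE_TS_RENEWAL = 2_711   # top secret reinvestigation (due every 5 years)
PRICE_SECRET = 228         # secret investigation (good for 10 years)

# Approximate per-case effort and renewal-frequency multipliers
# (top secret relative to secret).
INVESTIGATIVE_HOURS_MULTIPLIER = 10  # ~10x staff hours per investigation
ADJUDICATIVE_HOURS_MULTIPLIER = 2    # ~2x time to review each report
RENEWAL_FREQUENCY_MULTIPLIER = 2     # renewed every 5 years vs. every 10

# Combined workload effects: per-case effort times renewal frequency.
investigative_workload = INVESTIGATIVE_HOURS_MULTIPLIER * RENEWAL_FREQUENCY_MULTIPLIER
adjudicative_workload = ADJUDICATIVE_HOURS_MULTIPLIER * RENEWAL_FREQUENCY_MULTIPLIER

# Ten-year cost of holding each clearance level.
ten_year_ts = PRICE_TS_INITIAL + PRICE_TS_RENEWAL  # one renewal at year 5
ten_year_secret = PRICE_SECRET                     # no renewal needed

print(f"Investigative workload multiplier: {investigative_workload}x")  # 20x
print(f"Adjudicative workload multiplier: {adjudicative_workload}x")    # 4x
print(f"10-year top secret cost: ${ten_year_ts:,}")                     # $6,716
print(f"10-year secret cost: ${ten_year_secret:,}")                     # $228
print(f"Cost ratio: {ten_year_ts / ten_year_secret:.1f}x")
```

The cost ratio works out to roughly 29.5, consistent with the statement's characterization of a top secret clearance costing almost 30 times as much to obtain and maintain over 10 years.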
Further, the cost of getting and maintaining a top secret clearance for 10 years is almost 30 times as much as the cost of getting and maintaining a secret clearance for the same period. For example, an individual getting a top secret clearance for the first time and keeping the clearance for 10 years would cost the government a total of $6,716 in current-year dollars ($4,005 for the initial investigation and $2,711 for the reinvestigation after the first 5 years). In contrast, an individual receiving a secret clearance and maintaining it for 10 years would result in a total cost to the government of $228 ($228 for the initial investigation, which is good for 10 years). Requesting a clearance for a position in which it will not be needed, or for which a lower-level clearance would be sufficient, unnecessarily increases the investigative workload and, thereby, costs. We are currently reviewing, for the Ranking Member of the House Committee on Homeland Security, the process that the executive branch uses to determine whether a position requires a security clearance, and we expect to issue that report this summer. In conclusion, Mr. Chairman, Mr. Johnson, and Members of the Subcommittee, as evidenced by our removal of DOD’s security clearance program from our high-risk list, we are strongly encouraged by the progress that the Performance Accountability Council, and in particular DOD, has made over the last few years. DOD has shown progress by implementing recommendations, improving overall timeliness, and taking steps to integrate quality into its processes. The progress that has been made with respect to the overall governmentwide reform efforts would not have been possible without the committed and sustained leadership of Congress and of the senior leaders involved in the Performance Accountability Council, as well as their dedicated staff. Continued oversight and stewardship of the reform efforts are the cornerstone of sustaining momentum and making future progress. 
As the executive branch continues to move forward to enhance the suitability and security clearance reform, the actions to monitor quality and enhance efficiency will be key to enhancing the progress made on timeliness to date. Chairman Akaka, Ranking Member Johnson, and Members of the Subcommittee, this concludes my prepared statement, and I would be pleased to answer any questions that you may have. Thank you. For further information on this testimony, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, who may be reached at (202) 512-3604. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Lori Atkinson (Assistant Director), Grace Coleman, Sara Cradic, James Krustapentus, Gregory Marchand, Jillena Roberts, and Amie Steele.
Related GAO Products
Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012.
GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010.
Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010.
DOD Personnel Clearances: Preliminary Observations on DOD’s Progress on Addressing Timeliness and Quality Issues. GAO-11-185T. Washington, D.C.: November 16, 2010.
Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009. 
Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009.
Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009.
DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009.
High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008.
Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008.
Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008.
Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008.
DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008.
DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008.
DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007.
High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. 
DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006.
DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006.
DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005.
DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As of October 2010, the Office of the Director of National Intelligence reported that 3.9 million federal employees (military and civilian) and contractors hold security clearances. DOD accounts for the vast majority of government security clearances. Longstanding backlogs and delays in the security clearance process led GAO to place DOD’s Personnel Security Clearance Program on its high-risk list in 2005. Delays in issuing clearances can result in millions of dollars of additional costs to the federal government and could pose a national security risk. DOD and others have taken steps to address these issues and additional concerns with the clearance documentation used to determine eligibility for a clearance. As a result, in 2011, GAO removed the program from its high-risk list. This testimony addresses (1) the key actions that led GAO to remove DOD’s security clearance program from its high-risk list and (2) the additional actions that can enhance the security clearance reform efforts. This statement is based on prior GAO reports and testimonies on DOD’s personnel security clearance program and governmentwide suitability and security clearance reform efforts. Since GAO first identified the Department of Defense’s (DOD) Personnel Security Clearance Program as a high-risk area, DOD, in conjunction with Congress and executive agency leadership, has taken actions that resulted in significant progress toward improving the processing of security clearances. Congress held more than 14 oversight hearings and enacted key legislation, such as the Intelligence Reform and Terrorism Prevention Act of 2004, which helped focus attention and sustain the momentum of the governmentwide reform effort. 
In addition, the committed and collaborative efforts of DOD, the Office of the Director of National Intelligence (ODNI), the Office of Management and Budget (OMB), and the Office of Personnel Management (OPM), as leaders of the Suitability and Security Clearance Performance Accountability Council (Performance Accountability Council), demonstrated commitment to and created a vision for the reform effort, which led to significant improvements in the timeliness of processing security clearances. As a result, in 2011, GAO removed DOD’s Personnel Security Clearance Program from its high-risk list because of the agency’s progress in improving timeliness, its development of tools and metrics to assess quality, and its commitment to sustaining progress. Specifically, GAO found that DOD met the 60-day statutory timeliness objective for processing initial clearances in fiscal year 2010 by processing 90 percent of its initial clearances in an average of 49 days. In addition, DOD developed two quality tools to evaluate the completeness of investigation documentation and agencies’ adjudication processes regarding the basis for granting security clearances. Moreover, DOD, ODNI, OMB, and OPM developed and are in the process of implementing 15 metrics that assess the timeliness and quality of investigations, adjudications, reciprocity, and automation of security clearances. Even with the significant progress in recent years, sustained leadership attention to the following additional actions, on which GAO has previously reported, can enhance the security clearance reform efforts of executive branch agencies and the Performance Accountability Council: Continue to implement, monitor, and update outcome-focused performance measures. 
The development of tools and metrics to monitor and track quality is a positive step, but full implementation of these tools and measures will enable the executive branch to demonstrate progress in quality improvements and contribute to greater visibility over the clearance process. Seek opportunities to enhance efficiencies and manage costs related to the reform effort. Given the current fiscal constraints, identifying long-term funding requirements for the security clearance process is critical for the executive branch to sustain the reform effort. Further, the reform efforts are a venue to facilitate the identification of efficiencies in areas including information technology and investigation and adjudication case management processes. Create a sound requirements process for determining which positions require security clearances and at what level. A sound requirements determination process may help ensure that workload and costs are not higher than necessary by ensuring that clearances are requested only for positions that need them and that the appropriate clearance level is requested.
Background
Traffic congestion occurs when more vehicles use a road than it is designed to accommodate, and it can be exacerbated by several factors. For example, bottlenecks at highway interchanges or on bridges and tunnels can worsen congestion. Vehicles traveling at different speeds can increase the average amount of space between cars, thus not efficiently using all of the space available in a lane. Stop-and-go traffic leads to increased queuing, and periodic events such as traffic accidents and roadway construction can compound already congested conditions. Although a transportation system is designed to handle a certain number of vehicles, the flow of traffic can be improved at certain times and places, such as during rush hours or at bottlenecks. Congestion pricing is designed to improve the flow of traffic by charging drivers a toll that can vary with the level of congestion or time of day. Drivers pay a higher price for using a lane or roadway at times of heavy traffic, and a lower price when and where traffic is light. To avoid paying a toll, drivers may choose to share rides, use transit, travel at less congested (generally off-peak) times, or travel on less congested routes. Drivers who place a high value on time may choose to pay the toll to use the priced lane during congested times in return for a faster and more reliable trip. Alternatively, drivers who wish to pay a discounted toll on an already tolled roadway can travel at off-peak times. Economists generally believe that congestion pricing has the potential to alleviate congestion on roadways in an economically efficient way. Those who value a fast and reliable trip will pay for the option, while drivers who place a lower value on time will choose to stay on the unpriced and potentially more congested roadways. Economists also believe that congestion pricing can enhance economic efficiency by making drivers take into account the external costs they impose on others when making their travel choices. 
Any given driver’s highway use entails extra costs that the driver does not bear, in the form of congestion, noise, and pollution. Thus, paying a toll that reflects a driver’s value of time and covers external costs can potentially reduce congestion and the demand for road space at peak periods. We have reported that the existing infrastructure can be managed more efficiently and that congestion pricing could be one method to do so. All congestion pricing projects in the United States have used either (1) High Occupancy Toll (HOT) lanes or (2) peak-period pricing on already tolled facilities. HOT lanes have been created by constructing new lanes or converting existing carpool or High Occupancy Vehicle (HOV) lanes, some of which had previously been underused, and allowing solo drivers to use these lanes if they pay a toll. Users of the prior HOV-only lanes, such as carpools and express buses, generally continue to use the lanes for free and are allowed to use newly constructed HOT lanes for free as well. HOT lane operators seek to influence the number of vehicles in the HOT lane and maintain 45- to 55-mile-per-hour travel speeds through “dynamic” pricing—that is, increasing or decreasing tolls in real time depending on traffic in the HOT lane. Peak-period pricing on already tolled highways, bridges, and tunnels or on new or planned replacement facilities is another type of congestion pricing project in the United States. In this type of pricing, tolls are set higher during peak travel times and lower during off-peak times to encourage drivers to use the roadway off-peak. Three HOT lane projects have used a hybrid of dynamic and peak-period pricing called “variable” pricing. Variable pricing uses a pre-set schedule of tolls that is periodically revised to account for changes in congestion or other factors. Since the first U.S. 
congestion pricing project was implemented in Orange County, California, in 1995, 19 project sponsors have initiated 41 pricing projects on highways, bridges, and tunnels. Projects operate in Georgia, Utah, Colorado, Maryland, and New Jersey, with multiple projects in California, Florida, New York, Texas, Virginia, Minnesota, and Washington State. Of the 41 pricing projects, 30 are completed and open to traffic. The 30 opened projects include 12 HOT lane projects and 18 peak-period priced facilities, covering about 400 miles of priced lanes. Projects range in length from 4.1 miles on State Route (SR) 133 in Orange County, California, to nearly 150 miles on the New Jersey Turnpike, and charge tolls varying from 25 cents to $14. Eleven HOT lane projects are under construction; in addition, 2 of the 12 HOT lane projects in operation are extending the length of their tolled lanes. Figure 1 shows congestion pricing projects in operation and under construction, including extensions to existing projects. Appendix II provides additional details on congestion pricing projects and toll rates. More metropolitan areas across the country are using or plan to use pricing as a way to relieve congestion on highways and bridges, and some regions are planning to implement networks of HOT lanes. Dallas-Ft. Worth, Atlanta, Minneapolis-St. Paul, Seattle, and the San Francisco Bay Area have networks of HOT lanes in their long-term plans. For example, the Metropolitan Transportation Commission of the San Francisco Bay Area proposes to add a 570-mile HOT lane network by 2025 as part of its 35-year regional plan. The Washington State Department of Transportation proposes to convert carpool lanes to HOT lanes on nearly 300 miles in the Seattle/Puget Sound area. Such networks are also being considered in Los Angeles, Washington, D.C., and the Miami-Ft. Lauderdale area. Congestion pricing has raised equity concerns among the public and elected officials. 
In general, an analysis of equity issues examines how costs and benefits of projects are distributed among members of society. In the transportation economics literature, four concepts of equity are cited:

- The extent to which members of the same group are treated equally; for example, whether some people with the same income pay a larger amount in taxes or fees.
- The extent to which those who benefit from a project, such as a new lane, pay for those benefits; for example, is the lane paid for by a toll on users or by a state sales tax paid in part by persons who may not use or benefit from the lane?
- How the costs and benefits of a project are distributed across members of different groups, such as high- and low-income people; for example, whether all groups pay in proportion to their income or whether low-income people pay proportionally more of their income for tolls than high-income people.
- The extent to which those who impose social costs bear those costs; for example, whether polluters or drivers on crowded highways pay the full social cost of their driving, or, if a toll causes diversion from the tolled highway to adjacent neighborhoods, whether those neighborhoods incur the costs of pollution and crowding.

While recognizing that all of these concepts of equity cited in the literature may be important, public and elected officials’ concerns regarding congestion pricing have centered primarily on the latter two concepts, in particular what is termed income equity and geographic equity. Income equity refers to whether the costs of congestion pricing that users incur are proportional to their incomes, or whether low-income drivers are disproportionately affected. For example, low-income drivers may spend a greater proportion of their income to pay to travel at preferred times or incur greater costs in travel time by choosing alternate unpriced routes. 
High-income drivers, who, economists generally believe, place a higher value on their time, may be more likely to pay the toll and benefit from a faster trip than low-income drivers, thus possibly generating income equity concerns. Geographic equity refers to how equally the costs and benefits associated with congestion pricing are distributed within an affected metropolitan area. For example, if one corridor in a metropolitan area has congestion pricing and another does not, drivers in the tolled corridor may incur greater costs than drivers in the untolled corridor because of the tolls they pay or the increase in travel time they incur by choosing an alternate route. Furthermore, drivers who choose to avoid the tolls and take an alternate route may contribute to congestion on the alternate route. Such diversion of traffic from tolled routes within a corridor can reduce the performance of the alternate untolled routes and negatively affect surrounding neighborhoods. Issues of equity are further complicated if this traffic is diverted through low-income and minority communities. The transportation economics literature also suggests that the equity impacts of congestion pricing be assessed in comparison to alternatives—namely the predominant sources of funding roadways, such as motor fuel and sales taxes. Comparing these sources could address whether those who benefit from a project, such as a new lane, pay for those benefits; for example, is the lane paid for by users of the facility or by persons who may not use or benefit from the lane? According to the Transportation Research Board, it may be the case that tolling and pricing provide a more equitable means of funding roadways than these other alternatives. 
We have reported that tolling is consistent with the “user pay” principle because tolling a particular road and using the tolls collected to build and maintain that road more closely link the costs with the distribution of the benefits that users derive from it. As a general rule, charging tolls on highways constructed with federal funds is prohibited. However, Congress has enacted several exceptions that authorize DOT to permit tolling in certain instances. DOT Helps Facilitate Congestion Pricing through Project Approvals and Funding for Implementation, Monitoring, and Evaluation DOT approves all congestion pricing projects on any roadway that receives federal funds. DOT approval grants the project sponsor permission to have congestion pricing on newly constructed roadways and lanes and converted HOV lanes through three programs. DOT also approves design exceptions and environmental reviews that allow for pricing on federally funded roads. DOT has awarded funding to study, implement, and evaluate congestion pricing projects, and its programs require monitoring and evaluation of pricing projects, although the level of detail varies by program. When applicable, DOT oversees projects and certifies that program performance standards have been met. DOT Approves Tolling, Project Design Exceptions, and Environmental Reviews on Federally Funded Highways Congress has authorized DOT to approve tolling, which can include congestion pricing, through three programs. Table 1 provides a summary of the three DOT congestion pricing programs and the number of operational or under-construction congestion pricing projects authorized under each program. DOT has also approved design exceptions for certain highway projects that include congestion pricing. 
DOT has approved exceptions to highway standards to allow for changes to highways to increase capacity within the existing right of way or “footprint.” The Florida Department of Transportation received design exceptions for I-95 in Miami to convert parts of the median and shoulder lanes and to narrow other lanes from the standard 12 feet to 11 feet to make two HOT lanes in each direction. The Minnesota Department of Transportation has received design exceptions to convert shoulder lanes for electronic tolling and bus service during peak periods on I-35W in Minneapolis. This lane also serves as a HOT lane for solo drivers who pay a toll during the same period. The Minnesota Department of Transportation’s design exceptions included changes in lane width and shoulder width as well as advisory speed limits. In accordance with the National Environmental Policy Act of 1969, as amended (NEPA), and its implementing regulations, as well as Executive Order 12898, DOT reviews projects to assess their anticipated environmental and socioeconomic impacts and to determine their need for any additional reviews. For projects deemed to have significant environmental impacts, an Environmental Impact Statement must be prepared. When it is unclear whether a project will have significant environmental impacts, an Environmental Assessment must be prepared. Environmental impacts may include effects on air, noise, water quality, wildlife, and wetlands. Additionally, projects may be required to undergo an environmental justice assessment to determine their impacts on low-income and minority populations. Projects that a federal agency has previously determined to have no significant environmental impacts may receive a categorical exclusion, meaning that they do not have to complete an Environmental Impact Statement or Environmental Assessment to comply with NEPA. 
DOT has approved categorical exclusions for congestion pricing projects that do not lead directly to construction and changes in the facility’s “footprint” in accordance with NEPA implementing regulations, along with projects that include new electronics and communications systems for tolling. According to project sponsors that we interviewed, pricing projects that have not changed a facility’s “footprint,” such as HOV to HOT lane conversions or peak-period pricing on already tolled highways, bridges, and tunnels, have received categorical exclusions. In addition, projects that have narrowed the width of lanes and converted medians and shoulder lanes that have not involved changing the “footprint” of the highway have received categorical exclusions. DOT Has Provided Funds for Studies, Implementation, and Evaluations of Congestion Pricing Projects DOT has provided funds to promote congestion pricing through several programs that involve tolling—the Urban Partnership Agreement (UPA) and Congestion Reduction Demonstration (CRD) programs and the Value Pricing Pilot Program (VPPP). UPA and CRD, the largest programs that involve tolling, advance congestion pricing through funding awards from 10 separate grant programs. As part of one-time initiatives, the UPA and CRD participants—Seattle, Washington; San Francisco, California; Minneapolis-St. Paul, Minnesota; Miami-Ft. Lauderdale, Florida; Los Angeles, California; and Atlanta, Georgia—were provided approximately $800 million through grant programs to implement tolling as well as transit, technology, and telecommunications strategies to reduce congestion. Funds have been used to build new HOT lanes, convert HOV lanes to HOT lanes, establish electronic tolling systems, and purchase buses for express bus service on HOT lanes. In addition, DOT has provided about $100 million in grants for studies, implementation, and some evaluations of congestion pricing projects through VPPP since it was established in fiscal year 1998. 
Nearly all congestion pricing projects in operation have received VPPP funds at one time or another for these purposes. About a third of total VPPP grants were awarded to fund three of the six UPA participants—Seattle in fiscal year 2007 and Minnesota and San Francisco in fiscal year 2008. Congress authorized $11 million in fiscal year 2005 and $12 million per year for fiscal years 2006 through 2009 for projects that involve highway pricing, of which $3 million per year was set aside for nontolling projects, such as parking and car sharing projects. See appendix III for a list of VPPP grants and activities from fiscal years 1999 through 2010. San Francisco’s SFPark uses congestion pricing by adjusting parking meter and garage prices up or down based on the demand for parking. Drivers can receive real-time information about where parking is available and at what price, using personal mobile devices such as iPhones. This “demand-responsive” pricing encourages drivers to park in underused areas and garages, reducing demand for parking in overused areas. VPPP funds have been used to:

- Study the potential of pricing in a corridor or region or the feasibility of a particular pricing project. Studies have examined the benefits of implementing variable pricing on an already tolled facility such as the Florida Turnpike in Miami-Dade County and the Pennsylvania Turnpike near Pittsburgh and Philadelphia. VPPP-funded studies have also examined the feasibility of extending HOT lanes such as I-15 in San Diego, California.
- Implement elements of projects. Lee County, Florida, used a grant to purchase transponder readers for electronic tolling on two of its bridges. The Washington State Department of Transportation used VPPP funds to install electronic tolling technology on the SR 520 bridge that determines changes in tolls based on congestion.
- Evaluate specific projects. 
Evaluations studied the results and challenges of implementing pricing projects, including SR 91 in Orange County, California; I-15 in San Diego, California; I-394 in Minneapolis, Minnesota; and I-10 and U.S. 290 in Houston, Texas. In addition to the funding provided through VPPP, federal funding is available for congestion pricing projects through other programs. For example, federal credit assistance available under the Transportation Infrastructure Finance and Innovation Act program has been used to help finance construction of HOT lanes for seven projects, including those on I-495 in Virginia, I-635/I-35E in Texas, and I-595 in Florida. In addition, states receive nearly $40 billion a year in federal funding for highways through a series of grant programs collectively known as the Federal-Aid Highway Program. These grant programs have also been used to help finance the construction of congestion pricing projects. DOT Requires Performance Monitoring for All Its Toll Programs and Compliance with Performance Standard for Its HOV Facilities Program The HOV Facilities program requires project sponsors to annually monitor and report HOT lane traffic speeds and is the only DOT tolling program that requires project sponsors to meet an annual performance standard. In the case of an HOV facility with a speed limit greater than 50 miles per hour, vehicles must be able to travel at least 45 miles per hour 90 percent of the time during weekday morning and evening peak hours over a 180-day period. If this standard is not met, the road operator must make changes to bring the facility back into compliance. Such changes could include raising tolls on paying cars or changing carpooling requirements to achieve the standard. DOT established this speed requirement because HOV lanes, by law, are transit “fixed guideway” facilities that encourage transit use and thus traffic must maintain speeds compatible with express bus service. 
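The performance standard just described (at least 45 miles per hour, 90 percent of the time, during weekday peak hours over the 180-day monitoring period) reduces to a simple compliance check. The sketch below is a minimal illustration, assuming the operator has collected one peak-hour average speed per monitored day; the function name and sample data are hypothetical:

```python
# Minimal sketch of the HOV Facilities performance check: the facility
# complies if vehicles travel at or above 45 mph during at least
# 90 percent of weekday peak-hour observations in the monitoring period.

def meets_performance_standard(peak_speeds_mph, min_speed=45.0,
                               required_share=0.90):
    """Return True if the share of observations at or above min_speed
    is at least the required 90 percent threshold."""
    if not peak_speeds_mph:
        raise ValueError("no speed observations for the monitoring period")
    ok = sum(1 for s in peak_speeds_mph if s >= min_speed)
    return ok / len(peak_speeds_mph) >= required_share

# 180 hypothetical daily peak-hour averages: 171 days (95%) at or above
# 45 mph, 9 days below — the facility meets the standard.
speeds = [55.0] * 171 + [40.0] * 9
print(meets_performance_standard(speeds))  # True
```

When such a check fails, the operator must bring the facility back into compliance through measures like those noted above, such as raising tolls on paying vehicles or tightening carpool requirements.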
Nearly all HOV to HOT lane conversions in operation have been authorized under either the HOV Facilities program or VPPP (and its predecessor, the Congestion Pricing Pilot Program). Projects that were authorized under HOV Facilities have this performance requirement. DOT monitors the reported performance of HOT lanes authorized under the HOV Facilities program. According to DOT officials, there has not been a case in which a HOT lane has not met the standard. Pub. L. No. 109-59, § 1604(b)(7)(A). Because four of the Express Lanes Demonstration (ELD) projects are under construction and construction has not begun on the fifth, no performance reporting for completed ELD projects currently exists. DOT also requires project sponsors that receive VPPP funds to monitor and evaluate the performance of their projects so that the agency can report results biannually to Congress as required by statute. Project sponsors report five categories of effects—(1) driver behavior, traffic volumes, and travel speeds; (2) transit ridership; (3) air quality; (4) equity for low-income individuals; and (5) availability of funds for transportation programs. As with the ELD program, projects that receive VPPP grants are not required to meet specific performance standards. Under the UPA and CRD programs, DOT has provided funds to the Battelle Memorial Institute to conduct an independent national evaluation of the effectiveness of the program’s four congestion reduction strategies—tolling, transit, technology, and telecommuting. Projects will be assessed individually and results compared across all projects in the six metropolitan areas according to specific metrics. These metrics include reductions in congestion delay and duration; increases in the number of cars and people in cars (i.e., vehicle and passenger throughput); and shifts to travel during off-peak times, among other factors. See appendix IV for a list of performance and monitoring requirements for federal programs for congestion pricing projects. 
Project Evaluations Have Generally Shown Reduced Congestion, but Other Effects Have Not Been Consistently Assessed Evaluations of 14 congestion pricing projects in the United States have generally shown reduced congestion, although other results are mixed, and not all possible relevant effects have been assessed. HOT lane projects, which aim to improve the flow of traffic and increase throughput, have generally shown reduced congestion, increased vehicle throughput, increased speeds, and decreased travel times in both the priced and unpriced lanes. Some HOT lane projects have added new lanes and thus, for these projects, the effects of pricing on performance have not been distinguished from the effects of the added lane. In addition, although the number of cars using HOT lanes has risen, there were fewer people in the cars—a fact attributed to an increase in the share of toll-paying solo drivers or a decrease in carpooling on HOT lanes. Peak-period pricing projects that aim to reduce congestion by encouraging drivers to travel at off-peak times have shifted some drivers to travel during those times. Other effects of congestion pricing projects, such as income equity impacts, have not always been evaluated. Evaluating these impacts is important to address public and elected officials’ concerns about the effects of pricing on travelers and communities. Not evaluating these effects leads to an incomplete understanding of the full effects of pricing. Of the project sponsors that have operational congestion pricing, eight have a current and completed evaluation of at least one of their projects, for a total of 14 evaluated projects. These eight evaluations assess five HOT lane projects and nine peak-period pricing projects, as shown in figure 2. For a description of our objectives, scope, and methodology in analyzing the congestion pricing projects, see appendix I. 
Because of differences in project objectives and in DOT’s monitoring and evaluation requirements, the completed evaluations vary in which aspects of performance they report. No evaluation has assessed the performance of congestion pricing across projects. In addition, the evaluations represent an assessment of the results of the projects at specific points in time. Had the evaluations been ongoing or repeated at different intervals, the results might have differed. The most common measures used across the evaluations have been travel time and speed, throughput, off-peak travel, transit ridership, and equity. Table 2 lists and defines the five performance measures most commonly used in congestion pricing project evaluations. Evaluations of HOT lane projects, which are designed to improve travel time and speed, have shown improvements. Both travel time and travel speed improved on at least some sections of all five HOT lane projects that were evaluated. Sometimes the improved travel times in the HOT lanes also led to improved travel times in the adjacent unpriced lanes because solo drivers paid to switch to the HOT lanes. For example, on SR 167 in Seattle, peak-hour travel speeds on the adjacent unpriced lanes increased as much as 19 percent compared with travel speeds in 2007, while speeds on the HOT lanes remained about the same, averaging the speed limit of 60 miles per hour. According to an evaluation of I-15 in San Diego, drivers in the HOT lanes reportedly saved up to 20 minutes more than drivers in the adjacent unpriced lanes during the most congested times. Neither project included a new lane. Two other HOT lane projects—on I-95 in Miami and SR 91 in Orange County—included two new lanes in each direction, which also helped improve travel times and speed. 
On I-95, for example, which Florida Department of Transportation officials identified as the most heavily congested highway in the state before pricing began in 2008, the evaluation reported that drivers saved about 14 minutes in the HOT lanes and 11 minutes in the adjacent unpriced lanes per trip. Evaluations of the I-95 and SR 91 projects did not, however, isolate the effects of the added lane and pricing on performance. Isolating such effects is challenging because even if a study accounts for the increased vehicle throughput on the new lanes, it may then understate the throughput the other lanes could have handled if the new lanes had not been added. Evaluations of the nine peak-period pricing projects reported no effects on travel time and speed, although only two of these evaluations—of the New Jersey Turnpike and the Lee County bridges—analyzed these effects. Although travel times on the New Jersey Turnpike improved from 2000 to 2001 when electronic tolling and peak-period pricing were introduced at the same time, the project evaluation attributed the improvement mostly to electronic tolling and not to pricing. Evaluations of all five HOT lane projects reported an increase in vehicle throughput—as measured by traffic volumes—on the HOT lanes and sometimes on the adjacent unpriced lanes and attributed this increase to both congestion pricing and the addition of new lanes. For example, according to a 2006 evaluation of the I-394 project in Minneapolis, vehicle throughput increased by 9 to 13 percent in the HOT lanes and by 5 percent in the adjacent unpriced lanes after the lanes opened. A 2000 evaluation of the SR 91 project in Orange County estimated that vehicle throughput increased 21 percent on the entire roadway. Four of the five HOT lane project evaluations that tracked the average number of people in a car (known as average vehicle occupancy) showed a decrease in the number of passengers per car. 
Thus, while there were more cars using the HOT lanes, there were, on average, fewer people in the cars, which project sponsors attributed to an increase in the share of toll-paying solo drivers or a decrease in carpooling in the HOT lanes. In addition, evaluations of two projects assessed passenger throughput, which takes into account the number of people riding buses as well as the average vehicle occupancy rate, to estimate the total number of people moved through the roadway. According to the evaluations, passenger throughput on I-15 in San Diego increased slightly between 1997 and 1998, and then decreased between 1998 and 1999, and passenger throughput on I-95 in Miami increased 42 percent between 2008 and 2010 on the HOT lanes—a result the evaluation attributed to an increase in toll-paying solo drivers, transit ridership, and the addition of two HOT lanes. Evaluations of peak-period pricing projects found no increase in throughput due to congestion pricing. Specifically, evaluations of the New Jersey Turnpike and Lee County bridges assessed the impact of pricing on traffic volume and found no changes in vehicle throughput or average vehicle occupancy due to pricing that differed from overall traffic trends. To evaluate two of the five HOT lane projects—I-15 in San Diego and SR 91 in Orange County—project sponsors surveyed drivers to determine whether they changed their trips to travel at off-peak times. According to the I-15 survey results, some traffic shifted from the middle of the peak rush hour to the “peak-shoulder” times—the times directly before and after peak periods. However, the sponsors did not explain why this shift occurred. Drivers surveyed for the SR 91 evaluation said that the level of congestion affected their travel time decisions more than the presence of the toll. 
Project sponsors that did not study shifts to off-peak travel times said they did not do so because, in one case, the sponsor did not see the HOT lanes as offering incentives that would encourage peak-period travelers to shift their travel to off-peak periods, and, in another case, the sponsor was not required to study shifts to off-peak travel but would consider doing so in the future. Sponsors of peak-period pricing projects conducted more robust studies of off-peak travel because it was a more explicit goal of their projects. These studies showed some success in reducing congestion during peak times. Evaluations for two of the three peak-period pricing projects showed that drivers chose to take trips at off-peak times on highways, bridges, and tunnels to take advantage of discounted tolls. For example, according to a 2005 performance evaluation of traffic on bridges and tunnels into New York City conducted by the City University of New York for the Port Authority of New York and New Jersey, car and truck traffic increased in off-peak periods for most crossings. The evaluation reported more significant improvements in the morning before the peak periods than after the peak periods at the end of the day, which the evaluation attributed to drivers finding it easier to arrive at work early than to arrive at work later. According to a survey conducted as part of this evaluation, a majority of drivers had little flexibility to change their schedule to travel at off-peak times. For example, truck drivers said that they could not adjust their delivery schedules to travel at off-peak times. Furthermore, drivers said that the toll difference of $1 was not great enough to influence them to change their travel time. Despite this, a significant minority could alter their travel departure times by between 30 minutes and 2 hours. Thirty-five of the 505 surveyed drivers, representing 7.4 percent of passenger trips, said that they changed their travel behavior as a result of the project. 
According to the Port Authority, the 7.4 percent change in passenger trips to off-peak times is significant, since small changes can have exponential effects because of traffic queuing. An evaluation of peak-period pricing on the New Jersey Turnpike based on a driver survey found that work schedules or a desire to avoid traffic created a greater incentive for determining when to travel than a slightly lower toll for off-peak travel. Thus, it appears that drivers chose to travel at off-peak times because of congestion and not because of modest differences in price. Evaluations of four of the five HOT lane projects assessed changes in transit ridership, but results were mixed. I-95 in Miami was the only one with demonstrated increases in transit ridership. Between 2008 and 2010, the average weekday ridership on the I-95 express bus increased by 57 percent, from about 1,800 riders in 2008 to more than 2,800 in 2010. About 38 percent of these riders reported in a transit rider survey that they used to drive alone. The other three HOT lane project evaluations found no increase in transit ridership on buses using the HOT lanes as a result of the project. While transit ridership reportedly increased on I-15 in San Diego, the evaluation stated that this increase was not linked to the project. Evaluations of all three peak-period pricing projects assessed whether drivers shifted to transit, but none found evidence of any changes in transit ridership. Many efforts have been made to assess the effects of congestion pricing projects on equity, including income equity (the distribution of costs and benefits of congestion pricing between low- and high-income drivers) and geographic equity (the relative effects of congestion pricing on two geographic areas, including the effects of any traffic diversion). 
Three of the eight evaluations, covering one HOT lane project and three peak-period projects, attempted to assess both income and geographic equity, and none attempted to assess other effects on equity, such as whether members of the same group are treated differently or to what extent the beneficiaries of a project, such as a new lane, pay for those benefits. Evaluations for four of the five HOT lane projects attempted to assess equity through surveys or focus groups of travelers concerning their use of congestion pricing projects; however, different elements of equity were evaluated. For example, three of the four HOT lane project evaluations that assessed equity did so by considering the effects of congestion pricing on drivers of different income levels. Results for these three projects—SR 91 in Orange County, I-394 in Minneapolis, and SR 167 in Seattle—indicated that drivers of all incomes used the HOT lanes, but high-income drivers used them more often than low-income drivers. In addition, evaluations for all four HOT lane projects—SR 91 in Orange County, I-394 in Minneapolis, SR 167 in Seattle, and I-15 in San Diego—found that drivers liked having the option of using the HOT lanes and thus were supportive of them. The fifth HOT lane project—I-95 in Miami—has not undergone an assessment of the effect of congestion pricing on low-income drivers because, according to the project sponsor, the benefits of congestion pricing—including increased travel speeds—accrue to all users. Evaluations of the nine peak-period pricing projects considered different elements of equity and found few impacts. However, all of these projects were previously tolled, and toll discounts were offered for travel at off-peak times. Thus, no tolls, including those for peak periods, were raised. 
The New Jersey Turnpike Authority evaluated the income and ethnicity of those who shifted to the off-peak times, while the Lee County Department of Transportation assessed the age, gender, and work schedules of those who shifted to the off-peak times. Both project sponsors also surveyed drivers, who said they thought the off-peak discounts were equitable and the pricing program fair. However, the sample sizes for both these surveys were small; thus, the results may not provide reliable estimates for the various subgroups they measured. Evaluating geographic equity—or the effects of any traffic diverted from HOT lanes or from peak-period priced highways, bridges and tunnels onto unpriced lanes and roads—would provide decision makers with information about potential negative effects, such as whether traffic on the unpriced alternatives increased. The sponsor of one of the five HOT lanes projects—SR 91 in Orange County—studied diversion and reported that traffic was drawn to the roadway and its HOT lanes because the priced lanes were new and added capacity. Sponsors of the other four HOT lane projects did not evaluate traffic diversion for several reasons, according to the sponsors. First, drivers can choose to drive in the unpriced lanes and none of the projects took away an unpriced lane—only HOV lanes were converted. Furthermore, two of the HOT lane projects added a lane to the roadway and allowed solo drivers to use a previously underused HOV lane. As a result, the sponsors said they expected drivers to be diverted to the HOT lanes, not away from them. Second, even if they had anticipated traffic diversion to alternative roads, the sponsors said they would not have surveyed drivers or asked them to maintain travel diaries because these methods were expensive and challenging to implement. They added that electronic data collection methods, such as GPS tracking in vehicles, transponder tracking, and license plate tracking, can be expensive and raise privacy issues. 
Two peak-period pricing project sponsors—the New Jersey Turnpike Authority and the Lee County Department of Transportation—studied traffic diversion to adjacent unpriced roads and found no evidence of diversion. According to the studies, such diversion would not be likely for these roadways because there are no comparable alternative routes. Furthermore, the projects were previously tolled and congestion pricing was implemented with off-peak discounts. As for the HOT lane projects, traffic diversion may be less of a concern if a highway, bridge, or tunnel was previously priced than if it was previously unpriced. Sponsors of one HOT lane project—SR 167 in Seattle—and one peak-period pricing project—the New Jersey Turnpike—evaluated the impact of pricing on minorities. An environmental justice assessment for SR 167 found that there would not be a disproportionate effect on minorities because there was a small minority population in the area, the project was limited to 9 miles southbound and 11 miles northbound, and there were unpriced alternatives—adjacent lanes and roads—that could be used. The 2005 New Jersey Turnpike evaluation found that there would be no disproportionate effect on minority populations; however, as mentioned before, the survey sample size was small and therefore its results cannot be generalized to all users. Evaluations of three HOT lane projects—I-15 in San Diego, SR 91 in Orange County, and I-394 in Minneapolis—and one peak-period pricing project—the New Jersey Turnpike—assessed environmental effects. All four evaluations assessed the impacts of pricing on air quality, and one HOT lane project—I-394—also assessed noise impacts. The air quality assessments, designed to test whether air quality improved as experts said it could with fewer cars idling in traffic, showed mixed results: minimal air quality improvements were reported on I-15, I-394, and the New Jersey Turnpike, but no effects on SR 91. 
The noise impact assessment on I-394 found that there would be no significant noise impacts resulting from pricing. UPA Evaluations Should Improve Understanding of the Performance and Impacts of Congestion Pricing The completed performance evaluations provide some information as to the effectiveness of congestion pricing, but the UPA and CRD evaluation framework has the potential to provide decision makers with a more consistent and comprehensive picture of the effects of pricing. The UPA and CRD evaluations, which began in 2009, will collect data on UPA and CRD projects in the six metropolitan areas for 1 year before a project is implemented and then for another year after it has been implemented to assess its effects. The evaluation framework will provide standard performance measures such as travel times and vehicle throughput and help develop a more comprehensive study of congestion pricing, including increased monitoring of passenger throughput and socio-economic information of HOT lane users. Performance measures and detailed metrics will be used to assess individual projects and to compare results across projects, which has not been done consistently so far. The evaluation framework will also assess the impacts of pricing on low-income drivers and changes in their travel time and distance traveled as a result of pricing. Travel diary surveys will be conducted at two of the UPA sites, which may provide some basis for studying equity impacts, including whether driver behavior changes based on income, such as diverting to adjacent unpriced roads. In addition, the evaluation framework will include surveys to assess transit ridership for all projects that have a transit element. Despite the potential for greater understanding of the effects of congestion pricing as part of the UPA and CRD evaluation framework, evaluations have been completed in only one of the six metropolitan areas—and only for its first phase. 
Thus we cannot assess the evaluation framework or surveys’ effectiveness until they are completed. DOT expects to have all of the UPA project evaluations completed by 2014. Greater Equity and Safety Issues Might Develop as New Projects Are Implemented Expanded Use of Pricing Could Raise Equity Concerns Income and geographic equity concerns may become more prevalent as congestion pricing becomes more widespread. Such concerns could be particularly relevant for HOT lane projects with a potential for large toll increases, such as projects that must meet HOV Facilities program performance requirements to maintain traffic speeds of 45 to 55 miles per hour. Tolls on pricing projects in operation are relatively low, but can be as high as a dollar per mile. Though currently capped, these tolls could be raised if necessary to maintain the required traffic speeds. In turn, higher toll rates could lead to more traffic diversion if drivers chose not to pay for HOT lanes and took adjacent unpriced lanes or roads instead. These concerns may be particularly acute in the future for projects designed to use pricing not only to manage congestion but also to meet toll revenue targets. SR 520 in Seattle, which began pricing in December 2011, will generate toll revenues to pay for bonds to build a replacement bridge. All cars, including carpools, pay a toll that varies up to $5.00. Registered vanpools, express buses, and emergency vehicles have free use. Because this is the first project to toll a previously untolled bridge and there are parallel alternative routes, traffic diversion may become a concern. According to traffic models from the area’s transportation planning council, traffic could increase on the parallel Interstate route by 5 to 8 percent and on an alternative state road by 5 percent. 
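The dynamic-pricing logic described above—raising tolls as needed to keep HOT lane traffic moving at 45 to 55 miles per hour—can be sketched as a simple feedback rule. The speed band reflects the HOV Facilities program requirement cited in the report, but the function name, adjustment step, toll floor, and toll cap below are hypothetical values chosen for illustration, not any project's actual algorithm.

```python
def adjust_toll(current_toll, measured_speed_mph,
                target_min=45, target_max=55,
                step=0.25, floor=0.50, cap=9.00):
    """Illustrative feedback rule for dynamic HOT-lane pricing.

    Raises the toll when measured speeds fall below the target band
    (too many vehicles buying in) and lowers it when speeds are above
    the band (spare capacity). The step size, floor, and cap are
    hypothetical values for illustration only.
    """
    if measured_speed_mph < target_min:
        return min(current_toll + step, cap)
    if measured_speed_mph > target_max:
        return max(current_toll - step, floor)
    return current_toll

# Sustained congestion pushes the toll upward toward the cap:
toll = 0.50
for speed in (42, 40, 38):  # three consecutive slow readings
    toll = adjust_toll(toll, speed)
print(toll)  # 1.25
```

A rule of this shape also illustrates why tolls could rise sharply on a facility that must hold the required speeds: the cap is the only brake on the adjustment, so lifting the cap lets the toll climb as high as demand requires.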
Geographic equity concerns could be minimized by introducing tolling on both the Interstate and the state road because drivers on all three routes would then pay a toll, and diversion from tolled to untolled routes would be less of a concern. However, according to officials in the Seattle metropolitan area, the public and elected officials are opposed to tolling these other routes. Several other projects under construction involve public-private partnerships that plan to use toll revenues to pay for construction debt, operations, and maintenance, and to provide a return to private investors. However, meeting revenue targets can be at odds with policies to increase throughput on highways and bridges by encouraging more people to use carpools and express bus service. According to one expert, project sponsors seeking to maximize revenue could in theory charge a higher toll and make more money from fewer paying vehicles; raising revenue in this way could be at odds with managing congestion (e.g., increasing passenger throughput). In addition, as we have previously reported, tolls on roadways operated by private concessionaires can be expected to be higher than on comparable facilities operated by public agencies. Three projects—I-495 in Northern Virginia and the LBJ Express (I-635/I-35E) and North Tarrant Express (I-820-SH 121/183) in Dallas-Ft. Worth—are public-private partnerships in which the private operator sets the toll rate. The I-595 project in Broward County, Florida, is also a public-private partnership, but the state department of transportation has retained the authority to set the toll rate. In addition, I-95 in Miami has raised its carpool occupancy requirement for free use from two to three passengers. Thus, two-passenger carpools pay a toll to use the HOT lanes or use the adjacent unpriced lanes. 
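The tension noted above between revenue and throughput follows directly from the shape of demand: hourly revenue is toll times vehicles, so if higher tolls deter relatively few drivers, revenue rises even as throughput falls. A minimal numeric sketch, using a hypothetical linear demand curve (the base demand and deterrence rate below are invented for illustration, not drawn from any project):

```python
def vehicles_per_hour(toll, base_demand=2000, deterred_per_dollar=150):
    """Hypothetical linear demand curve: each $1 of toll deters 150
    vehicles per hour. All values are illustrative."""
    return max(base_demand - deterred_per_dollar * toll, 0)

def revenue_per_hour(toll):
    return toll * vehicles_per_hour(toll)

low, high = 2.00, 5.00
# A higher toll moves fewer vehicles but can collect more revenue:
assert vehicles_per_hour(high) < vehicles_per_hour(low)  # throughput falls
assert revenue_per_hour(high) > revenue_per_hour(low)    # revenue rises
print(vehicles_per_hour(low), revenue_per_hour(low))     # 1700.0 3400.0
print(vehicles_per_hour(high), revenue_per_hour(high))   # 1250.0 6250.0
```

Under these assumed numbers, raising the toll from $2 to $5 cuts throughput from 1,700 to 1,250 vehicles per hour while nearly doubling hourly revenue, which is the conflict the expert describes between revenue maximization and moving more people.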
According to the Florida Department of Transportation, it has not studied the impact of pricing on two-passenger carpools and therefore does not know how many people have been affected. Pricing projects to date have generally involved the conversion of HOV lanes or the addition of new lanes; the SR 520 bridge in Seattle is the only previously unpriced facility to be fully priced. According to experts, pricing existing unpriced lanes and roadways could lead to geographic equity concerns as drivers divert to alternate routes to avoid tolls. While future pricing projects may raise equity concerns, these concerns should be weighed against the potential benefits, including enhancing economic efficiency, increasing throughput, and reducing congestion. For example, greater use of pricing could enhance economic efficiency by discouraging solo driving and making alternatives such as carpooling or taking transit more appealing, thus resulting in more efficient use of existing roadways. Such changes in drivers' behavior could also improve throughput and reduce congestion. In addition, as previously discussed, the equity impacts of congestion pricing can also be assessed in comparison to the equity concerns raised by the alternatives—namely, the prevalent sources of funding roadways, such as motor fuel and sales taxes. A number of options are also available to address equity issues. One such option is to use a portion of toll revenues for alternative transportation modes in the highway corridor, such as express bus service on HOT lanes. In general, bus riders are disproportionately lower-income individuals who would benefit from both reduced congestion on the HOT lanes and increased transit investments from toll revenues. In a survey of Seattle residents, public support for tolling the SR 520 bridge grew substantially if a portion of the toll revenue was dedicated to transit, even if tolls had to be significantly higher to pay for transit service. 
As part of its UPA program funding, Seattle has received grant funds for new buses to begin service on the SR 520 bridge. Other UPA program participants, including Minnesota and Miami, have received DOT grants for express bus service on HOT lanes. Under the ELD program, Federal law permits the use of excess toll revenue for eligible highway and/or transit service; however, pricing projects generally have not had excess revenues. Officials with the transportation planning council for the Minneapolis-St. Paul area told us that revenues from the I-394 project have generally not exceeded the project’s operational costs and therefore local transit funds are being used, as they were before the project was initiated, to provide express bus service on the HOT lanes. The San Diego Association of Governments also has used bus fare revenue to fund express bus service on I-15 in San Diego because the HOT lanes have not generated enough toll revenues to pay for bus service. In addition to the availability of revenues, the effectiveness of providing revenues to transit to address equity also depends on the availability of transit service and traveler commuting patterns. Transit may not be an option for some travelers given the location of homes, jobs, and other travel destinations. I-25 in Denver has taken steps to address equity among passengers and drivers on the HOT lanes by setting its peak period toll rate based on the fare charged to express bus passengers on the HOT lanes. Thus, as the fare for express bus passengers increases, the toll rate for drivers to use the HOT lanes also increases. An additional option to address equity issues would be to use toll revenues to reimburse low-income drivers, whether by exempting them from paying tolls or by providing them with a tax credit for the difference between the toll and transit fares. However, such reimbursement programs would involve complex efforts to determine and verify drivers’ low-income status. 
Nonetheless, some states such as California offer discounted utility rates for eligible households, and transportation agencies in these states have considered whether to use these preexisting eligibility and enforcement mechanisms to provide discounted toll rates. However, no pricing project has used this option. Some experts have noted that discounts on tolls for low-income drivers would counteract the goal of reducing congestion because the discounts would encourage continued driving. Instead, they propose charging the same tolls to all users but returning revenues to affected groups, such as groups of vehicle owners or a class of residents. VPPP federal funds were used to study the potential of providing credits for low-income highway users in Alameda County, California, but no pricing project has used this option either. To provide insight on environmental justice issues, DOT approved two VPPP grants for fiscal years 2010-2011 to assess the impacts of pricing on low-income drivers. One grant, for I-30 in Dallas-Ft. Worth, is to examine environmental justice issues related to pricing I-30 through the use of Intelligent Transportation Systems technology. According to DOT, “the project is important because it will provide more data on environmental justice and pricing, given that there is little experience with strategies designed to address these issues related to the introduction of pricing.” The other grant, for a pricing project in Hartford, Connecticut, is to study the application of pricing and the environmental justice impacts that resulted from the original construction of a project. Potential Safety Issues May Occur with Pricing Converted Shoulder Lanes and Narrowing Lanes Many highway projects increase the capacity of a roadway by converting shoulders or narrowing lanes, an approach that eliminates the need to widen the highway and acquire additional property—a particular advantage in urban areas. 
Several congestion pricing projects have employed this strategy. For example, the Minnesota Department of Transportation has converted bus-only shoulder lanes on I-35W in Minneapolis to serve as HOT lanes during peak periods and has narrowed traffic lanes from 12 feet to 11 feet. In addition, as previously noted, the Florida Department of Transportation has incorporated portions of its medians and shoulders and narrowed traffic lanes from 12 feet to 11 feet to create two HOT lanes on I-95 in Miami. Moreover, two new projects—Loop 1 in Austin and I-94 in Minneapolis—will create new capacity by using shoulders and narrowing traffic lanes. The Loop 1 project will reduce shoulder width and incorporate parts of the shoulders to create an express lane, while the I-94 project may, if implemented, use shoulders and narrowed traffic lanes to create a HOT lane between Minneapolis and St. Paul. According to the project sponsor, I-94 does not have the space to build new HOT lanes if shoulders are not incorporated. Projects that convert shoulders and narrow lanes to create new lanes, including congestion pricing projects, raise concerns about driver safety and highway operations that transportation planners must address. Converting shoulders to additional lanes removes the safety refuge areas motorists use during vehicle breakdowns and emergencies, and such conversions must be approved by DOT. According to the American Association of State Highway and Transportation Officials, highways with paved shoulders have lower accident rates. Paved shoulders provide space to make evasive maneuvers, accommodate driver error, and add a recovery area to regain control of a vehicle, among other things. In addition, an analysis by the Highway Safety Information System, sponsored by DOT, reported that narrowing lanes or using shoulders to expand urban highways increased accidents by 10 percent. 
With traffic moving faster in the additional (or HOT) lane and slower in the unpriced lanes, the potential for sideswiping and lane-changing accidents increases. Another study conducted by the Texas Transportation Institute found that maintaining a shoulder and a wider HOV lane than adjacent unpriced lanes can help mitigate safety concerns. FHWA officials in Florida suggested that some of these safety issues could be mitigated using an incident management system, such as the one the Florida Department of Transportation has used on I-95 since 2008. As a condition of approving several design exceptions, FHWA required the Florida Department of Transportation to implement Intelligent Transportation Systems to mitigate safety issues related to incorporating a shoulder and narrowing lanes on I-95 in Miami. While using an incident management system does not prevent incidents from occurring, cameras can survey highways and detect incidents such as accidents, debris, and stalled vehicles. Highway message signs then convey information to drivers about the incidents and traffic conditions. Florida Department of Transportation staff can summon emergency, police, and tow truck crews to resolve problems and direct traffic. Thus, in the event of an incident on the HOT lanes, the lanes can be closed and, because there are no permanent barriers between the HOT lanes and the adjacent unpriced lanes, traffic can be diverted to the adjacent unpriced lanes. In the event of an accident in the unpriced lanes, the lanes can be closed and traffic diverted to the HOT lanes and tolls temporarily lifted. Since the HOT lanes on I-95 opened for traffic in December 2008, preliminary safety data found that the number of reported incidents involving accidents, debris, and stalled vehicles in the northbound express lanes increased from 132 in fiscal year 2009 to 209 in fiscal year 2010. 
Florida Department of Transportation officials have suggested, however, that the actual number of incidents may not have increased but that incidents are now tracked more accurately. In addition, according to FHWA officials in Florida, the number and severity of crashes declined after the I-95 HOT lanes became operational. In their view, the HOT lanes decreased congestion, and as a result, fewer rear-end crashes occurred. Additionally, FHWA officials stated that they have not seen evidence of more sideswiping since the traffic lanes were narrowed to form the HOT lanes. The UPA and CRD National Evaluation will assess the safety impacts of pricing, including the number of accidents and their severity and any change in the perception of safety by travelers and emergency personnel since pricing began. Concluding Observations Although traffic congestion has declined recently in many metropolitan areas, future demand for travel during peak times is expected to increase as the population grows and the economy recovers. Fiscal and environmental concerns prevent building new capacity in many metropolitan areas. Transportation decision makers have a variety of traffic demand management tools, including road or congestion pricing, to more efficiently operate and manage their infrastructure. Pricing has the potential to reduce congestion by influencing drivers to carpool, use transit, or drive at off-peak travel times. Congestion pricing has, where evaluated, helped reduce congestion. However, it is difficult to draw overall conclusions about the effectiveness of pricing because only half the sponsors with projects now open to traffic have evaluated their projects. Other results, where available, are mixed, as project sponsors have used different measures to assess performance and little has been done to compare performance across projects. 
Where congestion pricing projects have also added lanes, the results of pricing have not been distinguished from the results of adding capacity. Finally, congestion pricing’s impact on traveler behavior and equity has yet to be fully explored. Congestion pricing in the United States is in its relative infancy. With about 400 miles of priced lanes in operation, which includes 150 miles of the New Jersey Turnpike, pricing has not been implemented beyond a limited number of locations. However, its popularity is growing. New projects under construction and in planning will not only increase the number of roadway miles that use congestion pricing, they will also change the character of pricing in the United States, as some will be operated privately and some will add congestion-priced tolls to previously nontolled roadways. The changing character of congestion pricing and the new challenges it brings make improving the understanding of congestion pricing even more important. While a more complete understanding of the potential benefits and effects of congestion pricing is needed, we are not making a recommendation in this report because the evaluations conducted through the UPA and CRD programs are an important step to furthering understanding of the relevant benefits and effects of pricing. These evaluations of pricing projects address reservations we have about gaps in knowledge about such projects—for example, these evaluations will compare results across projects to assess the effectiveness of congestion reduction strategies and assess several measures of equity. In addition, monitoring and reporting on the five ELD projects could also provide better information about the performance of pricing and its effects. However, only one of the six UPA and CRD metropolitan sites has been evaluated and only for its first phase and four of the five ELD projects are under construction. 
As such, we cannot assess the evaluation framework or its results until projects and their evaluations are complete. Agency Comments DOT provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to congressional subcommittees with responsibilities for surface transportation issues and the Secretary of Transportation. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology Our work focused on the performance of congestion pricing projects on highways, bridges, and tunnels in the United States and issues associated with developing and implementing pricing projects. We examined (1) the federal role in supporting congestion pricing, (2) results of congestion pricing projects in the United States, and (3) emerging issues in congestion pricing projects. Our scope was limited to assessing congestion pricing projects in the United States that involved passenger vehicles. We did not review other types of congestion pricing such as priced parking facilities. We collected information on pricing projects directly from the 19 project sponsors of the 41 operational or under-construction congestion pricing projects and received comments and validation of data from project sponsors. 
To address the federal role in supporting congestion pricing, we reviewed pertinent legislation and regulations; prior GAO reports and testimonies; and relevant documents from the U.S. Department of Transportation (DOT), state departments of transportation, and metropolitan planning organizations (MPO). This included policy documents from the Federal Highway Administration (FHWA) and various public presentations made by FHWA officials. We interviewed FHWA and Federal Transit Administration (FTA) officials, officials from state DOTs and MPOs, experts from academia and policy institutions, the Congressional Research Service (CRS), and the Congressional Budget Office (CBO). Our discussions with FHWA included DOT's programs that involve tolling—the Value Pricing Pilot Program (VPPP), High Occupancy Vehicle (HOV) Facilities, Express Lanes Demonstration, Section 129 program, the Interstate System Reconstruction and Rehabilitation Toll Pilot Program, and the Interstate System Construction Toll Pilot Program. We also discussed the Urban Partnership Agreement (UPA) and Congestion Reduction Demonstration (CRD) programs—one-time initiatives that were established with 10 separate federal grant programs. We discussed the eligibility and performance monitoring requirements of the federal programs and associated projects and verified this information with DOT documents. We also collected data on VPPP, UPA, and CRD program funding from DOT and corroborated the status of the project data in interviews with FHWA. We analyzed the VPPP, UPA, and CRD program funding to assess how federal funds are used to support congestion pricing projects. We interviewed sponsors of pricing projects that received federal funds on how the funds were used and with what results. We reviewed relevant FHWA environmental assessment manuals and interviewed FHWA officials about the environmental review process. 
We also interviewed FHWA field staff and project sponsors regarding their individual projects' environmental review processes. To determine the results of congestion pricing projects in the United States, we reviewed performance evaluation reports from eight project sponsors that covered 14 of the 30 operational pricing projects. The remaining project sponsors did not have current and completed evaluations because their projects had opened too recently for sponsors to have evaluated them or because projects had changed significantly in character since they were studied, and thus the original evaluations are no longer relevant. In addition, one project evaluation did not use pre- and post-implementation data to measure performance, and thus the impact of pricing could not be measured. Two project sponsors (one sponsor of SR 261, SR 241, and SR 133 in Orange County, California, and one sponsor of SR 73 in Orange County, California) did not perform evaluations of their peak-period priced highways. Two project sponsors (one for I-15 in Salt Lake City, Utah, and one for I-10 and US 290 in Houston, Texas) completed performance evaluations; however, these three highways have significantly changed in character since then, and thus the original evaluations are no longer relevant. For example, I-15 in Salt Lake City currently uses electronic tolling with dynamic pricing, whereas a monthly decal with a static toll was used when the project was evaluated. I-10 was evaluated as one HOT lane with free access for carpools of three passengers but is now two HOT lanes with free access for carpools of two passengers. In addition, although US 290 was assessed in the I-10 evaluation, no pre- and post-implementation data were used to measure the impact of pricing on performance. The project sponsor of I-25 in Denver, Colorado, reported on monthly counts of traffic volumes and other measures but did not do so before and after pricing was introduced; therefore, we were unable to use the data to compare with other projects. 
Three project sponsors (I-680 in Alameda County, California; the San Francisco-Oakland Bay Bridge in California; and MD-200 in Montgomery County, Maryland) implemented projects recently and have not had adequate time to evaluate them. Three other projects—I-35W in Minneapolis, Minnesota, which has the same project sponsor as I-394; SR 520 in Seattle, Washington, which has the same project sponsor as SR 167; and I-85 in Atlanta, Georgia—became operational recently and have also not been evaluated. I-35W, SR 520, and I-85 will be assessed as part of the UPA evaluations. The eight project sponsors with current and completed evaluations we reviewed were I-95 in Miami, Florida; I-15 in San Diego, California; SR 91 in Orange County, California; SR 167 in Seattle, Washington; I-394 in Minneapolis, Minnesota; the New Jersey Turnpike, New Jersey; two bridges in Lee County, Florida; and four bridges and two tunnels managed by the Port Authority of New York and New Jersey. For each, we assessed the studies' methodology to determine whether the reported data were valid and sufficient for our analysis. However, we did not conduct a thorough assessment of the quality of the evaluations' methods because our objective was to assess the projects' performance results where available. Once we determined which data were sufficiently reliable for our uses, we summarized the results reported in each performance evaluation for each project. The eight performance evaluations assessed various performance measures such as traffic speed, travel time, throughput, and transit ridership. We focused our analysis on the following performance measures: travel time and speed, throughput, off-peak travel, transit ridership, and equity. As a basis for deciding which performance measures to review, we used a list of performance measures, some of which are outlined in DOT's UPA and CRD National Evaluation Framework. 
We chose these measurement areas because they were the most commonly reported in the evaluations and because they are the most commonly required measures for projects with federal monitoring requirements. We corroborated our choice of measures with FHWA and the Battelle Memorial Institute, which is conducting the UPA evaluations using similar measures. We then compared the projects by qualitatively assessing the results for the five performance measures listed above and counting how many of the projects reported positive or negative results in each performance measurement category. We could not quantitatively compare results across projects because they did not use the same metrics and thus were assessed and reported differently according to project sponsor preference and resources. Project evaluations covered specific time periods, and thus performance results apply only to those time periods. Projects' performance results may have changed since the evaluations were completed. We discussed with the Battelle Memorial Institute, the Texas Transportation Institute, and the Volpe National Transportation Systems Center the UPA and CRD evaluation framework, including its performance measures and metrics, as well as the challenges of conducting an evaluation across multiple projects. We also conducted site visits for the SR 167 and SR 520 projects in Seattle, Washington; the I-95 project in Miami, Florida; and the I-394 and I-35W projects in Minneapolis-St. Paul, Minnesota. We selected our site visits based on a judgmental sample of projects with completed evaluations; these sites included both HOT lane and peak-period priced projects in different geographical areas of the United States. For each site visit, we met with relevant officials from the state DOT, officials from the FHWA division office, project sponsors, and officials from local agencies such as the MPO and transit agencies. 
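The qualitative cross-project comparison described above—tallying how many projects reported positive or negative results per measure, since differing metrics ruled out quantitative comparison—can be sketched as a simple count. The project names and coded results below are invented placeholders for illustration, not GAO's actual findings.

```python
from collections import Counter

# Hypothetical coded results: for each project, each measure is rated
# "positive", "negative", or None (not reported). Values are
# placeholders, not actual evaluation findings.
results = {
    "Project A": {"travel_time": "positive", "throughput": "positive",
                  "transit_ridership": None},
    "Project B": {"travel_time": "positive", "throughput": "negative",
                  "transit_ridership": "positive"},
    "Project C": {"travel_time": "negative", "throughput": None,
                  "transit_ridership": "positive"},
}

def tally(results, measure):
    """Count positive/negative findings for one measure across projects,
    skipping projects that did not report that measure."""
    return Counter(r[measure] for r in results.values()
                   if r.get(measure) is not None)

print(tally(results, "travel_time"))  # Counter({'positive': 2, 'negative': 1})
```

The `None` entries matter: because not every sponsor reported every measure, each tally has a different denominator, which is one reason the report can only characterize results qualitatively rather than average them across projects.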
Discussion with project sponsors included clarifying the goal of the pricing projects and evaluations of the project performance. In addition to conducting interviews, we collected relevant documents, including environmental analyses, performance evaluations, and traveler surveys, and analyzed these documents as necessary. Where appropriate, we corroborated the interviews with documents obtained from project sponsors and FHWA. To identify the emerging issues in congestion pricing projects, we reviewed literature on congestion pricing, equity, environmental justice, traffic diversion, safety and other topics related to the benefits, costs, and trade-offs associated with congestion pricing. We reviewed prior GAO reports and analyses and reports from the FHWA, CRS, CBO, and industry experts and organizations that have evaluated the impacts of congestion pricing projects. We discussed our review of the reports with FHWA and FTA officials, officials from state DOTs and MPOs, and transportation experts from academia and think-tanks. We identified and interviewed experts with published work on congestion pricing and its impacts. Discussions with officials and experts included the costs and benefits of congestion pricing projects, trends in pricing designs and implementation, and methods to mitigate negative impacts. We also provided a copy of the draft report to a group of experts for an independent review. We selected these experts because they have published numerous studies analyzing the benefits and challenges of congestion pricing and its effects that are prominent in the transportation literature, and come from a cross section of institutions including academia, research organizations, and the private sector. We considered and incorporated their comments into the final report as appropriate. 
Group of Experts that Reviewed Draft Congestion Pricing Report:
Todd Litman, Researcher, Victoria Transport Policy Institute
Lee Munnich, Professor, Senior Fellow and Director, State and Local Policy Program, Hubert H. Humphrey School of Public Affairs, University of Minnesota
Robert Poole, Jr., Director of Transportation Policy, Reason Foundation
Joseph L. Schofer, Professor, Northwestern University
Brian Taylor, Professor of Urban Planning/Director, Institute of Transportation Studies, Luskin School of Public Affairs, UCLA
David Ungemah, Senior Planning Manager, Parsons Brinckerhoff
Martin Wachs, Senior Principal Researcher, RAND Corporation
We conducted this performance audit from October 2010 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Congestion Pricing Projects Open to Traffic in the United States, 2011
[Table not reproduced: for each project, the table lists the date pricing became operational (the earliest, December 1995), the miles in operation (from 7 to 40, several with extensions under construction), and the pricing type and toll range (dynamic pricing from $0.30 to $13.95; variable pricing from $0 to $5.00).] The New Jersey Turnpike discontinued off-peak discounts to out-of-state vehicles as of July 2011. As of September 2011, 1 mile of the MD 200 extension is under procurement.
Appendix III: DOT’s Value Pricing Pilot Program Grants, Fiscal Years 1999 through 2010
[Table not reproduced: grant awards by fiscal year, 1999 through 2010, with a total awarded for the period.] Funded studies and projects included the following:
Express Lanes System Concept Study.
Strategies to Manage Traffic and Parking.
Strategies to Manage On-Street Parking and Reduce Congestion From Circling Vehicles.
Evaluate the application of cordon/area pricing within major activity centers in the downtown Los Angeles core and build out a network of HOT lanes.
Study the application of pricing on the I-84 Viaduct, Hartford, CT, including assessing the impacts of environmental justice issues that resulted from the original construction of the viaduct.
Study the application of full facility pricing to the I-95 Corridor from New York to New Haven, CT, and identify how toll revenues would be applied to provide strong support for transit.
Evaluate two-tiered pricing on an existing toll facility and develop performance measures to track the changes in congestion, air quality, safety, livability, and other factors that would result.
Study a transit credit program designed to provide occasional free use of the HOT lane for regular transit users when they need to drive, and a parking pricing program at a park-and-ride lot with free parking and shuttle services added from a more distant lot.
Influencing Travel Behavior and Considering Environmental Justice: would examine important environmental justice issues related to pricing I-30 through the use of innovative Intelligent Transportation Systems (ITS) technology. The project is important because it will provide more data on environmental justice and pricing, given that there is little experience with strategies designed to address environmental justice issues related to the introduction of pricing. This project was selected because it met the statutory eligibility criteria and was highly qualified for the above stated reasons.
Therefore, this project meets the evaluation criteria for innovation, equity, and congestion reduction.
183A Turnpike Pilot Downstream Impacts: the pilot implements a peak-period toll in conjunction with dynamic ridesharing on an existing congested toll road and would explore applying dynamic ridesharing as an equity mitigation strategy. An actual field trial is included as part of the project. The road opens in 2012, and the local agencies are contributing their own funds to support the project. Therefore, this project meets the evaluation criteria for innovation, livability, sustainability, equity, congestion reduction, safety, and state of good repair.
[Table totals not reproduced: total approved for funding for fiscal years 2010-2011 and total awarded and approved for fiscal years 1999-2011.] Fiscal year 2008-2009 grants were part of the one-time Urban Partnership Agreement initiative.
Appendix IV: Performance and Monitoring Requirements for Federal Programs for Congestion Pricing Projects
Express Lanes Demonstration program performance metrics (annual reporting to U.S. DOT required):
Report the percentage of time the facility is operating at a minimum average speed of 50 mph, broken down into daily averages for a.m. peak, off-peak, and p.m. peak hours.
Report changes in mode split, ridership, and vehicle occupancies of priced versus general purpose (or adjacent free) lanes, and lane availability for the managed lanes during this time, including the length of time each such lane was unavailable.
Report the number of declared High Occupancy Vehicles (HOV) for the year and differences from the previous year (on a total and percentage-change basis), broken into daily averages by a.m. peak and p.m. peak for the managed lanes.
Report the number of buses (i.e., registered non-revenue accounts) for the year and differences from the previous year (on a total and percentage-change basis), broken into daily averages by a.m. peak, off-peak, and p.m. peak for the managed lanes.
Report the average toll charged for the year and differences from the previous year (on a total and percentage-change basis), broken into daily averages by a.m. peak, off-peak, and p.m. peak for the managed lanes.
Report the change in criteria pollutant emissions at the regional level (particle pollution, ground-level ozone, carbon monoxide, sulfur oxides, nitrogen oxides, and lead) during the current year and differences from the previous year (on a total and percentage-change basis), utilizing reasonably available and reliable air quality reporting tools and mechanisms.
Performance measure: traffic speed on the priced lane to maintain express bus service. Performance metric: report a minimum average operating speed of 45 mph on HOV lanes with a speed limit over 50 mph, or not more than 10 mph below the speed limit for HOV lanes with a speed limit less than 50 mph; this speed must be maintained for 90 percent of the time over 180 days during morning or evening weekday peaks. Performance and monitoring requirement: annual certification required by U.S. DOT; if performance standards are not met, actions must be taken to comply.
Performance and monitoring requirements for other federal programs: no certification required by U.S. DOT, as no specific performance standards must be met; however, participants are required to report on “value pricing” elements for up to 10 years. Multi-year project-specific and national evaluations, which U.S. DOT is overseeing, are also required.
UPA and CRD evaluation performance measures include the following:
Percent change in route/corridor travel time by time of day
Percent change in the travel time index for comparisons across sites (having corridors of differing lengths)
Percent change in the number of hours of the day with congested conditions and the number of congested travel links per day
Percent change in average travel speeds by hour of the day
Percent change in travel time reliability and planning time index
Percent change in vehicle and person trips by time of day, and person and vehicle throughput
Change in traveler perceptions about congestion after deployment of strategies
Level of service in tolled lanes
Travel-time reliability in tolled lanes
Average occupants per vehicle in tolled lanes versus general purpose lanes
Use of tolling options
Traffic density in tolled lanes
Travel-time reliability (seasonally controlled)
Days exceeding reliability and performance thresholds
End-to-end travel time
Service reliability
User ratings of service performance
Corridor ridership
Corridor mode split (%)
Equity analysis: socio-economic and geographic distribution of benefits, including tolls paid and adaptation costs; change in travel time and distance by group; total transportation cost; environmental impacts
Public perception of the individualized equity impacts of pricing
Spatial distribution of revenue reinvestment
Reduction in criteria pollutants
Reduction in noise
Reduction in vehicle miles traveled
Qualitative assessment of perceived benefits of the transit improvements
Reductions in estimated fuel use
Use and impact of alternative fuel vehicles
Percentage change in crash rate by type and severity
Percentage change in time to clear accidents
Change in the perception of safety by service patrol operators, state patrol officers, medical first responders, and bus operators
Change in the perception of safety by travelers
This is a sample of UPA and CRD performance analyses that include measures and metrics related to the effects of
congestion pricing. Other UPA and CRD analyses will look at telecommuting/travel demand management and technology as well as impacts on businesses, goods movement, and nontechnical success factors.
Appendix V: GAO Contact and Staff Acknowledgments
In addition to the contact named above, Steve Cohen (Assistant Director), Maureen Luna-Long, Sarah Jones, Thanh Lu, Michael Kendix, Sara Ann Moessbauer, James Wozny, Elizabeth Eisenstadt, Crystal Wesco, and Josh Ormond made key contributions to this report.
Many Americans spend frustrating hours each year stuck in traffic. While estimates vary, the Department of Transportation (DOT) estimates that traffic congestion costs the United States $200 billion each year and that more than one-quarter of total annual travel time in metropolitan areas occurs in congested conditions. Road pricing or congestion pricing—assessing tolls that vary with the level of congestion or time of day—aims to motivate drivers to share rides, use transit, travel at less congested times, or pay to use tolled lanes. Since the first U.S. congestion pricing project opened in 1995, 19 project sponsors have put 41 pricing projects in operation or under construction. About 400 miles of priced highway lanes, including nearly 150 miles on the New Jersey Turnpike, are in operation today, with current tolls varying from 25 cents to $14. All U.S. projects in operation are either High Occupancy Toll (HOT) lanes, which charge solo drivers to use newly constructed lanes or carpool lanes, or peak-period pricing projects, which charge a lower toll on already tolled roads, bridges, and tunnels during off-peak periods. GAO examined (1) the federal role in supporting congestion pricing, (2) results of U.S. congestion pricing projects, and (3) emerging issues in congestion pricing. Eight project sponsors have current and completed evaluations on at least one project, for a total of 14 evaluated projects, all of which GAO reviewed. GAO interviewed officials about the performance of their pricing projects and their effects. DOT provided technical comments, which GAO incorporated as appropriate. DOT approves all congestion pricing projects on roadways that receive federal funds and provides grants for project studies, implementation, and evaluation. Nearly all HOT lane projects and most peak-period pricing projects in operation today received federal funds at one time or another.
DOT’s largest programs for congestion relief, the Urban Partnership Agreement and Congestion Reduction Demonstration programs, have provided grant funds totaling nearly $800 million since 2006 to six metropolitan areas to implement pricing and other strategies. DOT requires sponsors of congestion pricing projects to monitor and evaluate performance and, for HOT lanes when applicable, to ensure that a federal standard for minimum traffic speeds is met. The 14 congestion pricing projects that have current and completed evaluations generally show that pricing can help reduce congestion, although other results are mixed, and not all potentially relevant impacts have been assessed. HOT lane projects, which aim to reduce congestion by decreasing travel time and increasing speed and the number of vehicles using the lane, have reduced congestion, but some HOT lane projects also added new lanes, and studies did not distinguish the extent to which performance improvements were due to added lanes or to pricing. In addition, although the number of cars using HOT lanes has risen, there were fewer people in those cars because of an increase in the proportion of toll-paying solo drivers or a decrease in carpools. Peak-period pricing projects, which aim to reduce congestion by encouraging drivers to travel at off-peak times, have shifted some travel to those times. Other congestion pricing effects—such as equity impacts—have not always been evaluated. Potential concerns include income equity (whether low-income drivers are disproportionately affected by congestion pricing) and geographic equity (whether one geographic area is more negatively affected than another, such as when traffic diversion occurs). These impacts are important to assess because they address the public’s and elected officials’ concerns about the effects of pricing on travelers and communities.
Ongoing multi-year evaluations across six metropolitan areas will assess the performance and effects of congestion pricing projects using a specific set of measures to assess the effectiveness of congestion reduction strategies. Concerns about equity may grow as pricing projects become more widespread. New projects are under construction, and several metropolitan areas have networks of HOT lanes planned that will expand the relatively limited use of pricing today. Equity concerns may become more acute where sponsors are using pricing not only to manage congestion, but also to raise revenue to build new projects. Raising revenue can be at odds with managing congestion (e.g., increasing passenger throughput) if higher tolls can produce more revenue from fewer paying vehicles. Options to address equity issues include using a portion of toll revenues to finance public transit service.
Background The Identity Theft and Assumption Deterrence Act of 1998 made identity theft a federal crime. Although FTC does not have the authority to bring criminal cases, the act established FTC as the federal clearinghouse for identity theft complaints. FTC is required to keep a log of such complaints and to notify consumers that their complaints have been received. In response to this requirement, in November 1999 FTC established the Identity Theft Data Clearinghouse to gather information from consumers who file complaints or inquire about identity theft. FTC inputs this information into its Consumer Sentinel database, which is used by more than 1,000 law enforcement agencies. According to FTC, the number of identity theft complaints it has received has steadily risen each year— climbing from 31,000 in 2000 to 247,000 in 2004. FTC staff noted that the increase in reported instances of identity theft may in part reflect enhanced consumer awareness and willingness to report such crimes. However, not all identity theft victims contact FTC. No single federal law enforcement agency has primary jurisdiction over identity theft crimes. Identity theft is not typically a stand-alone crime but rather a component of one or more crimes such as bank fraud, credit card fraud, social program fraud, tax refund fraud, and mail fraud. For example, a fraudster might steal another individual’s personal identifying information in one city and use the information to commit credit card fraud and mail fraud in another city or state. Consequently, a number of federal law enforcement agencies can have a role in investigating identity theft crimes, including the Federal Bureau of Investigation (FBI), Internal Revenue Service, U.S. Postal Inspection Service, U.S. Secret Service, and the Social Security Administration’s Office of the Inspector General. The Department of Justice (DOJ) prosecutes federal identity theft cases. 
The FACT Act of 2003, among other things, strengthened victims’ rights with respect to identity theft and gave FTC and businesses a larger role in dealing with identity theft crimes. The act also highlighted the need for law enforcement agencies to assist victims in documenting their identity theft crime. Section 609(e) requires business entities to provide victims of identity theft with information on fraudulent business transactions—for example, copies of applications for credit or records of purchases. In the past, businesses have been reluctant to provide such information, fearing their potential exposure to lawsuits for the inappropriate disclosure of sensitive personal financial information. To address this concern, Congress added a provision protecting businesses from civil liability claims for disclosing such information. In addition, the FACT Act reinforces the need for police to assist victims in taking official reports. These reports can then be used to substantiate claims of identity theft when alleged victims request, for example, copies of business records involving instances of potential fraud. The FACT Act also requires that FTC develop a model summary of rights for consumers who believe that they are victims of identity theft (see app. II). CRAs are required to provide a substantially similar version of the model summary of identity theft victim rights to any consumer who “contacts a consumer reporting agency and expresses a belief that the consumer is a victim of fraud or identity theft.” As previously mentioned, the act mandated that FTC launch a public campaign on how to prevent identity theft by December 2005, but did not specifically require that section 609(e) be included in the campaign. 
Some Efforts to Increase Awareness of Section 609(e) Were Under Way as of June 2005 At the time of our review, some efforts to educate consumers, business entities, and local enforcement officials about their rights and obligations under section 609(e)—notably efforts undertaken by the FTC, U.S. Postal Inspection Service, International Association of Chiefs of Police, and National Credit Union Administration—were under way. We found that while most of the federal agencies and law enforcement agencies and other groups that we contacted were engaged in outreach related to identity theft issues, those efforts generally did not have a component specifically addressing section 609(e). In particular, FTC staff told us that outreach for section 609(e) would be part of broader efforts to educate the public, business entities, and law enforcement officials about identity theft and FACT Act provisions and that outreach on 609(e) would increase beginning in December 2005 as part of its public identity theft campaign. As of June 2005, outreach efforts by a variety of interest groups, including businesses and their trade groups, federal law enforcement agencies, and banking regulators, were also just beginning. Most of these groups saw FTC as having primary responsibility for outreach. FTC’s Outreach Initiatives on the FACT Act Provisions Are Ongoing and Expected to Increase FTC staff told us that they have undertaken a number of outreach efforts to educate the public, law enforcement, and others on the FACT Act, including section 609(e). However, FTC staff explained that section 609(e) is only one tool in the resources available to victims for remedying the effects of identity theft and that FTC’s first priority is to make those affected aware of all of the relevant provisions contained in the FACT Act. 
Since 2004, FTC, in conjunction with federal law enforcement agencies, has cosponsored six conferences and presentations geared directly to local law enforcement, which included a discussion of the FACT Act. In addition, since January 2004 FTC has participated in more than 50 conferences, seminars, and presentations on the FACT Act involving attorneys, bar associations, business trade groups, financial institutions, and state regulators. FTC staff told us that these outreach efforts addressed increasing the awareness of section 609(e) provisions, as appropriate to the particular audience. Other FTC outreach efforts on section 609(e) include links on FTC’s Web site at www.consumer.gov/idtheft to its model summary of identity theft victim rights and other information on identity theft; a toll-free hotline (1-877-IDTHEFT) offering counseling to help consumers who want or need more information about dealing with the consequences of identity theft; and FTC’s Consumer Response Center and Distribution Office, which provide publications on identity theft, among other topics. For example, FTC recently updated its identity theft booklet, renamed Take Charge: Fighting Back Against Identity Theft, to incorporate the FACT Act requirements, including section 609(e). Further, FTC has included section 609(e) in the mandated summary of identity theft victim rights that the CRAs began distributing in January 2005. At the time of our review, this summary was the primary mechanism for providing consumers with information on their right to obtain business records on potentially fraudulent transactions. According to FTC staff, FTC’s Web site on identity theft included links to section 609(e) information as of the provision’s effective date of June 1, 2004; this material was subsequently integrated into an updated identity theft Web site.
According to FTC staff, section 609(e) will also be incorporated into the public education campaign that FTC is required by the FACT Act to implement by December 2005, which will help increase outreach on the provision. FTC staff told us that the agency has conducted outreach on privacy, consumer reporting, identity theft, and related legislation and regulations for several years and that the initiatives target consumers, businesses, and law enforcement agencies. FTC conducts its identity theft education campaign through its Web site, printed publications, conferences and presentations, syndicated news articles and newscasts, training sessions, communications with state attorneys general, and visits to high school and college campuses. It also holds seminars for small businesses that may not be active with trade groups. Additionally, the agency provides counseling over the telephone to consumers who contact FTC with complaints or inquiries about identity theft. FTC staff explained that the agency attempts to leverage its resources to conduct outreach—that is, it relies on consumers, law enforcement agencies, and businesses to spread the relevant information it provides among one another. FTC staff stated that the mandated public education campaign would build upon and become a component of FTC’s ongoing consumer and business education campaigns. FTC staff stated that the mandated campaign will be under way by the December 2005 deadline and will include coverage of section 609(e). The staff told us that in terms of its identity theft outreach to consumers, FTC’s priority is for consumers to know that FTC is the organization that consumers should contact if they need information or assistance with identity theft problems. They added that requesting business transaction records under section 609(e) is not the first step an identity theft victim takes to restore his or her credit.
According to FTC staff, the campaign will focus on all aspects of identity theft prevention and remediation as well as informing consumers, businesses, and law enforcement agencies about the new rights and responsibilities discussed in section 609(e) and other provisions that are useful to consumers. At the time of our review, FTC could not provide us with information on the exact extent of coverage that section 609(e) would receive in FTC’s outreach materials to consumers and businesses. FTC did provide us with a copy of its identity theft public education solicitation dated June 1, 2005. According to the solicitation, one of the expected targets of the campaign is identity theft victims, with the goal of assisting those victims in the recovery process by teaching them the steps to take to reclaim their good names. FTC plans to evaluate the effectiveness of the public outreach campaign using various means. According to the solicitation, the program plan for the public education campaign is expected to include strategies for monitoring and evaluation of program results. FTC staff also told us that they intend to monitor and evaluate the results of this contract through traffic to the identity theft Web site, publications distribution, and identity theft complaints to FTC. Identity Theft Outreach Efforts by Others Are Also Under Way, but Few Specifically Address Section 609(e) We found that a variety of outreach efforts on identity theft were under way or were being planned by businesses, trade groups, law enforcement agencies, federal banking regulators, and consumer groups. We were able to obtain only limited information on efforts undertaken by businesses and their trade groups and associations. A few business trade group representatives told us that they had been active in reaching out to their constituents on identity theft issues through presentations and newsletters.
These representatives told us that it was likely that the level of awareness among midsize and large business entities regarding the FACT Act and section 609(e) was greater than among small businesses, because larger businesses were more likely to belong to trade groups and associations and to have more internal legal resources. The representatives also told us that at the time of our review business entities and their trade groups were focused on other issues likely to have a more direct impact on their operations than section 609(e), such as working with Congress on bankruptcy legislation, which was ultimately passed and became law on April 20, 2005. Most of the federal law enforcement officials that we contacted had general identity theft outreach efforts under way that included some form of outreach on the FACT Act, although few specifically included information that addressed section 609(e). Rather, most of these groups focused on general identity theft prevention rather than on the section 609(e) provisions. Officials from one federal law enforcement agency indicated that their identity theft outreach efforts that include FACT Act provisions were in the initial stages of implementation. These efforts are designed to reach consumers, businesses, and law enforcement entities. For example, as previously discussed, federal law enforcement agencies such as DOJ, FBI, the U.S. Postal Inspection Service, and the Secret Service have held joint conferences on identity theft, including a discussion of the FACT Act, and invited local police to attend. As shown in figure 1, one effort that did specifically address section 609(e) and was directed to law enforcement agencies was an advertisement developed by the U.S. Postal Inspection Service that prominently noted the new tools for law enforcement and new rights of victims under the FACT Act, including section 609(e). 
The International Association of Chiefs of Police (IACP) has also specifically addressed section 609(e) in the information that it has disseminated on identity theft. An IACP official told us that a lack of awareness among local police departments of the new provision could make it problematic for some identity theft victims to get local police departments to take a police report. The IACP official also emphasized the importance of ensuring that police were aware of the FACT Act and of their responsibility to take reports to validate an identity theft claim. The official added that local police were likely to become increasingly involved in identity theft crimes because these officers are committed to being responsive to the citizens within their communities. To address the perceived lack of awareness among local police, the IACP featured articles on identity theft and the FACT Act, including section 609(e), in its January through April 2005 editions of Police Chief magazine. The IACP official also stated that the association was in the process of finalizing a national report on identity theft that would be released in print, on the Internet, and on a CD, and would be featured at conferences. The report will discuss policies and recommended procedures under current laws and describe the responsibilities, including the FACT Act requirements, of law enforcement. While all of the federal banking regulators provided general identity theft outreach, only the National Credit Union Administration specifically addressed the section 609(e) provision in its identity theft outreach efforts. Each of the regulator’s general identity theft outreach included conducting presentations, posting related information on their Web site, and publishing identity theft literature or brochures. 
For example, the Federal Deposit Insurance Corporation (FDIC) published materials such as Consumer News, which featured articles on identity theft but did not address the provisions of section 609(e). The only regulator that we identified as having specifically addressed section 609(e) was the National Credit Union Administration, which issued a Regulatory Alert in January 2005 informing credit unions about the FACT Act’s provisions, including section 609(e). Some officials noted that the Federal Financial Institutions Examination Council (FFIEC) had recently formed an interagency task force to, among other things, address how federal banking regulators could ensure that regulated institutions were in compliance with the new requirements. These officials added that their agencies had not yet established any outreach efforts specific to section 609(e) because they were waiting for the results of the recently formed FFIEC task force, in order to avoid duplicating the task force’s efforts. Some consumer groups we contacted maintained FACT Act information on their Web sites and educated identity theft victims who contacted them, in some instances by providing telephone counseling and printed publications. Officials from one consumer group acknowledged FTC’s mandated campaign as a key outreach tool and suggested that the campaign should also include initiatives directed to businesses. These officials explained that it was important that business entities understand their obligations and roles under section 609(e). Specifically, the officials stated that these initiatives should involve business groups such as the Better Business Bureau, the U.S. Chamber of Commerce, the National Retail Federation, state retailer associations, and TRUSTe®. FTC staff told us that they often use associations in their outreach as an effective method to help spread information.
The limited anecdotal information that FTC had on victims who attempted to obtain business transaction records related to identity theft suggested that not all businesses were aware of their obligations under section 609(e). According to FTC staff, a few identity theft victims had contacted FTC and reported that they were unable to obtain business transaction records related to the theft of their identity. According to FTC, these instances were caused primarily by businesses’ lack of knowledge about their obligations under the FACT Act. Once FTC informed these business entities about their obligations, the victims were able to obtain the necessary transaction records. Most agencies and groups that we spoke with had done some general identity theft outreach and had planned or already had under way a few efforts that focused on section 609(e), but viewed FTC as having the primary responsibility for providing outreach on the FACT Act, including section 609(e). FTC staff told us that they intend to evaluate the effectiveness of FTC’s mandated identity theft campaign, which will include the 609(e) provisions, but emphasized that FTC’s first priority is outreach to consumers, businesses, and law enforcement on the FACT Act, an effort that would occur over time. As a result, more time is needed to disseminate information about the section 609(e) provisions and determine how useful the provision is in helping victims correct their credit files and resolve their cases. Many Believe the New Provision Will Be Useful, but Some Potential Concerns Were Identified While not all identity theft victims will need section 609(e), FTC, law enforcement agencies and consumer groups with whom we spoke believed that the provision giving victims access to data on fraudulent business transactions would help in resolving identity theft cases. 
In particular, law enforcement agencies told us that the information would help victims build stronger cases to present to law enforcement agencies and should provide more of the data that are needed to identify patterns or trends in identity theft practices. Noting that victims of identity theft often have difficulty getting local police to take a report to help substantiate an identity theft crime, consumer advocacy groups also told us that they believed that the new provision should make filing these reports easier. State agencies and consumer advocacy groups also identified some potential concerns with the provision. Among these were the timeliness of the data provided to victims and a concern that businesses could require excessive documentation from victims to support an identity theft claim.

FTC, Law Enforcement Agencies, and Consumer Groups Believe That the New Provision Will Help Some Victims of Identity Theft

FTC staff told us that, depending on the specific circumstances, not all identity theft victims will need to assert their rights under section 609(e) but that section 609(e) would be extremely useful for those victims who need additional documentation to support their disputes of fraudulent accounts. Representatives from federal law enforcement agencies and the International Association of Chiefs of Police (IACP) said that it was too early to determine whether victims were finding it easier to get local police to take identity theft reports and that local law enforcement agencies might not yet be fully aware of the requirements of this provision. But representatives of federal law enforcement agencies and consumer advocacy groups said that the new provision should help empower victims of identity theft by giving these victims access to data on fraudulent business transactions that could help resolve the crimes. 
The officials explained that before Congress created section 609(e), victims had generally been unable to obtain data on fraudulent business transactions because businesses feared being held liable for providing the information. To address this concern, Congress established limitations in the FACT Act provision so that businesses could not be held liable for disclosing such information to victims. Representatives of businesses we spoke to said that addressing the liability issue in the law had removed the barrier to providing information on allegedly fraudulent transactions to victims of identity theft. One consumer group told us that having records of fraudulent business transactions, such as copies of checks or signed applications for credit, would allow victims to prove that someone else was responsible—for instance, by comparing signatures. Without these records, victims may have no way of proving that the transactions were fraudulent and could be forced to pay the bills themselves. Officials from law enforcement agencies told us that, as an added benefit, victims would be able to gather more information on their cases, which could prompt law enforcement agencies to open an investigation. In turn, law enforcement officials could use that information to assess the nature and scope of alleged crimes of identity theft. Additionally, law enforcement officials anticipated that the information would help investigators build cases more quickly and identify patterns or trends in identity theft practices. For instance, the information could help identify the frequency of certain types of fraud and the locations being targeted, allowing investigators to better determine whether individual crimes were part of a larger operation. However, federal and state law enforcement officials pointed out that having more information might not necessarily lead to an increase in prosecutions. 
In fact, one of the states we contacted that had a similar identity theft law told us that the number of police investigations or prosecutions of identity theft crimes had not increased since the state law had been in effect. The officials explained that workloads and other priorities often determined the types of investigations and prosecutions law enforcement undertake.

Consumer Advocacy Groups Anticipated That the New Provision Would Make Obtaining a Local Police Report Easier for Victims

Consumer advocacy groups we interviewed noted that in the past, victims of identity theft sometimes had difficulty getting local police to take reports about the crimes, although police reports help substantiate victims' claims. As we reported in 2002, getting local police to file a police report is a critical first step in being able to investigate the crime and in undoing the impacts of identity theft. The consumer advocacy groups noted that the new provision will increase pressure on local police to take reports because these reports can play a key role in verifying the identity theft victim's right to access information. These groups pointed out that in California, which has a similar identity theft law already in place, local police who were aware of their obligations under the state law were more likely to take identity theft reports. One consumer advocacy group told us they expect a similar outcome with the FACT Act provision. Additionally, officials from the two states—California and Washington—that have enacted similar identity theft laws agreed that since their laws had been in place, police had generally been more willing to take reports from identity theft victims. Officials from one of the states told us that law enforcement agencies there had also been more active in discussing identity theft issues. 
Representatives of a consumer advocacy group and law enforcement agencies acknowledged that the overall number of police reports charging identity theft crimes was increasing but noted that it was difficult to attribute this increase to any one cause, including the FACT Act. For instance, one consumer advocacy group we spoke with attributed the increasing willingness of local police to take these reports to the fact that identity theft was a growing problem and that the public was generally more aware of it. Officials from law enforcement agencies also pointed out that the difficulty of filing local police reports was only one of the frustrations victims of identity theft faced. For example, the amount of time required to clean up credit and the lack of criminal prosecutions for these crimes are even more frustrating for victims, and both of the issues remain unresolved.

Some State Agencies and Consumer Groups Had Reservations about Portions of the New Provision

Representatives of state agencies and consumer advocacy groups with whom we spoke identified two potential concerns about the provision. First, the provision gives businesses 30 days from the date of a victim's request to provide information on fraudulent business transactions—a time period that some feel is too long. For instance, officials from one state agency and a consumer advocacy group we spoke to stressed the importance of providing information quickly so that victims could begin clearing their credit files and resolving their cases. Several of those we spoke with recommended 2 weeks as a more reasonable length of time for victims to gain access to records and pointed out that states such as California and Washington, which have similar identity theft laws, ask business entities to respond faster. California's privacy laws require that businesses respond within 10 business days of receiving the person's request (which must include a copy of the police report and identifying information). 
Washington’s identity theft law does not specify a time frame for responding to requests for records, but state officials stated that business entities are encouraged to respond within a reasonable amount of time. State officials from both California and Washington noted that victims in their respective states had generally been able to obtain data on fraudulent business transactions within their respective time frames. In contrast, business entities we spoke with believed that there could be complicated situations in which it might be difficult to respond within the 30-day time period. Additionally, representatives from two law enforcement groups said that the 30-day time period appeared to be reasonable. They explained that businesses might need the time to review the request and verify a victim’s identity and added that the 30 days could reflect the reality of running a business with competing priorities. FTC staff said that although they did not know how long businesses were taking to respond to victims, it would be unfortunate if businesses were in fact taking the full 30 days. While these officials agreed that victims needed to obtain information promptly in order to resolve their cases, they noted that the 30-day time period had been established to give businesses additional time to respond to requests if needed. Because the law affects a wide range of businesses, the officials told us, it must allow for a wide range of circumstances. Consumer advocacy groups were also concerned with the discretion the provision gives to businesses to request additional documentation— beyond a police report—as proof of a victim’s claim of identity theft. Under the provision, businesses may require victims to provide a copy of a standardized affidavit of identity theft or an acceptable affidavit of fact as well as a police report. 
FTC, in conjunction with credit grantors and consumer advocates, has developed the Identity Theft Affidavit, a standard form victims can use to report information on, for example, fraudulent accounts that have been opened. The affidavit of fact is a business's own form that a victim uses to document alleged identity theft. However, consumer groups we spoke with said that a police report should be sufficient evidence to verify an identity theft claim and questioned the amount of information businesses actually needed. Representatives of CRAs also pointed out that a broad range of documents could be characterized as identity theft reports. These representatives explained that any law enforcement group, whether civil or criminal, could take an identity theft report, raising concerns about the consistency of the information being reported and the possibility that the credit repair industry could misuse it. Additionally, officials in California and Washington told us that victims of identity theft in their states had experienced difficulties trying to obtain data on fraudulent business transactions immediately after their state laws were enacted. The officials attributed the initial difficulties to the fact that businesses were probably not aware of the new statutes. Officials in California told us that they had developed a template for a letter that victims could send to businesses. The letter provides information both on the law and on penalties for noncompliance and has been effective in getting businesses to comply. Officials in Washington told us that they had provided education to consumers, businesses, and the law enforcement community early on. For instance, the business community was involved in disseminating information on the requirements of the law, and a law enforcement "tool kit" was developed that provided information on the law and criminal provisions. 
As mentioned earlier, we were only able to obtain opinions from a limited number of businesses or industry representatives, including trade associations, on the experiences of businesses in complying with section 609(e) or the expected impact of this provision. Several of the national business and industry representatives we contacted declined our requests for comments because they had limited information to share with us on the likely extent of awareness within the business community on this provision. While we did manage to gather some opinions from a few businesses and associations, the information obtained was extremely limited. FTC staff told us that as part of their overall FACT Act outreach efforts, they intend to monitor the implementation of section 609(e) to determine whether any additional efforts are necessary to ensure that the provision is working as Congress intended. They also stated that they would use their law enforcement authority as appropriate if they determined that a business or businesses were not complying with the provisions of section 609(e).

FTC's Model Summary of Rights Process Has Generally Been Viewed Favorably

Officials and representatives of federal agencies and consumer groups we contacted believe that the FTC's new summary of rights will be useful to victims of identity theft. As mandated by the FACT Act, FTC published its final summary of rights in November 2004, and CRAs began distributing a version of the summary to consumers in January 2005. Federal banking agencies spoke favorably of FTC's process for soliciting comments while the agency was developing the model summary. However, some consumer groups told us that they still had some potential concerns with the final document. These potential concerns included the lack of a requirement that CRAs make the summary available in other languages, specifically Spanish, and the general readability of the summary. 
In response to these potential concerns, FTC stated that while CRAs are not required to provide the summary in other languages, FTC's consumer model summary does contain a statement in Spanish directing consumers to FTC to obtain additional information. FTC has made a Spanish version available on its identity theft Web site. FTC also stated that it had tried to use plain language in the summary, and it recognized the need for additional outreach efforts. We also noted that overall FTC's final summary was more concise and used shorter sentences than its draft summary, resulting in a document that we found generally easy to read.

Federal Banking Regulators Had a Favorable View of FTC's Process of Developing the Model Summary

On November 30, 2004, FTC published its final version of the model summary of identity theft rights as mandated by the FACT Act (see app. II). The summary highlights the major rights FCRA provides to identity theft victims seeking to remedy the effects of fraud or identity theft. These include the right to obtain free file disclosures, the right to file fraud alerts, the right to obtain documents or information relating to transactions involving the consumers' personal information, and the right to prevent consumer reporting agencies from reporting information that is the result of identity theft. As outlined in FTC's guidance, CRAs were to begin distributing by January 31, 2005, a "substantially similar" version of FTC's summary to consumers who believed they had been victims of fraud or identity theft. According to representatives with whom we spoke, these agencies had begun distributing their summaries of identity theft victim rights before this date. The representatives also noted that the summaries distributed were very similar to the FTC's model summary of rights. Under the FACT Act, the FTC was required to consult with the federal banking agencies and the NCUA in preparing the model summary of consumers' rights. 
Federal banking agency officials told us that FTC had effectively promoted collaboration among the regulators in developing the summary of identity theft rights. Federal banking agency officials also stated that FTC solicited comments on two draft versions. The officials told us that although they did not have substantive concerns with either version, they did provide editorial comments. These officials said that they suggested, among other things, avoiding technical terms, using fewer acronyms, shortening sentences, and in general focusing on keeping the summary easy to read by using simple English. Additionally, the federal banking agency officials stated that FTC had substantially incorporated the agencies' input. Officials of law enforcement agencies and representatives of consumer groups whom we contacted believed that the summary should provide useful information for victims of identity theft. For instance, officials from two law enforcement agencies stated that the model summary would be a significant aid to victims. The officials explained that in the past victims had often felt helpless because of the limited avenues available to them in resolving their cases. With the summary of rights, however, victims can learn about concrete steps they can take to help themselves. Similarly, consumer advocacy groups believed that the summary of rights contained information that would be useful to victims of identity theft and added that the document would be among the most important tools in implementing the changes to the FACT Act. These groups also stated that FTC's model summary of rights would be useful in setting the standards for efforts by media and nongovernmental organizations to educate consumers about their credit reporting rights in general.

Some Consumer Groups Remain Concerned about the Availability of Translations and about Readability

Some consumer advocacy groups we spoke with identified two potential concerns with FTC's final model summary of rights. 
First, these groups pointed out that FTC did not require CRAs to make the model summary of rights available in other languages, primarily Spanish, and that access to bilingual information was especially important to those persons whose dominant or sole language is Spanish. According to these groups, the Census 2000 figures indicate that nearly 19.6 million U.S. citizens between the ages of 18 and 64 spoke Spanish and that one-third of this group spoke English “not well” or “not at all.” FTC staff told us that while the CRAs were not required to provide a copy of the summary in other languages, the final summary did contain a Spanish statement telling consumers to contact the FTC for information in Spanish and giving both the agency’s mailing and Web site addresses. A Spanish translation of the summary of rights is available on FTC’s identity theft Web site. Finally, FTC staff told us that FTC targets certain populations in its ongoing public outreach efforts and expects to continue to do so in the context of its mandated public campaign on identity theft prevention. The three nationwide CRAs we contacted provided us with copies of their summaries of rights for identity theft victims that the agencies had begun distributing to consumers in January 2005. Only one of the agencies had made a summary of rights available in Spanish; the other two had placed a Spanish statement similar to FTC’s on their summaries directing consumers to FTC for information in Spanish. Officials from the CRAs told us that they distributed the model summary of rights to consumers who notified them of potential identity theft and not, in general, to every consumer who contacted them. Second, consumer advocacy groups were concerned that the model summary would not be easy to read and understand. 
A comment letter to the FTC from nine consumer advocacy groups said that the model summary of rights should be tested for readability before it was finalized to ensure that it could be easily understood by all consumers, including those with limited education and those who did not speak English as their primary language. The letter stated that having a readable summary was vital to ensuring that consumers were aware of their rights with respect to identity theft, especially those consumers who might not be familiar with the financial services world. One consumer group we spoke with also stressed that readability, which includes the organization of the document and format, was important for any public message. In response to the comments the agency received, FTC's final rule stated that the agency had tried as far as possible to use plain language in the summary and agreed that the notices needed to be supplemented by outreach efforts, which the agency said it intended to undertake. FTC staff also told us that while they did not have the document reviewed by a private readability expert, they did have the document reviewed internally for presentation and clarity by FTC's Office of Consumer and Business Education. In our review of FTC's draft and final summary of identity theft rights, we found that overall FTC's final summary was more concise and used shorter sentences than its draft summary. Several of the comments to FTC had suggested streamlining the information to improve the clarity of the document. As a result, the final summary was generally easy to read.

Conclusions

Section 609(e) is intended to help victims of identity theft obtain access to fraudulent business transaction records that could help in repairing the damage, financial and otherwise, that crimes of identity theft can inflict. However, because section 609(e) has been in effect only a short time (since June 2004), it is too soon to assess the effectiveness of the provision. 
Because efforts to alert consumers, business entities, and local law enforcement agencies to their rights and responsibilities under section 609(e) were in their early stages, it is also too soon to determine the extent of the awareness and use of section 609(e) by these groups. The FACT Act mandates that FTC conduct outreach on identity theft prevention, and most of the groups we contacted felt that FTC should have primary responsibility on identity theft issues. FTC is in a unique position because it already has an existing dialogue with the critical groups involved in section 609(e) through its ongoing outreach efforts on identity theft issues, its interaction with consumers who use its identity theft hotline and consumer complaint database, and its mandated campaign on identity theft prevention. In contrast, no other agency or group maintains public outreach efforts that are as far reaching as the FTC's. FTC intends to assess the effectiveness of its mandated identity theft campaign, which will include coverage of section 609(e). Such an assessment would be useful as a means of determining the extent to which consumers, businesses, and local law enforcement agencies are aware of their rights and obligations under section 609(e), the extent of any implementation issues, and whether the new provision is helping consumers, as intended, to remedy the effects of identity theft. Similarly, experience with victims who have attempted to obtain business records is limited by the short period of time that has elapsed since the act went into effect. It is too early to assess the actual impact of section 609(e) on consumers' ability to get business records relating to suspected fraudulent transactions. While consumer groups and state agencies identified some potential problems with the provision, additional experience and input from identity theft victims will be needed to determine whether these concerns prove to be valid and what, if any, other issues may arise. 
While FTC’s process for developing its mandated model summary of identity theft victim rights was viewed favorably and CRAs had begun distributing a similar version of the summary to consumers, some potential concerns with the summary of rights were noted. These potential concerns center primarily on the limited availability of a Spanish version of the summary of rights and, to a lesser extent, on the clarity of the summary of rights to the general population. While it is too early to determine the extent of any implementation issues, FTC efforts to monitor the implementation of section 609(e) should provide additional information on the usefulness of the summary of rights in aiding identity theft victims. We are sending copies of this report to interested congressional committees and subcommittees; the Chairman, FTC; the Attorney General; the Director, FBI; the Secretary of Homeland Security; the Commissioner, Social Security Administration; the Chief Postal Inspector, U.S. Postal Inspection Service; the Director, U.S. Secret Service; the Chairman, Federal Deposit Insurance Corporation; the Chairman, Board of Governors of the Federal Reserve System; the Acting Comptroller of the Currency; the Acting Director, Office of Thrift Supervision; the Chairman, National Credit Union Administration; and the Secretary of the Treasury. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff who contributed to this report are Harry Medina, Tania Calhoun, Heather Dignan, and Janet Fong. 
Objectives, Scope, and Methodology

Our reporting objectives were to (1) provide information on outreach efforts to consumers, businesses, and local law enforcement agencies on the provision in the Fair and Accurate Credit Transactions (FACT) Act of 2003 that allows identity theft victims to obtain business records relating to fraudulent transactions; (2) describe the views and opinions of relevant federal agencies, private business entities, and consumer groups regarding the expected impact of the provision; and (3) discuss the process used by the Federal Trade Commission (FTC) to develop the model summary of rights of identity theft victims mandated in the FACT Act and examine the opinions of related groups on this process. To address all three objectives, we contacted representatives of FTC and five federal law enforcement agencies that are involved in the investigation and prosecution of identity theft crimes—Department of Justice, Federal Bureau of Investigation, Social Security Administration, U.S. Postal Inspection Service, and U.S. 
Secret Service—and the International Association of Chiefs of Police, which includes the heads of police departments around the country and abroad; met with officials of the five federal banking regulators—Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, National Credit Union Administration, Office of the Comptroller of the Currency, and Office of Thrift Supervision— regarding compliance by federally insured depository institutions with the FACT Act provision and their interaction with consumers on identity theft issues; spoke with representatives of the three national credit reporting agencies (CRAs)—Experian, Equifax, and Transunion—which play a key role in distributing the summary of identity theft victim rights and in helping identity theft victims correct their credit records; held meetings with representatives of two states—California and Washington—that had previously enacted identity theft laws with provisions similar to the section 609(e) to obtain their views on the expected effectiveness of the federal provision; contacted five consumer advocacy groups—Consumers Union, Identity Theft Resource Center, National Consumer Law Center, Privacy Rights Clearinghouse, and U.S. Public Interest Research Group—that were identified as being active in identity theft issues to obtain their views and perspectives as representatives of consumers and identity theft victims; and obtained limited information from a few businesses and trade associations on these subjects. Specifically, we contacted officials from state retailers’ associations in California, Florida, and Texas, as well as the Coalition to Implement the FACT Act which represents a range of trade associations and business entities that furnish and use consumer information, including financial services companies and retail associations. We also attempted to contact other businesses and associations through other groups such as the U.S. 
Chamber of Commerce and a private consultant. However, these businesses and associations declined to offer comments, in some cases citing their limited exposure to these provisions. For all the groups that we contacted, we reviewed information pertaining to identity theft and the FACT Act that was available to consumers on their Web sites. We obtained and examined information associated with their outreach programs. However, we did not perform test calls of FTC's identity theft hotline to determine how the FACT Act provisions had been incorporated. We also did not interview identity theft victims. Additionally, to describe the process FTC used to develop the model summary of rights of identity theft victims required by the FACT Act and the views of groups that commented on the process, we reviewed a variety of documents from the agency and other sources. These documents included FTC's draft and final versions of the model summary, final guidance on model disclosures, public comment letters FTC received on the draft, and other summaries of identity theft victims' rights created by the CRAs. We conducted our work in Washington, D.C., and San Francisco, California, from September 2004 through June 2005 in accordance with generally accepted government auditing standards.

Reprint of FTC's Model Summary of Identity Theft Victim Rights

Para información en español, visite www.consumer.gov/idtheft o escribe a la FTC, Consumer Response Center, Room 130-B, 600 Pennsylvania Avenue, N.W., Washington, D.C. 20580.

Remedying the Effects of Identity Theft

You are receiving this information because you have notified a consumer reporting agency that you believe that you are a victim of identity theft. Identity theft occurs when someone uses your name, Social Security number, date of birth, or other identifying information, without authority, to commit fraud. 
For example, someone may have committed identity theft by using your personal information to open a credit card account or get a loan in your name. For more information, visit www.consumer.gov/idtheft or write to: FTC, Consumer Response Center, Room 130-B, 600 Pennsylvania Avenue, N.W. Washington, D.C., 20580. The Fair Credit Reporting Act (FCRA) gives you specific rights when you are, or believe that you are, the victim of identity theft. Here is a brief summary of the rights designed to help you recover from identity theft. 1. You have the right to ask that nationwide consumer reporting agencies place “fraud alerts” in your file to let potential creditors and others know that you may be a victim of identity theft. A fraud alert can make it more difficult for someone to get credit in your name because it tells creditors to follow certain procedures to protect you. It also may delay your ability to obtain credit. You may place a fraud alert in your file by calling just one of the three nationwide consumer reporting agencies. As soon as that agency processes your fraud alert, it will notify the other two, which then also must place fraud alerts in your file. Equifax: 1-800-525-6285; www.equifax.com Experian: 1-888-EXPERIAN (397-3742); www.experian.com TransUnion: 1-800-680-7289; www.transunion.com An initial fraud alert stays in your file for at least 90 days. An extended alert stays in your file for seven years. To place either of these alerts, a consumer reporting agency will require you to provide appropriate proof of your identity, which may include your Social Security number. If you ask for an extended alert, you will have to provide an identity theft report. An identity theft report includes a copy of a report you have filed with a federal, state, or local law enforcement agency, and additional information a consumer reporting agency may require you to submit. For more detailed information about the identity theft report, visit www.consumer.gov/idtheft. 2. 
You have the right to free copies of the information in your file (your “file disclosure”). An initial fraud alert entitles you to a copy of all the information in your file at each of the three nationwide agencies, and an extended alert entitles you to two free file disclosures in a 12-month period following the placing of the alert. These additional disclosures may help you detect signs of fraud, for example, whether fraudulent accounts have been opened in your name or whether someone has reported a change in your address. Once a year, you also have the right to a free copy of the information in your file at any consumer reporting agency, if you believe it has inaccurate information due to fraud, such as identity theft. You also have the ability to obtain additional free file disclosures under other provisions of the FCRA. See www.ftc.gov/credit. 3. You have the right to obtain documents relating to fraudulent transactions made or accounts opened using your personal information. A creditor or other business must give you copies of applications and other business records relating to transactions and accounts that resulted from the theft of your identity, if you ask for them in writing. A business may ask you for proof of your identity, a police report, and an affidavit before giving you the documents. It also may specify an address for you to send your request. Under certain circumstances, a business can refuse to provide you with these documents. See www.consumer.gov/idtheft. 4. You have the right to obtain information from a debt collector. If you ask, a debt collector must provide you with certain information about the debt you believe was incurred in your name by an identity thief – like the name of the creditor and the amount of the debt. 5. If you believe information in your file results from identity theft, you have the right to ask that a consumer reporting agency block that information from your file. 
An identity thief may run up bills in your name and not pay them. Information about the unpaid bills may appear on your consumer report. Should you decide to ask a consumer reporting agency to block the reporting of this information, you must identify the information to block and provide the consumer reporting agency with proof of your identity and a copy of your identity theft report. The consumer reporting agency can refuse or cancel your request for a block if, for example, you don’t provide the necessary documentation, or where the block results from an error or a material misrepresentation of fact made by you. If the agency declines or rescinds the block, it must notify you. Once a debt resulting from identity theft has been blocked, a person or business with notice of the block may not sell, transfer, or place the debt for collection.

6. You also may prevent businesses from reporting information about you to consumer reporting agencies if you believe the information is a result of identity theft. To do so, you must send your request to the address specified by the business that reports the information to the consumer reporting agency. The business will expect you to identify what information you do not want reported and to provide an identity theft report.

To learn more about identity theft and how to deal with its consequences, visit www.consumer.gov/idtheft, or write to the FTC. You may have additional rights under state law. For more information, contact your local consumer protection agency or your state Attorney General. In addition to the new rights and procedures to help consumers deal with the effects of identity theft, the FCRA has many other important consumer protections. They are described in more detail at www.ftc.gov/credit.
The Fair and Accurate Credit Transactions (FACT) Act of 2003, which amended the Fair Credit Reporting Act (FCRA), contains provisions intended to help consumers remedy the effects of identity theft. For example, section 609(e) of the amended FCRA gives identity theft victims the right to obtain records of fraudulent business transactions, and section 609(d) requires the Federal Trade Commission (FTC) to develop a model summary of identity theft victims' rights. This report provides information on (1) outreach efforts to inform consumers, businesses, and law enforcement entities about section 609(e); (2) the views of relevant groups on the provision's expected impact; and (3) FTC's process for developing its model summary of rights and views on the summary's potential usefulness. Some efforts to educate consumers, business entities, and local law enforcement officials about their rights and obligations under section 609(e), which grants identity theft victims access to fraudulent business transaction records, were under way as of June 2005--notably by the FTC, U.S. Postal Inspection Service, International Association of Chiefs of Police, and National Credit Union Administration. For example, FTC had a number of outreach efforts on section 609(e), including coverage in conferences and presentations as well as information available through its Web site, toll-free hotline, and identity theft publications. While many of the other federal regulators and law enforcement agencies have undertaken outreach efforts on identity theft, most did not specifically include information on section 609(e). FTC staff indicated that the public education campaign on identity theft prevention, which the FACT Act requires to be implemented by December 2005, will also include coverage of section 609(e).
According to FTC, law enforcement agency officials, and consumer advocacy group representatives we spoke with, section 609(e) should help victims remedy the effects of identity theft more quickly. Other cited benefits include allowing victims to build stronger cases that could assist law enforcement agencies in developing intelligence data for their investigations. However, because there has been limited experience with victims attempting to obtain business records, it is too early to assess the actual effectiveness of the section 609(e) provisions. Consumer groups and state agencies identified some potential problems with the timeliness of business transaction data and the extent of documents needed to verify a victim's identity theft claim. Given the newness of the provision, additional experience is needed to determine whether these or other, as yet unanticipated, concerns will materialize. FTC staff told us that as part of their overall FACT Act outreach efforts, they intend to monitor the implementation of section 609(e) to determine whether the provision is working as intended. Most of the agencies and groups we spoke with had favorable views of FTC's process for developing the model summary of identity theft victim rights mandated under section 609(d). FTC published the final form of the summary on November 30, 2004, and the three national credit reporting agencies told us that, as required by FTC's guidance, they began distributing the summary to consumers who contacted them with identity theft concerns before January 31, 2005. While most of the groups that we contacted felt that FTC had been responsive to their comments, consumer advocacy groups identified two potential concerns: the limited availability of a Spanish version of the summary of rights and the clarity of the model summary to the general population.
However, due to the limited time that the summary has been available, it is too early to determine the extent of any implementation issues.
Background Research studies, including eight large randomized clinical trials with 11 to 20 years of followup, indicated that widespread use of mammography could reduce breast cancer mortality. The benefit of mammography has recently been challenged by two Danish researchers and an NCI advisory panel made up of independent experts; they cite serious flaws in six of the eight clinical trials that showed benefits. However, subsequent to the Danish report and the NCI panel’s statement, both NCI and the U.S. Preventive Services Task Force reiterated their recommendations for regular mammography screening. While acknowledging the methodological limitations in these trials, the U.S. Preventive Services Task Force concluded that the flaws in these studies were unlikely to negate the reasonably consistent and significant mortality reductions observed in these trials. The effectiveness of mammography as a cancer detection technique is directly tied to the quality of mammography procedures. Concerned about the quality of mammography procedures provided by the nation’s mammography facilities, the Congress enacted the Mammography Quality Standards Act (MQSA) of 1992, which imposed standards effective October 1, 1994. FDA has major oversight responsibilities, including establishing quality standards for mammography equipment and personnel and certifying and inspecting each facility to ensure it provides quality services. For mammography personnel, such as radiologic technologists and interpreting physicians, FDA specifies detailed qualifications and continuing training requirements. Mammography technologists are required to be licensed by a state or certified by the American Registry of Radiologic Technologists in general radiography, and to meet additional mammography-specific training, continuing education, and experience requirements.
Similarly, FDA specifies that all interpreting physicians be licensed in a state and certified in the specialty by an appropriate board, such as the American Board of Radiology, and meet certain mammography-specific medical training, as well as continuing education and experience, requirements. FDA collects detailed information about each facility when a facility is initially certified. FDA has established a database that incorporates data from the certification process and from its annual inspection program. Besides facility identification information, the database contains information on the number of machines and personnel and on whether the facility is active or no longer certified. Medicare, the federal government’s health insurance program for people age 65 and above, is the nation’s largest purchaser of health services. Beginning in 1991, Medicare provided coverage of annual mammography screening for women beneficiaries. Medicare is administered by CMS. As part of its health care improvement program, since 1999 CMS and a set of contractors, called peer review organizations, have been involved in monitoring and improving the quality of care, including increasing mammography screening rates among women Medicare beneficiaries. National Capacity for Mammography Services Is Generally Adequate The nation’s overall capacity to meet the growing demand for mammography services is generally adequate. Between 1998 and 2000, the use of services, as measured by the number of mammograms provided to women age 40 and older, increased nearly 15 percent. The most recent data on capacity show that the total number of machines and radiologic technologists available to perform mammography services increased 11 percent and 21 percent, respectively, from October 1998 to October 2001. During this same period, the total number of mammography facilities decreased about 5 percent, indicating that facilities were consolidating or becoming somewhat larger.
The average number of mammograms performed per machine increased slightly but was considerably below estimates of full capacity. The one potentially negative development is in personnel, where the number of new entrants into the field—as measured by the number of persons who sit for mammography technologist or diagnostic radiology examinations for the first time—has dropped each year since 1997. Utilization of Mammography Services Continues to Grow The use of mammography as a tool for detecting early cancer continues to increase. Data from CDC’s Behavioral Risk Factor Surveillance System indicate a continuing increase in national mammography screening rates. The proportion of women age 40 and over who had received a mammogram within the past year increased from 58 percent in 1998 to about 64 percent in 2000. These screening rate increases, coupled with the growth of this population, have resulted in significant increases in the number of mammograms provided each year. Based on CDC’s data on screening rates and Bureau of Census population data, we estimate that the total number of mammograms received by women age 40 and above nationwide has increased nearly 15 percent, from about 35 million in 1998 to more than 40 million in 2000. These increases in mammography utilization extended across nearly every state. Using the screening rates and the Bureau of Census population data, we computed the number of mammograms received by women age 40 and above on a state-by-state basis. Between 1998 and 2000, screening rates for women in this age group increased in all but one state (Oklahoma) and the District of Columbia, and 39 states had an increase of more than 10 percent in the total number of women age 40 and above who had received a mammogram within the past year.
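The estimate above is simply a screening rate multiplied by a population count. The sketch below illustrates the arithmetic; it is not GAO's actual computation, and the population figures are rough assumptions back-solved from the totals the report cites.

```python
# Illustrative sketch of the mammogram-volume estimate: annual mammograms
# approximated as screening rate x population of women age 40 and older.
# Screening rates are from the report; populations are assumed values
# chosen to reproduce the reported totals, so treat them as approximate.

def estimated_mammograms(screening_rate, population):
    """Estimate annual mammograms as rate x population of women 40+."""
    return screening_rate * population

total_1998 = estimated_mammograms(0.58, 60.3e6)   # ~35 million
total_2000 = estimated_mammograms(0.64, 63.0e6)   # ~40 million

growth = (total_2000 - total_1998) / total_1998   # ~15 percent
print(f"1998: {total_1998/1e6:.1f}M, 2000: {total_2000/1e6:.1f}M, "
      f"growth: {growth:.0%}")
```

The roughly 15 percent growth reproduces the report's figure because both the screening rate and the population grew over the period.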
Capacity to Provide Mammography Services Has Also Increased The nation’s capacity to provide mammography services, as measured by the numbers of machines and radiologic technologists available to perform mammography services, has also increased. FDA’s data show that between October 1998 and October 2001, the total number of mammography machines and radiologic technologists available nationwide to perform mammography services increased 11 percent and 21 percent, respectively (see table 1). While FDA’s data show that the total number of certified facilities decreased about 5 percent between 1998 and 2001, the average number of machines per facility increased from 1.22 in 1998 to 1.42 in 2001. Overall, the 5 percent decrease in facilities has been offset by the 16 percent increase in the number of machines per facility and the increase in personnel. Utilization Does Not Appear to Be Straining Capacity The current average number of mammograms actually being performed per machine appears to be well below estimates of how many mammograms could be performed if equipment were operating at full capacity. While there is no uniform standard on the number of mammograms that a mammography machine can do in a day, FDA officials estimated that one machine and one full-time technologist can potentially perform between 16 and 20 mammograms in an 8-hour work day, or between 4,000 and 5,000 mammograms a year (assuming 5 days a week and 50 weeks a year). Using CDC’s data on mammography screening rates, Bureau of Census data on the population of women age 40 and older, and FDA’s data on the number of machines, we computed the average number of mammograms performed per machine. At the national level, the average number of mammograms per machine was 2,759 in 1998. While this average had increased to 2,840 in 2001, it was still well under 4,000, the lower end of the estimated full-capacity range.
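The utilization check described above can be sketched as follows. The capacity bounds come from FDA's estimate quoted in the text (16 to 20 exams per 8-hour day, 5 days a week, 50 weeks a year), and the per-machine averages are the report's figures; the utilization function itself is an illustrative construction, not GAO's methodology.

```python
# Rough sketch of the per-machine utilization comparison: full-capacity
# bounds from FDA's estimate versus the reported national averages.

DAYS_PER_YEAR = 5 * 50  # 250 working days (5 days/week, 50 weeks/year)

def annual_capacity(exams_per_day):
    """Annual mammograms one machine/technologist pair could perform."""
    return exams_per_day * DAYS_PER_YEAR

low, high = annual_capacity(16), annual_capacity(20)  # 4,000 to 5,000

def utilization(avg_mammograms_per_machine, capacity=low):
    """Fraction of the lower-bound capacity actually being used."""
    return avg_mammograms_per_machine / capacity

print(f"Full-capacity range: {low:,} to {high:,} mammograms per year")
print(f"1998 utilization: {utilization(2759):.0%} of the lower bound")
print(f"2001 utilization: {utilization(2840):.0%} of the lower bound")
```

Even against the conservative 4,000-per-year lower bound, the national averages work out to roughly 70 percent utilization, consistent with the report's conclusion that demand was not straining capacity.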
At the state level, the average number of mammograms per machine in 2001 ranged from a low of 1,790 in Alaska to a high of 3,720 in Maryland. While the number of radiologic technologists has increased in the past in general proportion to the increase in mammography utilization, certain trends bear monitoring. According to an American Hospital Association survey, the job vacancy rate for radiologic technologists was 18 percent in 2001, and 63 percent of hospitals reported that they had more difficulty recruiting radiologic technologists than in the previous year. Data from ARRT show that the rate of increase in certified mammography technologists slowed substantially through 2000. Similarly, the number of new entrants to the field, as represented by the number of first-time examinees for the mammography certificate, declined substantially each year from 1996 through 2000 (see table 2). In addition, while comprehensive data are not available on the total number of radiologists available to interpret mammograms, the limited data available also indicate that the availability of radiologists may bear watching. For example, data from the employment placement service of the American College of Radiology show an increasing ratio of job listings per job seeker for radiologists, from 1.3 in 1998 to 3.8 in 2000. Also, data from the American Board of Radiology show that the number of first-time candidates who sit for the diagnostic radiology examination declined each year from 1997 through 2001 (see table 3). Capacity Has Decreased in Some Locations, Causing Scattered Problems Because of local factors such as a shortage of personnel or closure of certain facilities, waiting times for routine mammograms could be several months in certain locations. Nationwide, 241 counties had a net loss of mammography machines between October 1998 and October 2001, with 121 of them losing more than 25 percent.
Our follow-up at 55 rural and metropolitan counties where reductions occurred indicated that lengthy appointment waiting times for mammography services were primarily in metropolitan locations. Small Proportion of Counties Nationwide Lost Capacity Our county-by-county analysis of data on equipment shows that overall, 241 counties had a net loss in the number of mammography machines between October 1998 and October 2001. Of these counties, 121 lost more than 25 percent of their machines. These counties were spread throughout the nation and together contained less than 1.9 percent of the total U.S. population in the 2000 census. We conducted an analysis to determine what had occurred in those counties close to the 121 counties that lost more than 25 percent of their machines. In general, the adjacent counties showed an increase in the number of machines, with nearly all of the 121 counties within 50 miles of a county that gained machines. Thus, residents in most of the counties that lost services appear to be able to draw on increased resources nearby. Counties with Largest Losses Are Mostly Rural; Most Reported No Significant Problems Because data are not available to measure the effect of capacity loss on the mammography utilization rates at the county level, we randomly selected 37 of the 121 counties that lost more than 25 percent of their machines for in-depth analysis at the local level. These 37 counties are located in 19 states (see appendix I for a list of these 37 counties). Over three quarters of these counties are in nonmetropolitan areas. Eighteen of the counties we selected had one facility and 11 had no facility at all in 2001. We interviewed state and local officials familiar with conditions in these counties, asking them to assess the impact of the loss of facilities. With two exceptions, officials generally reported no significant problems.
They said existing facilities in the county or neighboring counties were able to provide needed services, and the longest appointment waiting time reported for routine screening mammograms was 1 month or less, which they considered to be reasonable. In most counties where women had to travel to neighboring counties for services, the travel distance was less than 40 miles, which officials considered common in rural areas. Several officials also said that some counties were served by mobile facilities that travel to their areas. Largest Service Dislocation Appears to Be Occurring in Some Metropolitan Areas In metropolitan counties, the picture was more mixed than for rural counties. To examine the extent of problems in metropolitan areas, we selected 18 additional counties (including the District of Columbia) from a list of counties that lost the largest number of machines. All of these counties are classified as metropolitan counties (see appendix I for a list of these counties). As we did for the rural counties, we contacted state and local officials and asked them to assess the impact of the loss of machines on women’s access to services. These officials reported wide variations in availability of services. While no problems were reported in nine counties, officials in the other nine counties reported a variety of problems. The nine counties with problems are concentrated in five metropolitan areas— Baltimore, Boston, the District of Columbia, and San Antonio and Wichita Falls, Texas. For example, officials in three counties surrounding the Baltimore metropolitan area reported an average waiting time of up to 3 months for screening mammograms and 2 to 3 weeks for follow-up diagnostic mammograms. Similarly, a survey conducted by Massachusetts officials in April 2001 found that, in the Boston metropolitan area, appointment waiting time for screening mammograms ranged from 1 to 20 weeks, depending on facilities. 
In the District of Columbia, officials reported that the only facility available in one part of the city had up to an 8-week backlog of appointments, while the rest of the city generally did not have significant problems. In addition to contacting these 18 counties, we also contacted state and local officials to inquire about six other urban areas—Buffalo, Chicago, Houston, Los Angeles, New York, and Tallahassee—where no significant number of machines was lost but problems were cited by state and local officials or media reports. Officials familiar with situations in these cities reported that most of the problems were limited to certain facilities. For example, an official in Buffalo said that one well-known facility there had a 3-month waiting list for appointments while others could accommodate appointments within 2 weeks. In Chicago, Houston, and Los Angeles, long waiting time problems were concentrated in public health facilities that served low income populations. In New York and Tallahassee, long waiting times of 5 to 6 months were reported in 2000, but our recent interviews with officials found no significant problem. In almost all cases where some problems were reported, officials said that women who needed a diagnostic mammogram generally were able to get appointments within 1 to 3 weeks. Several factors have contributed to the waiting time problems in the nine metropolitan counties and the six urban areas that we identified. Among the reasons provided by state and local officials were the following:

Demand for services grew while capacity declined. In the Baltimore area, for example, officials said that a shortage of technologists and financial difficulty caused many facilities to consolidate or shut down, resulting in a net decrease in capacity, while the demand for services continued to grow.

High demand for services at some facilities. In cities such as Buffalo, Boston, Houston, and Los Angeles, where variation was more on a facility-by-facility basis, officials provided various reasons for the high demand at some facilities. For example, such factors as facilities’ reputations, physicians’ referral patterns, and large patient workloads from public assistance programs cause some facilities to have a large backlog of appointments. Some women may also experience waiting time problems because their insurance coverage restricts where they can go for services.

Inability to meet FDA’s quality requirements. Several officials told us that many small facilities with old machines had shut down because they could not meet FDA quality requirements. For example, an official from Los Angeles said that one provider had shut down three mobile units during the last 2 years because of quality problems.

Temporary interruptions in availability. Waiting time problems may also be caused by the closure of one or more large facilities—a temporary problem that often resolves itself when new facilities open or existing facilities expand in the area. For example, lengthy waiting time problems in Tallahassee in 2000 were largely generated by the closure of one large mammography facility, but a local public assistance program official told us in March 2002 that women in her program could get appointments within 2 weeks as a result of the recent opening of a new facility.

In addition to these factors, state and local officials also frequently raised concerns about the adequacy of the Medicare reimbursement rate, particularly in high cost metropolitan areas. However, during the course of our work, CMS implemented a statutory change to the method for determining the Medicare reimbursement rate for screening mammography. The new method includes geographic adjustments for cost differences among areas and resulted in significant rate increases for high cost areas.
Concluding Observations In general, the increase in mammography equipment and personnel has been sufficient to meet the steady increase in demand for mammography services. However, while the general buildup of personnel has been in line with the growth in the use of services, the last few years show a substantial decline in the number of new entrants to the field, which could reverse this trend. If that happens, more personnel shortage problems could arise in the future. Some instances of long waiting times for services are occurring. Consolidation of facilities and increases in demand can create a strain on service availability in specific communities. However, appointment delays are primarily for screening mammograms rather than for follow-up diagnostic mammograms. These conditions, which can be temporary, may be exacerbated by local physicians’ referral patterns, patients’ insurance coverage, or local shortages in available personnel. Agency Comments We provided FDA with a draft of this report for review and comment. FDA responded that it found the report to be accurate and that it had no other general comments. In addition, FDA provided technical comments, which we incorporated as appropriate. Appendix II contains FDA’s written response. As arranged with your offices, unless you release its contents earlier, we plan no further distribution of this report until 10 days after its issue date. At that time, we will send copies to the secretary of health and human services, the commissioner of FDA, the director of NCI, the director of CDC, the administrator of CMS, appropriate congressional committees, and other interested parties. If you or your staff have any questions about this report, please contact me at (202) 512-7250. Other contacts and major contributors are included in appendix III.
Appendix I: Scope and Methodology To compare recent trends in the use of mammography services with changes in facilities, equipment, and personnel available to deliver these services, we did the following. We used data from CDC’s Behavioral Risk Factor Surveillance System for calendar years 1998 and 2000 (the most recent year available) to estimate mammography screening rates for women age 40 and older on a state-by-state basis. To estimate the number of mammograms provided to these women in 1998 and 2000, we then multiplied these screening rates by the population of women age 40 and over, using the Census Bureau's population estimates for 1998 and the 2000 census population. We used FDA’s national database on mammography facilities to assess the change in the total numbers of certified facilities, machines, and radiologic technologists at national, state, and county levels. We compared the characteristics of facilities operating on October 1, 1998, with those operating 3 years later on October 1, 2001. FDA estimated an error rate of less than 1 percent for the data on mammography facilities. We excluded from the analysis facilities in Puerto Rico and other U.S. territories and federal facilities operated by the Department of Defense and the Department of Veterans Affairs. To identify geographical areas where the capacity to perform mammography services had decreased, and to assess the effect of these decreases on access to services, we used FDA’s national database to identify counties that lost mammography machines and focused on those that lost more than 25 percent of their machines from October 1, 1998, to October 1, 2001. To determine if machines became more available in areas close to these counties, we analyzed what had happened to the number of machines in nearby counties.
Because data were not available to measure the effect of changes in capacity on mammography utilization rates at the county level, we carried out follow-up interviews with state and local officials in a random sample of 37 counties that lost more than 25 percent of their machines (see table 4). Because over three quarters of these counties are in nonmetropolitan areas, we selected an additional 18 counties (including the District of Columbia) from a list of counties that lost the largest number of machines (though not enough to reduce the number by more than 25 percent). All of these 18 counties are in metropolitan areas. We also made additional inquiries about six other urban areas—Buffalo, Chicago, Houston, Los Angeles, New York, and Tallahassee—where problems had been cited by state and local officials or media reports. Table 5 lists the 18 counties and their metropolitan areas. Because no systematic data were available on waiting times and travel distances for mammography services, we relied on observations of state and local officials about the situations at each location. For each selected location, both rural and metropolitan, we interviewed officials familiar with the availability of mammography services in these areas to obtain their views on whether women in their areas were experiencing problems with long waiting times for appointments and/or long travel distance to obtain services. 
These officials generally included state radiation control personnel contracted by FDA to conduct annual onsite inspections of mammography facilities; state and local public health officials involved in CDC’s Breast and Cervical Cancer Early Detection Program, which contracts with mammography facilities in each state to provide screening and diagnostic mammograms to underserved women; and in some locations, officials of Medicare peer review organizations contracted by CMS to monitor and improve the quality of care, including increasing statewide mammography screening rates for Medicare beneficiaries. While most of these officials have not conducted any formal studies to gather this type of information, some have conducted informal surveys about waiting times and others were able to provide estimates of waiting times and travel distances through their involvement and frequent contacts with mammography facilities. In addition, we interviewed representatives from several professional organizations, such as the American College of Radiology, the American Cancer Society, and ARRT, along with officials of FDA, CDC, NCI, and CMS. We performed our work from June 2001 through March 2002 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Food and Drug Administration Appendix III: GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to those named above, Jennifer Cohen and Stan Stenersen made key contributions to this report.
Breast cancer is the second leading cause of cancer deaths among American women. In 2001, 192,200 new cases of breast cancer were diagnosed and 40,200 women died from the disease. The probability of survival increases significantly, however, when breast cancer is discovered in its early stages. Currently, the most effective technique for early detection of breast cancer is screening mammography, an X-ray procedure that can detect small tumors and breast abnormalities up to two years before they can be detected by touch. Nationwide data indicate that mammography services are generally adequate to meet the growing demand. Between 1998 and 2000, growth in both the population of women age 40 and older and the rate at which they were screened increased the number of mammograms provided by nearly 15 percent. Although mammography services are generally available, women in some metropolitan areas have had problems obtaining timely mammography services. However, the greatest losses in capacity have come in rural counties. In all, 121 counties, most of them rural, experienced a drop of more than 25 percent in the number of mammography machines in the last three years. Officials from 37 of these counties reported that the decrease had not had a measurable adverse effect on the availability of mammography services. By contrast, in 18 metropolitan counties that lost a smaller percentage of their total capacity, officials in half of the counties reported service disruptions. Officials from six other urban areas, including Houston and Los Angeles, reported that public health facilities serving low income women had long waiting times. However, most women whose clinical exam or initial mammogram indicated a need for a follow-up mammogram were able to get appointments within one to three weeks.
Background FECA (5 U.S.C. 8101 et seq.) authorizes compensation for wage loss, medical benefits, and death benefits for federal civilian employees for injuries sustained or diseases contracted in the performance of duty. OWCP is responsible for administering and adjudicating the federal workers’ compensation program. During fiscal year 2000, OWCP paid about $2.1 billion in workers’ compensation, including wage loss, medical, and death benefits stemming from job-related injuries, and received approximately 174,000 new injury claims. A workers’ compensation claim is initially submitted to an OWCP district office and is evaluated by a claims examiner. The examiner must determine whether the claimant has met all of the following criteria for obtaining benefits:

The claim must have been submitted in a timely manner. An original claim for compensation for disability or death must be filed within 3 years of the occurrence of the injury or death.

The claimant must have been an active federal employee at the time of injury.

The injury, illness, or death must have occurred in a claimed accident.

The injury, illness, or death must have occurred in the performance of duty.

The claimant must be able to prove that the medical condition for which compensation or medical benefits is claimed is causally related to the claimed injury, illness, or death.

Since medical evidence is an important component in determining whether an accident described in a claim caused the claimed injury and whether the claimed injury caused the claimed disability, workers’ compensation claims are typically accompanied by medical evidence from the claimant’s treating physician. Considerable weight is typically given to the treating physician’s assessment and diagnosis.
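The criteria above amount to a conjunction of checks that must all pass before benefits can be approved. The sketch below is a hypothetical illustration of that screening logic, not OWCP's actual adjudication system; the field names and the 365-day-per-year timeliness approximation are assumptions introduced for the example.

```python
# Hypothetical sketch of the FECA eligibility screen: each criterion from
# the text becomes a boolean check, and a claim passes only if all hold.
# Field names are illustrative; this is not OWCP's actual system.

from dataclasses import dataclass
from datetime import date

@dataclass
class Claim:
    injury_date: date
    filing_date: date
    active_federal_employee: bool     # at the time of injury
    occurred_in_claimed_accident: bool
    in_performance_of_duty: bool
    condition_causally_related: bool  # supported by medical evidence

def meets_basic_criteria(c: Claim) -> bool:
    # Timeliness: filed within 3 years of the injury (approximated here
    # as 3 x 365 days for simplicity).
    timely = (c.filing_date - c.injury_date).days <= 3 * 365
    return all([
        timely,
        c.active_federal_employee,
        c.occurred_in_claimed_accident,
        c.in_performance_of_duty,
        c.condition_causally_related,
    ])

claim = Claim(date(1998, 5, 1), date(2000, 4, 30), True, True, True, True)
print(meets_basic_criteria(claim))  # True: filed within 3 years, all criteria met
```

A claim filed more than 3 years after the injury would fail the timeliness check and be screened out regardless of the other criteria, mirroring the statute's filing deadline.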
However, should the OWCP claims examiner conclude that the claimant’s recovery period seems to be outside the norm or that a better understanding of the medical condition is needed to clarify the nature of the condition or the extent of disability, the examiner may obtain a second medical assessment of the claimant’s condition. In such instances, a second opinion physician, selected by a medical consulting firm under contract with an OWCP district office, reviews the case, examines the claimant, and provides a report to OWCP. If the second opinion physician’s reported determination conflicts with the treating physician’s opinion regarding the injury or condition, the claims examiner determines whether the conflicting opinions are of “equal value.” If the claims examiner considers the two conflicting opinions to be of equal value, OWCP appoints a third, or “referee,” physician to evaluate the claim and render an independent medical opinion. Claims may be approved in full or in part, or denied. For example, a claimant may be paid full wage loss benefits and provided physical and vocational rehabilitation services but denied a request for a medical procedure. When all or part of a claim is denied, the claimant has three avenues of recourse: (1) an oral hearing or a review of the written record by the Branch of Hearings and Review (BHR), (2) reconsideration of the claim decision by a different claims examiner within the district office, or (3) a review of the claim by the Employees’ Compensation Appeals Board (ECAB). Under the first appeal option, the claimant can request an oral hearing or a review of the claim’s written record by a BHR hearing representative. At an oral hearing, the claimant can testify in person, be represented by a designated representative, or submit written evidence. The employing agency may attend but not participate unless invited to do so by the BHR hearing representative or the claimant.
For either a hearing or a review of the record, the hearing representative decides whether to affirm the initial decision, reverse the initial decision and administer benefits to the claimant, or remand the claim to the district office for a new decision. A second option for the claimant is to request reconsideration of the decision at the district office. During reconsideration, the district office reevaluates its initial decision and the decision-making process to ensure that it properly considered all facets of the claim. This reconsideration is typically performed by a senior claims examiner who played no role in making the original decision. After the entire record and resulting decision are reevaluated, the claims examiner decides whether to affirm the initial decision denying all or part of the claim or to modify the initial decision. Generally, the final appeal available to the claimant is to the ECAB. The ECAB consists of three members who are appointed by the secretary of labor. The board was created within DOL but outside OWCP to give federal employees the same administrative due process of law and appellate review that most nongovernment workers enjoy under the workers’ compensation laws of most states. While regulations prohibit the claimant from submitting new evidence during this phase, the ECAB is not limited by previous “findings of fact” by the district office or BHR and can therefore reevaluate the evidence and determine whether the law was appropriately applied. As with the other appeal levels, ECAB renders decisions that affirm the district office’s decision, remand all or part of the claimant’s appealed decision to the district office for additional review, or reverse the district office’s decision.
While OWCP regulations do not require claimants to exercise these three methods of appeal in any particular order, certain restrictions apply that, in effect, encourage claimants to file appeals in a specific sequence: first going to the BHR, then requesting another review at the OWCP district office, and finally involving the ECAB. For example, the regulations state that a claimant seeking a BHR hearing on a decision must not have previously requested reconsideration of that decision, regardless of whether the earlier request was granted. However, the BHR director said that claimants may, and sometimes do, choose to request a district office reconsideration first because decisions on claims appealed through reconsideration are made in a more timely manner. Notwithstanding the regulatory provision, OWCP explained that a claimant may request a discretionary oral hearing by BHR after receiving a reconsideration decision, and both OWCP procedures and ECAB precedent require OWCP to exercise its discretion in considering such a request. Appendix I contains a graphic presentation of OWCP’s claims adjudication process. Scope and Methodology We performed our work in Washington, D.C., from March 2001 through April 2002 in accordance with generally accepted government auditing standards.
To assist us in addressing the objectives, we reviewed a statistical sample of more than 1,200 of the estimated 8,100 appealed claims for which a decision was rendered by OWCP’s BHR or DOL’s ECAB during the period from May 1, 2000, through April 30, 2001, to determine the following: (1) the primary reasons why appealed decisions were reversed or claims were remanded to the OWCP district offices for further development, (2) the amount of time OWCP took to inform claimants of hearing decisions, (3) whether OWCP used certified and licensed physicians whose areas of specialty were consistent with the injuries evaluated, and (4) the methods OWCP uses to identify customer satisfaction and potential claimant fraud. Additional information on the scope and methodology of our review and our approaches for addressing these and other objectives is presented in appendix II, and confidence intervals and other statistical information regarding our work are presented in appendix III. Evaluation Problems, Case File Mismanagement, and New Evidence Are Reasons Appealed Claims Decisions Are Reversed or Remanded From May 1, 2000, to April 30, 2001, BHR or ECAB rendered decisions on approximately 8,100 appealed claims. BHR or ECAB affirmed an estimated 67 percent of these initial decisions as correct and properly handled by the district office but reversed or remanded an estimated 31 percent of the decisions: 25 percent because of questions or problems with OWCP’s review of medical and nonmedical information or its management of claims files, and 6 percent because additional evidence was submitted by the claimant after the initial decision. The following figure characterizes the outcomes of BHR and ECAB reviews of appealed claims. For those claims decisions that were reversed or remanded, the figure shows the reason: (1) problems in evaluating evidence, (2) mismanagement of claims files, or (3) new evidence submitted by the claimant.
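Because the percentages above are estimates drawn from a sample of roughly 1,200 of 8,100 claims rather than a full census, each carries a margin of error (the report's actual confidence intervals appear in its appendix III). As a rough illustration only, assuming a simple random sample and a normal (Wald) approximation, a 95 percent confidence interval around a sample proportion such as the 25 percent reversal/remand estimate can be sketched as:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a sample proportion.

    p_hat: estimated proportion from the sample (e.g., 0.25)
    n: sample size (e.g., 1200)
    z: critical value; 1.96 corresponds to 95 percent confidence
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# Illustrative figures only: a 25 percent estimate from a sample of 1,200.
# The report's true sample design may differ, so these bounds are not the
# report's published intervals.
low, high = proportion_ci(0.25, 1200)
print(f"95% CI: {low:.3f} to {high:.3f}")
```

With these illustrative inputs the interval is roughly 22.6 to 27.4 percent, which shows why the report states its figures as estimates rather than exact counts.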
About One-fourth of the Appealed Claims Decisions Were Reversed or Remanded Due to OWCP Evaluation Problems or Claims File Mismanagement Based on a statistical sample of appealed claims decisions made during the period May 1, 2000, through April 30, 2001, we estimate that 25 percent of the appealed claims decisions (approximately 2,000 of 8,100) were reversed or remanded because of questions about or problems associated with the initial decision by OWCP. These included problems with (1) the initial evaluation of medical evidence (e.g., physicians’ examinations, diagnoses, or x-rays) or nonmedical evidence (e.g., coworker testimonies) or (2) management of the claim file (e.g., failure to forward a claim file to ECAB in a timely manner). Problems in evaluating medical evidence frequently involved OWCP failing to properly identify medical conflicts between the conclusions of the claimant’s physician and OWCP’s second opinion physician, and therefore not appointing a referee physician as required by FECA. OWCP has interpreted the FECA requirement to apply only when the opinions of the two physicians involved are of equal value, that is, when both physicians have rendered comparably supported findings and opinions. Other initial claims decisions were reversed or remanded when BHR or ECAB determined that nonmedical evidence had not been properly evaluated. One example of this involved the OWCP provision that when suitable work is found for the claimant, benefits will terminate. For example, based on its review of a job offer to a claimant who had work restrictions—such as not being able to lift over 50 pounds—an OWCP district office decided that the job represented suitable work and terminated the claimant’s compensation. However, when that decision was appealed by the claimant, BHR identified a flaw in the job offer. 
In order for OWCP to meet its burden of showing that an offered job is suitable for a claimant, both the duties and the physical requirements of the job need to be fully described in the job offer. For this claim, the job offer had set forth only the duties, such as inputting social security numbers on a keyboard. The BHR representative decided that the offer did not describe the physical requirements associated with the job and thus did not “allow the district office to properly determine whether the offered job was suitable work within the claimant’s work restrictions.” BHR concluded that the district office improperly terminated the claimant’s compensation and directed that the claimant’s monetary compensation be reinstated. We estimate that 21 percent of appealed claims were reversed or remanded due to problems with evaluating medical or nonmedical evidence. Some remands and reversals result from OWCP failing to administer claims files in accordance with FECA or OWCP guidance for claims management. The guidance (1) describes the information that is to be maintained in the claim file and transmitted by OWCP to the requestor (i.e., BHR or ECAB) and (2) requires claims files to be transmitted within 60 days after a request is received. Failure to meet this 60-day requirement was one of the more common deficiencies in claims file management in our sample. For example, ECAB initially requested a claim file for one injured worker from OWCP on April 29, 2000. On December 19, 2000 (almost 8 months later), the Board notified OWCP that the claim file had not been transferred and that if the file was not received within 30 days, ECAB would issue orders remanding the case to the relevant district office for “reconstruction and proper assemblage of the record.” As of March 12, 2001, more than 10 months after the initial ECAB request, the claim file had still not been transferred, and the claim was remanded to the district office.
We estimate that 4 percent of appealed claims were reversed or remanded by the BHR or ECAB because of claims file management problems. When claims that were initially denied are later reversed by the BHR or ECAB because of problems with the initial evaluation of evidence or mismanagement of claims files, claimants experience delays in receiving benefits to which they were entitled. According to OWCP, in fiscal year 2000 the average amount of time that elapsed from the date an appeal was filed with BHR or ECAB until a decision was rendered was 7 months and 18 months, respectively. Thus, while claimants are provided benefits retroactively to the date of the initial decision when a claim is reversed, they may be forced to go without benefits for what can be extended periods and may incur additional expenses during appeals, such as representatives’ fees, that are not reimbursable. New Evidence Submitted after OWCP Rendered Decision Also Resulted in Reversals and Remands We also found that 6 percent of appealed claims decisions were reversed or remanded because of new evidence submitted by the claimant after the initial decision was made. OWCP regulations allow claimants to submit new evidence to support their claims at any time from the rendering of the initial claim decision until 30 days—or more with an extension—after the BHR hearing or review of the record occurs. Additional evidence could include medical reports from different physicians or new testimonial evidence from coworkers that could significantly alter the understanding of the circumstances concerning the injury or its treatment and render OWCP’s previous decision inappropriate. Upon appeal of the earlier district office decision, the BHR representative determines whether the new evidence is sufficient to remand the claim to the district office for further review or to reverse the initial decision.
OWCP Has Taken Some Actions to Identify and Address the Causes of Reversals and Remands OWCP monitors remands and reversals by the BHR and ECAB to identify certain trends in appeals decisions. Steps OWCP says it takes include reviewing ECAB decisions and preparing an advisory calling claims examiners’ attention to selected ECAB decisions that may represent a pattern of district office error or are otherwise instructive. Where more notable problems are identified through ECAB reviews, a bulletin describing the correct procedures may be issued or training may be provided. While OWCP similarly monitors the reasons for BHR reversing and remanding claims decisions, this information, along with any suggested corrective actions, is not disseminated to claims examiners as systematically as is done for ECAB decisions. These actions provide some information on remands and reversals that may be helpful to OWCP and its district offices. However, this information does not foster a full understanding of the underlying reasons why remands and reversals occur at their current rates or of what other actions might be taken to address those factors. For example, OWCP might detect that a district office is failing to appoint referee physicians when required. OWCP might then notify district offices that such a problem was occurring, but with the information currently available, it would not be able to identify how frequently the problem was occurring or its underlying reasons: (1) are inexperienced claims examiners not sufficiently aware of the requirement for a referee physician when a conflict of medical opinions of equal value occurs, or (2) are examiners experiencing difficulty in determining whether two physicians’ opinions are of equal value? Without such information on causes, it would be difficult to address these problems.
We believe that OWCP needs to examine the steps now being taken to determine whether more can be done to identify and track specific reasons for claims decision remands and reversals. With such information, OWCP may be able to address those underlying causes and, in so doing, reduce remand and reversal rates. OWCP officials told us that they have not conducted such an overall examination of their current process. Instead, OWCP said it continues to adjust its monitoring and communication processes (circulars and bulletins) based on available information. Finally, OWCP indicated that its rate of remands and reversals was similar to that of other compensation organizations. It provided us a comparison with four organizations whose rates were similar to or greater than its own: DOL’s Black Lung Program, the Social Security Administration’s (SSA) Disability Program, and the North Dakota and Washington state workers’ compensation programs. Except for the SSA program, no information was provided, nor do we have information, concerning how comparable the programs are; thus we cannot determine the validity of such a comparison. Regarding SSA, its reversal rate may not be comparable to OWCP’s because of SSA’s considerable reliance on its physicians’ testimony for initial claims decisions and on the claimants’ and their physicians’ testimony during adjudication hearings, which results in high reversal rates. OWCP Has Established a Hearing Standard That Allows 110 Days for Claimant Notification FECA requires that OWCP notify claimants in writing of hearing decisions “within 30 days after the hearing ends.” OWCP’s interpretation of the hearing process allows up to 110 days before almost all claimants are to be notified of decisions. In establishing guidelines for meeting this provision of the act, the BHR director told us that the hearing record is not closed until two separate but concurrent processes are completed. 1.
Printing and reviewing of the hearing transcript: The time needed to print and review the hearing transcript can range from as few as 25 to as many as 47 calendar days from the hearing date. A contractor prints the hearing transcript, which generally takes from 5 to 7 calendar days. The claimant and the claimant’s employing agency then review the transcript of the hearing for up to 20 calendar days. If the employing agency provides comments, OWCP provides the claimant with the agency's comments and an additional 20 calendar days to respond to those comments. 2. Submitting new evidence: OWCP gives the claimant 30 calendar days from the date of the hearing to submit additional medical evidence. If the claimant needs additional time to provide more medical evidence, the regulations allow the OWCP hearing representatives to use their discretion to grant the claimant a one-time extension, which may last up to several months. OWCP officials stressed the importance of considering all the evidence before a decision is made because, if the decision is appealed to ECAB, any subsequent review by the ECAB is limited to the evidence in the claim record at the time of the preceding decision. Given the potentially wide variance in the number of days before OWCP can close a hearing record, an OWCP official said the agency has attempted to establish realistic standards for notifying claimants of hearing decisions. OWCP has established two goals for the timing of notifying claimants of final hearing decisions: (1) notifying 70 to 85 percent of claimants within 85 calendar days, and (2) informing 96 percent of claimants within 110 calendar days following the date of the hearing. Based upon our review of the applicable legislation, we determined that OWCP has the authority to interpret the FECA requirement for claimant notification in this manner.
Of an estimated 2,945 appealed claims for which BHR rendered a decision on a hearing during our review period, notification letters for an estimated 2,256 (or 77 percent) were signed by OWCP officials within 85 days of the date of the hearing, and an estimated 2,716 (or 92 percent) were signed within 110 days of the hearing date. OWCP officials signed an estimated 158 (or 5 percent) of the claimants’ notification letters from 111 to 180 days after the hearing date and an estimated 70 (or 2 percent) from 181 days to more than 1 year after the hearing date. OWCP’s Physicians Were Board Certified, Licensed, and Had Specialties Consistent with the Injuries Examined Our review showed that OWCP referee physicians were board certified and licensed in their specialties. In addition, we found that OWCP’s second opinion and referee physicians had specialties that were appropriate for claimant injuries in nearly all the cases we examined. Most of OWCP’s Physicians Were Board Certified and Had State Medical Licenses Although neither FECA nor OWCP’s procedures manual requires second opinion physicians to be board certified, the procedures manual states that OWCP should select physicians from a roster of “qualified” physicians and “specialists in the appropriate branch of medicine.” The manual further requires that for referee physicians “the services of all available and qualified board-certified specialists will be used as far as possible.” The manual allows for using a noncertified physician in special situations, stating that “a physician who is not board-certified may be used if he or she has special qualifications for performing the examination,” but the OWCP medical official making that decision must document the reasons for the selection in the case record. Based on our statistical sample, we estimate that at least 94 percent of OWCP’s contracted second opinion physicians and at least 99 percent of the contracted referee physicians were board certified.
In making these determinations, we used information from the American Board of Medical Specialties (ABMS), the umbrella organization for the approved medical specialty boards in the United States. In addition, OWCP provided documentation verifying the certifications of some of the physicians in our sample. For the remaining 6 percent of second opinion physicians and 1 percent of referee physicians in our sample, we lacked the information needed to determine whether they were certified. Although neither FECA nor OWCP regulations specifically require either second opinion or referee physicians to be licensed by the state in which they practice, OWCP officials stated that OWCP expects all physicians to have state medical licenses. Based on our sample of physicians, we estimated that at least 96 percent of the second opinion physicians and at least 99 percent of the referee physicians had current state medical licenses. For the remaining 4 percent and 1 percent of the physicians, respectively, we did not have sufficient information to determine whether, or in what state, they were licensed. Second Opinion and Referee Physicians Had Specialties That Were Relevant to Injuries Evaluated An estimated 98 percent of OWCP’s second opinion and referee physicians appeared to have specialties relevant to the types of claimant injuries they evaluated. While there is no requirement for referee physicians to have specialties relevant to the types of injuries evaluated, OWCP officials told us that a directory is used to select referee physicians with appropriate specialties to examine the type of injury the claimant incurred. For the remaining 2 percent of physicians in our sample, we concluded that their specialties were not appropriate for the types of injuries examined. For example, a cardiologist, acting as a second opinion physician, examined a claimant for residuals of hypertension that were aggravating the claimant’s kidney disease.
The claimed injury appeared to be associated with kidney rather than heart disease; therefore, it would have been more appropriate for the claimant to be examined by a nephrologist (kidney specialist). For assistance in reviewing the relevancy of physician specialties, we contracted with a Public Health Service (PHS) physician. With that assistance, we reviewed our sample of claimants’ injuries and the board specialties of the physicians who evaluated them to determine whether the knowledge possessed by a physician with a given specialty would allow him or her to fully understand the nature and extent of the type of injury evaluated. OWCP Uses Several Methods to Identify Customer Concerns and Assists DOL’s IG in Addressing Potential Claimant Fraud OWCP uses surveys of randomly selected claimants and focus groups to monitor the extent of customer satisfaction with several dimensions of the claims program, including responsiveness to telephone inquiries. OWCP claims examiners and employing agencies serve as primary information sources for identifying potentially fraudulent claims. When such potential fraud is detected, DOL’s IG investigates the circumstances and, if appropriate, prosecutes the claimants and others involved. Customer Satisfaction with the Claims Process OWCP obtains information concerning customer satisfaction with the handling of claims through surveys of claimants and focus groups with employing agencies. Since 1996, OWCP has used a contractor to conduct customer satisfaction surveys by mail about once each year to determine claimants’ perceptions of several aspects of the implementation of the workers’ compensation program, including overall service, for example, whether claimants knew their rights when notified of claims decisions and the timeliness of written responses to claimants’ inquiries.
The questionnaires did not include questions specific to the appealed claims process, but some of the respondents may have based their responses on experiences encountered when appealing claims. In the 2000 survey, customers indicated a 52 percent satisfaction rate with the overall workers’ compensation program and a 47 percent dissatisfaction rate. The levels of claimant satisfaction indicated in responses to specific questions in the surveys have been largely mixed (i.e., more positive responses to some questions and more negative responses to others). For example, survey responses in fiscal year 1998 showed that 34 percent of the respondents were satisfied with the timeliness of responses to their written questions to OWCP concerning claims, while 63 percent were not, and that 35 percent were satisfied with the promptness of benefit payments, while 26 percent were not. Based on these and previous survey results, OWCP took actions including creating a committee to address several customer satisfaction issues, such as determining whether the timeliness of written responses could be improved. In fiscal year 2001, OWCP took two additional steps to measure customer satisfaction. First, OWCP used another contractor to conduct a telephone survey of 1,400 claimants focused on the quality of customer service provided by the district offices. As of March 25, 2002, the contractor was still evaluating the results of this survey. Second, OWCP held focus group meetings with employing agency officials in the jurisdictions of the Washington, D.C., and Cleveland, Ohio, district offices. An OWCP official stated that this effort provided an open forum for federal agencies to express concerns with all aspects of OWCP service. In the Washington, D.C., focus group, employing agency officials expressed their belief that some of the claims approved by OWCP did not have merit. The report on that meeting did not specify whether this concern applied to appealed claims decisions.
The report documenting the Cleveland focus group indicated that employing agencies were frustrated about not being informed of OWCP claims decisions; several agencies said they continued to submit medical bills only to be told by their employees that the claims had been denied. OWCP Examiners and the DOL IG Monitor Claimant Fraud The DOL’s IG, using information from claims examiners and other sources, monitors, investigates, and prosecutes fraudulent claims made by federal workers. The IG’s office provides guidance to claims examiners for identifying and reporting claimant fraud, including descriptions of situations, or “red flags,” that could indicate potentially fraudulent claims. Red flags include such items as excessive prescription drug requests and indications of unreported income. DOL’s Audits and Investigations Manual requires claims examiners and other employees to report all allegations of wrongdoing or criminal violations, including the submission of false claims by employees, to the IG’s office. Once a potentially fraudulent claim is identified, the IG will review information submitted by the claimant, coworkers, physicians, and others. The IG may also conduct additional investigations of claimants and medical providers suspected of defrauding the program, such as surveillance of claimants and undercover operations aimed at determining whether a physician is knowingly participating in fraudulent claims. For example, an IG agent wearing a transmitter might pose as a postal worker and visit a doctor who has been identified as providing supporting opinions for OWCP claimants with questionable injuries. The agent could then tell the doctor that the claim of injury is in fact false but that the agent needs time off for personal reasons, for example, to get married. If the doctor agrees to support such a false claim, the doctor could then be charged with fraud.
Of approximately 600,000 workers’ compensation claims filed with district offices from fiscal years 1998 through 2001, the IG opened 513 investigations involving potential fraud. Of these, 212 led to indictments and 183 resulted in convictions of claimants and physicians. Conclusions One out of four OWCP initial claims decisions (approximately 25 percent) was either reversed or remanded upon appeal because of questions about or problems with either OWCP’s evaluation of medical and nonmedical evidence or its management of claims files. For the appealed claims that were eventually reversed because of problems with the initial decision, benefits to which claimants were entitled were delayed. While benefits are usually granted retroactively in such cases, going without those benefits for what might be extended periods can create hardships for claimants. Further, representatives’ fees and some other additional expenses that claimants might incur during the appeals process are generally not reimbursed by OWCP. While OWCP monitors certain information on BHR and ECAB remands and reversals to identify problems in district office decisions, and distributes much of this information to district offices, that information does not fully identify the underlying causes of the problems. An examination of the monitoring steps OWCP is currently taking, and a determination of what other information could help OWCP and its district offices address underlying causes, could result in a reduction of the rate of remands and reversals. Recommendation for Executive Action We recommend that the secretary of labor require the director of OWCP to examine the steps now being taken to determine whether more can be done to identify and track specific reasons for remands and reversals, including improper evaluation of evidence and mismanagement of claim files, and to address their underlying causes.
Agency Comments and Our Evaluation We obtained comments on this report from the Assistant Secretary for Employment Standards, Department of Labor. The Assistant Secretary agreed with our conclusions regarding the timing of notifying claimants of hearing results; physician certification, licensing, and specialties; and the processes OWCP uses to monitor customer satisfaction and potential claimant fraud. The Assistant Secretary raised concerns, however, with our conclusions related to the frequency of and reasons for reversals and remands of initial OWCP claims decisions when appealed by the claimant. Following is a presentation of key comments from the Assistant Secretary and our responses to those comments. A principal comment regarding the report and its conclusions relates to our use of BHR and ECAB decision summaries to determine the rates of remands and reversals due to (a) the introduction of new information, (b) mismanagement of case files, and (c) district office problems in evaluating claim evidence. In short, OWCP asserts that BHR and ECAB summary decisions are inadequate to make such determinations. The Assistant Secretary also expresses the belief that a “large portion” of the decisions that our review showed were reversed or remanded because of questions about or problems with the initial decision (as opposed to new evidence being submitted) were in fact reversed or remanded because of new evidence being submitted. We disagree. The decision summaries we reviewed clearly indicated the specific reasons for each reversal or remand, and our analysis fully accounted for remands and reversals that were ordered by the BHR and ECAB due to the introduction of new information by the claimant. For example, in the summary of one decision remanded by the ECAB due to an evidence evaluation problem, the BHR had originally decided that a claimant was not entitled to benefits.
The BHR decision was based on a second opinion physician’s report and several reports from the claimant’s two physicians, all of which preceded the BHR decision. The BHR “representative found that the opinions of the (claimants) attending physicians could not be afforded any great weight as their opinions were based on the fact that the (claimant) was performing duties requiring repetitive shoulder movements, and this was not true.” In remanding the decision, ECAB determined that there were “discrepancies between the opinions of the (claimant’s physicians) and the (second opinion physician) that there is a conflict in the medical opinion evidence as to the cause of the (claimant’s) current condition and, therefore, the case will be remanded” for the appointment of a referee physician. An example of a decision where new evidence was submitted was an ECAB decision summary stating that the decision was remanded to the OWCP district office “because (claimant) submitted relevant and pertinent evidence not previously considered by the office.” OWCP commented that cases are frequently reviewed by claims examiners on arrival and may be remanded if late-arriving evidence is sufficient to meet the claimant’s burden of proof; that such claims examiner remands prior to hearing are frequently based on the review of evidence not available to the district office examiner; and that it appears the GAO investigators entirely excluded these cases from their sample. OWCP is incorrect. Our sample, as indicated in our report, was drawn from all appealed case decisions made during a 1-year period and therefore encompassed all affirmations, remands, and reversals made before hearings, after hearings, and for reviews of the record during that 12-month period. OWCP also stated that the percent of appeals reversed or remanded by the ECAB may be the purest indicator of district office oversight or error.
We note that, based on our sample, the rate of ECAB remands and reversals was approximately 23 percent, which closely approximates the composite remand and reversal rate of 25 percent for the BHR and ECAB combined. The Assistant Secretary also stated that the report conflates its analysis of remands and reversals and that the two must be distinguished: a remand does not reverse the denial of a claim and direct the examiner to pay the denied benefit. It may, for example, direct the examiner to ask further questions of the reporting physician, after which the district office issues a new decision that considers the doctor’s further response. The new decision may reinstate the original denial or award the benefit. We have added wording to our report to make the distinction clear. However, because our analysis focused on the same issue for both, i.e., questions about or problems with initial claims decisions made at OWCP district offices, we believe it is appropriate to use reversals and remands as a combined indicator. The Assistant Secretary concluded that the report’s presentation of the ratio of remands and reversals caused by new evidence, as opposed to “errors” in the original decision, is seriously flawed; that OWCP has attached a chart providing the actual outcomes from the two appeal bodies for fiscal year 2001; that, following the procedures OWCP described, all BHR decisions in which a hearing was held reflect new information to some degree; that, in OWCP’s experience, half the remands/reversals prior to hearing and most of the remands/reversals following reviews on the record are based on the submission of new evidence; and that this analysis yields the conclusion that well over half of the BHR remands/reversals reflect the consideration of new evidence or new argument. We agree that new evidence is submitted and considered in many cases throughout the life of a claim, which may involve a number of separate appeals.
However, our review of decision summaries clearly showed the reasons for remand or reversal of initial claims decisions when appealed. Those reasons, which also were provided to claimants in explaining why the decisions on their claims were being remanded or reversed, included (1) questions about or problems with the availability or consideration of evidence at the time of the initial decision, (2) problems with case file management, and (3) new evidence or information being introduced. The chart provided by OWCP does not present any information on such specific reasons for remands and reversals. In fact, in response to our request for such specific information at the end of our review, OWCP officials told us that OWCP did not have such information. The Assistant Secretary also commented that the report characterizes 4 percent of cases as due to “mismanagement of claim files,” that this phrase is not defined, and that with no definition and only one example offered, the characterization appears to be unsupported. We believe the discussion concerning “mismanagement of claim files” adequately defines the issue. In addition, the example provided is for illustrative purposes. Finally, the Assistant Secretary stated that GAO’s recommendation appears to be based on (1) a substantial overestimation of the contribution of OWCP errors to the remand/reversal rate and (2) a generalization that no systematic study of the “underlying causes” of remands and reversals has been undertaken by OWCP; that OWCP explained its many and varied approaches to decision monitoring and quality improvement to the GAO team and does not understand the basis for this generalization; and that OWCP does react to data showing trends from ECAB decisions and hearing decisions, provides appropriate training to claims examiners, and is fully committed to continuing to monitor the outcomes of appeals.
While we agree that OWCP takes a number of actions to monitor decision reversals and remands, many of which we recognize in our report, our estimates of the rates of and reasons for remands and reversals are statistically valid. Our recommendation is based on (1) the importance of ensuring that claimants receive the benefits to which they are entitled as promptly as possible; (2) the level of initial claims decision remands and reversals upon appeal; and (3) our conclusion that there may be opportunities for OWCP to better identify the reasons for, and address the underlying causes of, remands and reversals. The Assistant Secretary also noted that GAO acknowledged the basis for OWCP’s application of a hearing standard that allows 110 days for hearing decision notification, including time for the claimant’s review of testimony and opportunity to comment. Our report describes how OWCP has interpreted the FECA requirement and established a target of notifying most claimants of the decision on their appeal within 110 days of the date of the hearing. We did not assess whether this is an appropriate target. Finally, DOL indicated that, consistent with our recommendation, it would review and enhance its systems for monitoring the results of its claims adjudication process “to better achieve improvements in our claims review.” DOL also provided technical comments, which we incorporated in the report as appropriate. DOL’s comments are reprinted in appendix IV. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the ranking minority member and to the Secretary of Labor. We will also make copies available to others on request at that time. Major contributors to this report were Boris Kachura, Assistant Director; Thomas Davies Jr., Project Manager; Ellen Grady, Senior Analyst; Chad Holmes, Analyst; and Karen Bracey, Senior Operations Research Analyst.
OWCP’s Claims Process Based on interviews with OWCP officials and reviews of OWCP operational guidance, when a federal employee is injured at work and becomes disabled, the employee files a claim with the employing federal agency. All claims that involve medical expenses, lost work time, or both are then forwarded by the agency to 1 of OWCP’s 12 district offices. Figure 2 characterizes OWCP’s claims process, including the claims adjudication process. Scope and Methodology In your March 2001 letter, you asked GAO to examine several issues related to OWCP’s workers’ compensation claims adjudication process. To address these issues, we reviewed a probability sample of over 1,200 decision summaries from about 8,100 ECAB and BHR claims appeal decisions made between May 1, 2000, and April 30, 2001, on claimant appeals. As part of our review of the decisions made by BHR and ECAB on appeals, we first categorized the decisions in our sample into three groups: (1) affirmed (the decision made on the initial claim was not changed), (2) remanded (the claim was sent back by either ECAB or the BHR to the cognizant district office for additional review or action and a new decision), or (3) reversed (the initial decision made on the claim by the district office or BHR was determined by BHR or ECAB to be incorrect and was therefore changed—in most cases a claim or portion of a claim that had been denied was changed to an approval). For each claim that had been remanded or reversed, we then analyzed the decision summaries to determine the basis for the BHR or ECAB decision. To determine the extent to which OWCP was complying with FECA’s requirements that (1) a referee physician be appointed to resolve conflicts in medical opinions between claimant physicians and OWCP’s second opinion physicians and (2) claimants be informed of the outcome of hearings in a timely manner, we performed several steps.
For the first of these two objectives, we reviewed FECA legislation and OWCP regulations and interviewed OWCP officials to identify the specific requirements related to referee physicians. From our statistical sample of claims appeal decisions, we then identified decisions in which, at some point during the history of the claim, there had been a conflict in the medical opinions between the claimant’s attending physician and an OWCP second opinion physician. For this subset, we relied upon the decisions of the BHR and ECAB as reflected in decision summaries to determine the extent to which referee physicians were appointed as required. In addition, we identified the frequency with which claims were remanded or reversed by the BHR and ECAB because a referee physician should have been but was not appointed. Regarding the length of time taken by OWCP to notify claimants about hearing outcomes, we reviewed the relevant FECA requirement and OWCP’s guidelines and goals and interviewed OWCP officials. We limited our review on this objective to claims decisions rendered by BHR because ECAB decision summaries did not contain the dates needed for our analysis. Accordingly, we selected a subset of BHR cases from our sample and calculated the number of days between the date of the hearing and the date of the final hearing decision. In making our calculation, we used the date of the BHR decision letter as the claimant notification date. To determine whether the physicians involved in reviewing claims were board certified, we used another subset of claims appeal decisions from our sample and relied on information from the American Board of Medical Specialties’ (ABMS) website (www.abms.org). ABMS is the umbrella organization for approved medical specialty boards in the United States. We compared the names and specialties of the second opinion and referee physicians to the database to determine whether these physicians were board certified.
We looked for an exact or close match of names while allowing for obvious spelling errors in the name or other minor discrepancies, such as missing initials. Although most of the board certification verifications were done by querying the ABMS website and printing copies of the certifications, when necessary we also contacted ABMS by telephone to obtain verbal verification of board certifications or used ABMS’ directory book for calendar year 2002. For those physicians whose certifications we were not able to readily verify, we asked OWCP to provide documentation of the board certifications, which it did for a number of physicians. In determining whether second opinion and referee physicians used by OWCP had state licenses, we used the same sample subset as we used in verifying board certifications. In making the state license determinations, we generally focused on the state in which the employee resided for BHR decisions and the state in which the employing agency was located for ECAB decisions. We relied on a variety of resources in that search, including www.docboard.org (a public service site) and individual state medical board websites for printed documentation. We also phoned staff in various state medical board offices for verbal confirmation for some physicians. We again looked for an exact or close match of names while allowing for spelling and other minor differences. In addition, since physicians are required to have state medical licenses in order to become board certified, any physicians whom we could not verify as licensed through state sources were considered to be licensed if we had determined that they were board certified. Also, while the dates of physician involvement on individual cases could have taken place at any time during or even preceding the May 1, 2000, through April 30, 2001, period of our review, we made our determinations for state licenses as of December 31, 2001.
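As a hypothetical sketch of the tolerant name comparison described above (exact or close matches, allowing for obvious spelling errors and missing initials), the following illustrates one way such matching could be automated; the function names and the similarity threshold are assumptions for illustration, not part of the methodology actually used:

```python
from difflib import SequenceMatcher

def normalize(name):
    # Lowercase, strip punctuation, and drop single-letter middle initials
    parts = [p.strip(".,") for p in name.lower().split()]
    return " ".join(p for p in parts if len(p) > 1)

def is_probable_match(name_a, name_b, threshold=0.85):
    # Treat names as matching when they are identical after normalization,
    # or when their similarity ratio clears an assumed threshold
    a, b = normalize(name_a), normalize(name_b)
    return a == b or SequenceMatcher(None, a, b).ratio() >= threshold
```

Under this sketch, “John A. Smith” and “John Smith” match once the initial is dropped, a minor misspelling such as “Jon Smith” clears the similarity threshold, and clearly different names fall well below it.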
We also determined whether second opinion and referee physicians contracted by OWCP possessed the appropriate medical specialty to evaluate and fully understand the nature and extent of the claimant’s particular illness or injury. To do this, we drew another subset of the appealed claims decisions for which we could determine that a second opinion or referee physician was involved and could identify the nature of the claimant’s injury and the physician’s medical specialty. We contracted with a Public Health Service (PHS) physician to review the injuries of the claimants in this sample and determine whether the board specialties of the physician(s) who evaluated those injuries were appropriate. To determine how OWCP identifies problems with its appeals process, levels of customer satisfaction, and potential claimant fraud, we interviewed OWCP officials—including the deputy director and director of BHR—and reviewed documentation provided by OWCP, including reports from several annual customer (claimant) surveys and focus groups of federal agencies. In addition, we interviewed officials in DOL’s IG, analyzed IG guidance on detecting and investigating potentially fraudulent activity, and reviewed IG annual reports that discussed the identification and prosecution of claimant fraud. We did our work in Washington, D.C., from March 2001 through April 2002, in accordance with generally accepted government auditing standards. Sampling and Estimation Methods and Sampling Errors To help accomplish some of our objectives, we reviewed a probability sample of over 1,200 ECAB and BHR decisions issued between May 1, 2000, and April 30, 2001. This appendix describes how we selected decisions for review and provides the sampling errors of estimates presented in this report that we made from our sample. ECAB and BHR cases were sampled separately. We obtained a list of ECAB decisions issued between May 1, 2000, and April 30, 2001.
The listed decisions were classified as either remands or nonremands, and a simple random sample of each of the two classifications was selected. BHR decision files covering the period of our review were stored in folders in three filing cabinets. Each folder was divided into two compartments. We took separate systematic samples from the front and back compartments of the folders in the cabinets. Since the file cabinets contained some decisions that fell outside our review period, we estimated, based on our sample, the number of decisions in the three filing cabinets that were issued between May 1, 2000, and April 30, 2001. Using the sampling methods described above, we obtained a sample of over 1,200 decisions. Each sampled decision was weighted in our analysis to account statistically for all appealed claims decisions issued between May 1, 2000, and April 30, 2001, including those that were not sampled. The estimates we made from our sample and the sampling errors associated with these estimates are given in the table below. Comments from the Department of Labor
The Department of Labor's Office of Workers' Compensation Programs (OWCP) paid $2.1 billion in medical and death benefits and received about 174,000 new injury claims during fiscal year 2000. GAO found that (1) one in four appealed claims decisions are reversed or remanded to OWCP district offices for additional consideration and a new decision because of questions about or problems with the initial claims decision; (2) OWCP set a goal of informing 96 percent of claimants within 110 days of the date of the hearing; (3) nearly all doctors used by OWCP to provide opinions on claimed injuries were board certified and state licensed and were specialists in areas consistent with the injuries they evaluated; and (4) OWCP has used mailed surveys, telephone surveys, and focus groups to measure customer satisfaction. The Labor inspector general monitors fraud within OWCP's workers' compensation program, using claims examiners as one source in identifying potentially fraudulent claims.
Background The primary goal of antisubmarine warfare is to protect U.S. ships and assets from enemy submarines. Undersea surveillance and detection of submarines are a critical part of this mission. During the Cold War, the Navy relied on a combination of fixed, mobile, passive, and active sonar systems to detect enemy nuclear and diesel submarines, particularly those from the Soviet Union. Passive sonar systems “listen” for or receive signals, whereas active systems send out signals to search for targets and receive an echo or response. The systems are used on mobile platforms, such as Navy surface ships, submarines, and aircraft, and in fixed arrays that are laid or buried across the ocean floor in various strategic locations. However, as a result of technology advancements, the Soviet Union and other countries developed quieter submarines. Submarines thus became harder to detect, and the Navy grew concerned that enemy submarines could get within effective weapons range of U.S. ships and assets. The Navy determined it needed a system that could detect quiet submarines at great distances. In response to this need, the Navy launched the SURTASS/LFA program, originally designed for use in open oceans, in 1985. The SURTASS/LFA system operates in conjunction with the Navy’s existing passive SURTASS sonar system. The two components, as illustrated in figure 1, make up a mobile acoustic undersea surveillance system that is intended to provide detection, cueing, localization, and tracking information on modern quiet nuclear and diesel submarines for the battle group or other tactical commanders. The passive component detects sounds or echoes from undersea objects through the use of hydrophones on a receiving array that is towed behind the ship. The active or transmitting component of the system sends high-intensity, low frequency sonar from transducers suspended by a cable under the ship.
The active signal will produce a return echo that, when received, provides location and range data on submerged objects. The system uses 18 pairs of undersea transducers and 18 shipboard high-power amplifiers. The SURTASS/LFA system is heavy, weighing 327,000 pounds, and requires a specially designed ship to carry and operate it. The Navy plans to use two SURTASS/LFA systems. The first was installed in 1992 on the research vessel Cory Chouest. The other system, completed in 1993, will be installed on the twin-hull auxiliary general-purpose ocean surveillance ship, T-AGOS-23, which the Navy designed to carry the SURTASS system. The ship was originally scheduled for delivery in 1994, but construction was delayed by the bankruptcy of the contractor, and the ship will not be completed until late 2002. The Navy estimates that it has cost approximately $375 million to develop and produce the two systems and that it will spend an additional $40 million to field and operate the systems through fiscal year 2009. These estimates do not include the cost of the ships. During the course of developing and testing the SURTASS/LFA system, environmental interest groups, including the Natural Resources Defense Council, began to raise concerns that the system might cause harm to marine mammals. Environmentalists were concerned that the high-intensity sound emitted by the system could cause physical damage to marine mammals and adversely affect their behavior. In August 1995, in a letter to the Secretary of the Navy, the Natural Resources Defense Council questioned whether the Navy had complied with all applicable environmental laws and regulations. In response to growing public concerns and recognition that further assessment of the system was needed, the Navy decided to initiate an environmental impact statement process.
As part of this process, the Navy conducted a scientific research program from 1997 to 1998 to test the effects of low frequency sonar on a limited number of whale species off the coasts of California and Hawaii. The Navy distributed a draft environmental impact statement for public comment in 1999 and issued a final environmental impact statement in 2001. The Navy concluded in the environmental impact statement that the potential impact or injury to marine mammals from SURTASS/LFA is negligible. As reflected in the environmental impact statement, this is based on using the system with certain proposed geographic restrictions and monitoring to prevent harm to marine mammals. Because there is some potential for incidental harm to marine mammals, the Navy must obtain a Letter of Authorization from the National Marine Fisheries Service before SURTASS/LFA can be used. The National Marine Fisheries Service issued a draft authorization for public comment in 2001, which concurred with the findings of the Navy’s environmental impact statement. If approved, the authorization would allow the Navy to use the SURTASS/LFA system with certain specified mitigation measures and restrictions. These measures include limiting (1) sonar sound levels to 180 decibels within 12 nautical miles of any coastline or in any designated biologically important offshore area and (2) sound levels to 145 decibels in known recreational or commercial dive sites. In addition, the authorization would require the Navy to monitor marine mammals from the ship visually and with passive and high frequency active sonar. If marine mammals were detected, the Navy would be required to shut down LFA operations to prevent, to the greatest extent possible, marine mammals’ exposure to potentially harmful sound levels. The decision on the authorization is expected later in 2002. 
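As a hypothetical sketch of how the two numeric sound-level limits in the draft authorization combine, the following illustrates the logic; the function and parameter names are assumptions for illustration, and the real authorization contains additional monitoring and shutdown requirements not modeled here:

```python
def within_sound_limits(level_db, nm_from_coast, in_important_offshore_area, near_dive_site):
    # 145-decibel cap in known recreational or commercial dive sites
    if near_dive_site and level_db > 145:
        return False
    # 180-decibel cap within 12 nautical miles of a coastline or in a
    # designated biologically important offshore area
    if (nm_from_coast <= 12 or in_important_offshore_area) and level_db > 180:
        return False
    return True
```

Under this sketch, a 170-decibel transmission 5 nautical miles from shore would be within the limits, while the same transmission near a dive site, or a 200-decibel transmission at that distance from shore, would not.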
Notwithstanding the mitigation measures outlined in the Letter of Authorization, environmental organizations are still expected to oppose the use of SURTASS/LFA. They have indicated that although conclusive evidence has not been established regarding the harmful effects of SURTASS/LFA on marine mammals, enough is known about the potential adverse effects of sound on marine mammals to warrant no further use of the system. They have also questioned the usefulness of the system to the Navy. The Navy also has recognized that gaps exist in scientific knowledge about the impact of the system on marine mammals, but it considers the risk minimal and not sufficient to warrant ceasing use of the system. In addition, the Navy has stated that it has done an extensive amount of testing, research, and analysis regarding the use of SURTASS/LFA and marine mammals and that current information, combined with the planned mitigation and monitoring procedures and ongoing research, supports resuming SURTASS/LFA operations. Furthermore, the Navy has emphasized that the need for a long-range detection capability still exists. SURTASS/LFA Increases Antisubmarine Capabilities in Open Ocean, but Its Capabilities Are Unproven in Littoral Waters Based on initial testing conducted to date, SURTASS/LFA appears to provide long-range undersea detection capabilities in the deep, open ocean that surpass those of any system planned or in existence. However, the system may not be as effective in littoral waters. A final operational evaluation must still be conducted to determine the overall effectiveness and suitability of the system, and while Navy officials are developing a plan to evaluate the system, they have not yet defined what testing will be conducted in littoral areas. Operational Benefits and Limitations The primary benefit of SURTASS/LFA is that it will provide a significant increase in long-range undersea detection capability in the open ocean.
Active sonar at low frequencies is more effective and transmits farther undersea because its absorption rate in water is relatively low. Because of this, a low frequency active signal can travel several hundred miles if unimpeded. In contrast, mid frequency and high frequency sonar transmit on the order of tens of miles. Therefore, low frequency active sonar can potentially cover an area of the ocean vastly greater than sonar at higher frequencies can. In addition, a benefit of active sonar is its ability to seek out targets rather than wait passively for a target to approach. As a result, a system such as SURTASS/LFA can provide the means to detect enemy submarines before they can get within the effective weapons range of U.S. ships. Also, because it is mobile, the system provides greater deployment flexibility and, according to Navy officials, can detect targets in areas beyond the reach of fixed sonar systems. Moreover, the SURTASS/LFA technology can provide long-range detection with fewer assets and operators than other technologies. SURTASS/LFA also has several operational limitations, including the amount of coverage it can provide and its vulnerabilities. The Navy plans to use a total of only two systems, with one deployed to the Pacific Fleet and the other to the Atlantic Fleet to support antisubmarine missions. Therefore, the amount of area the system can cover will be limited. The Navy recognizes that two systems are not sufficient to meet operational requirements and prefers to have more. In addition, SURTASS/LFA may be vulnerable to attack because the ships carrying the systems will not have onboard defense systems. The ships are also relatively slow and therefore incapable of remaining close enough to the transiting battle group to be protected. Furthermore, because SURTASS/LFA transmits an active, high-intensity signal, it can readily reveal its location, which further increases its vulnerability.
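The frequency dependence that underlies these range differences can be illustrated with Thorp's widely used empirical approximation for acoustic absorption in seawater; this formula is a standard textbook model, not one taken from this report, and the example frequencies are illustrative rather than the system's actual operating frequencies:

```python
def thorp_absorption(f_khz):
    # Thorp's empirical approximation for seawater absorption, with
    # frequency in kilohertz and the result in decibels per kiloyard
    # (the units of the original formulation)
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

low = thorp_absorption(0.25)  # a few hundred hertz: roughly 0.01 dB per kiloyard
mid = thorp_absorption(3.5)   # a few kilohertz: roughly 0.24 dB per kiloyard
```

Over a path of a few hundred miles, that difference amounts to tens of decibels of additional loss at the higher frequency, which is consistent with low frequency sonar reaching ranges of hundreds of miles while mid and high frequency sonar reach tens of miles.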
However, the Navy concluded that these operational limitations are outweighed by the benefit of long-range detection. Demonstration of Capabilities Results of SURTASS/LFA testing to date show that the system will increase the Navy’s capability to detect modern submarines at long range in deep, open ocean areas. From 1989 through 1992, the Navy conducted a series of developmental tests on SURTASS/LFA that focused on validating the performance of a demonstration system in these areas. The objectives of these tests were to obtain an increased understanding of technical performance issues such as the long-range transmission of signals and signal processing techniques. Based on the successful results of these tests, the Navy concluded that the system performance requirements were achievable and decided to proceed with full-scale engineering development. In 1992, the Navy began conducting operational tests using an engineering development model that more closely represented the operational SURTASS/LFA system. The purpose of these tests was to determine the performance of the system under more realistic at-sea conditions and against more realistic threat scenarios, including quiet submarines. Numerous tests were performed to assess the system’s capabilities in deep waters, such as in the middle of the Atlantic Ocean. These tests concluded that SURTASS/LFA could detect targets at long range, and they resulted in recommendations that the program continue with its development. In addition, a test in 1994 determined that the engineering development model performed well enough that the system could be introduced to the fleet as an interim capability. However, operational testing revealed some reliability and maintainability problems with critical software. Navy officials told us that they intend to resolve these issues before the overall operational evaluation is complete.
While testing has demonstrated that SURTASS/LFA can increase detection in the open ocean, the system has shown limited capability in littoral waters. Tests indicate the system provides some detection capability in littoral waters but at a range that is significantly less than that achieved in the open ocean. Moreover, the effectiveness of SURTASS/LFA generally decreases closer to shore as the water becomes shallower. Navy officials told us that these results were expected and can be attributed to system design and geographic characteristics. The low absorption rate and low frequency signal that make SURTASS/LFA effective at extended ranges in the deep, open ocean are the same characteristics that limit its effectiveness in littoral waters. For example, littoral waters, particularly along coastlines, typically have more complex and prominent floor features than the open ocean. In littoral areas, sonar signals may reverberate or rebound off the ocean floor, making target detection difficult. The littoral environment is also acoustically harsher because it has shifting currents, variable water densities, and shallow water depths. As a result, active sonar signals—particularly those at low frequency—reverberate and degrade more than they do in the open ocean. In addition, the littoral environment has more magnetic anomalies, which can severely degrade bearing accuracy. Littoral waters also have more shipping traffic and greater ambient noise, making it much more difficult for the system to distinguish and detect threat submarines among other noise-generating vessels. In addition, the presence of more shipwrecks and near-shore debris in these locations increases the number of false targets and, therefore, the challenge of detecting, locating, and distinguishing threat submarines.
Although the Navy has largely completed developmental testing and conducted a series of initial operational tests of the SURTASS/LFA system, it must still complete a final operational test and evaluation to establish the operational effectiveness and suitability of the system. Currently, this evaluation is planned for fiscal year 2004, provided the program receives authorization from the National Marine Fisheries Service. The Navy planned for the evaluation to focus primarily on demonstrating the system’s capabilities in the open ocean. Although the test and evaluation master plan was updated in 1996, the concept of operations and the original operational requirements have not been updated to reflect the Navy’s shift in focus to littoral threats. In accordance with Department of Defense guidelines, a system should be tested under realistic conditions and in the environments where it is intended to be used. In addition, any testing and operations will have to comply with applicable operating restrictions, such as the National Marine Fisheries Service Letter of Authorization. Currently, Navy working groups are in the process of updating a concept of operations for the SURTASS/LFA system and developing the test and evaluation master plan that will be used to conduct the operational evaluation. However, they have not decided on the extent to which the system will be tested in littoral areas. Other Antisubmarine Warfare Technologies Complement but Are Not Substitutes for SURTASS/LFA Since the beginning of the program, the Navy has considered a number of existing and potential alternatives to SURTASS/LFA, and each time it has found that the system provides long-range detection capabilities other systems could not provide. Available technologies offer different capabilities and practical limitations. Although SURTASS/LFA provides increased detection ranges, the Navy advocates a “tool box” approach that uses a mix of complementary technologies to detect enemy submarines.
Existing passive, active, and nonacoustic technologies have a limited capability to detect submarines at long range. Passive sensors, for example, are effective at short range but have become more limited in their capability since the development of quieter submarines. Even though recent improvements to passive systems have extended their range, submarine quieting measures have lowered submarine noise levels to nearly the level of the ambient noise of natural sounds in the ocean. As a result, the Navy is concerned that an enemy submarine could get within effective weapons range of U.S. forces before passive systems could make contact with it. Passive systems are, by the nature of how they operate, environmentally benign because they do not transmit sound. Active sensor systems that can be used from aircraft provide extended ranges and large area coverage, but large area coverage requires that a high number of both aircraft and sensors be deployed. Antisubmarine warfare aircraft are expensive to operate, and they require shore-based facilities, which are limited because of continued decreases in the number of these installations. A shipboard system, such as SURTASS/LFA, provides the advantage of extended range and duration of searches, but when it is used in a continuous search mode, it has the drawback of revealing the ship’s position. The Navy determined that nonacoustic technologies, such as radar, laser, magnetic, infrared, electronic, optical, hydrodynamic, and biological sensors, have demonstrated some utility in detecting submarines. Their usefulness, however, is limited by range of detection, unique operating requirements, meteorological/oceanographic disturbances, and/or a requirement that the submarine be at or near the surface for detection. 
Today, nuclear submarines can remain submerged at considerable depths indefinitely, and new battery technology and air-independent propulsion have increased the time that diesel submarines can remain at depth. The capabilities of passive, active, and nonacoustic technologies vary depending on whether they are used on fixed, mobile, or deployable platforms. During the Cold War, the Navy relied on a comprehensive system of fixed undersea acoustic sensors as its primary means of initial detection of enemy submarines. In recent years, the Navy’s Submarine Surveillance Program has undergone a major transition from an emphasis on maintaining a large, dispersed surveillance force to detect and track Soviet submarines to a much smaller force. As a result, a number of fixed acoustic arrays have been turned off, placed in stand-by status, or damaged and not repaired. Fixed systems have a number of practical constraints, such as requiring long lead times to install. They are also expensive, require extensive maintenance, and run the risk of being discovered, avoided, or tapped into. Mobile systems, on the other hand, are not limited to a specific location and can be deployed to areas of interest to the fleet at any time. Mobile systems also have the benefit of providing coverage in locations beyond the range of fixed systems or of augmenting the capabilities of fixed systems. In the late 1990s, the Navy prepared an evaluation of alternatives on the requirements for long-range active undersea surveillance in a white paper. The evaluation examined expanding current technologies, developing new technologies, and improving the LFA system. 
The paper concluded that (1) increasing the number of antisubmarine warfare search, detection, and attack platforms in an attempt to flood the target area with search systems requires a high number of assets and a large number of operators and results in high costs due to the continued use of multiple systems; (2) increasing the number of assets also does not solve the problems of high false contact rates, short detection ranges, and danger to the sensor platform itself, because an active signal discloses the ship’s position; (3) developing new passive systems has only marginal potential to improve sensor detection ranges unless a new technology, yet to be identified, emerges; and (4) improving the performance levels of active sonar systems like LFA addresses the critical issue of the range at which a threat submarine is detected. More recently, in 2001, the Navy conducted a comprehensive evaluation of existing and emerging antisubmarine warfare technologies that involved several expert panels consisting of Navy officials and representatives from the scientific, academic, and intelligence communities. The objective of this evaluation was to assess current and planned detection technologies to determine where the Navy has shortfalls in capability and where to invest future resources. A total of 125 technologies and concepts were initially evaluated, and 16 were selected for additional analysis. The 16 technologies and concepts were analyzed against criteria that included robustness, operational suitability, survivability, technical maturity, potential operational effectiveness, deployment flexibility and responsiveness, and potential overall impact and military utility. The SURTASS/LFA program received high ratings on all criteria except survivability. 
As a result of the panels’ analyses, the Navy determined that SURTASS/LFA provides the needed extended-range coverage and deployment flexibility and reduces the need for multiple assets, all at a comparatively low operational and per unit cost. With fewer assets devoted to submarine detection, naval commanders can use the additional assets to manage and control the undersea battle space. Because of these benefits, the Navy plans to rely on SURTASS/LFA to detect and locate enemy submarines at greater distances, before they get within effective weapons range. While SURTASS/LFA is effective at long-range detection, Navy officials still conclude that no single system is capable of providing all of the Navy’s submarine detection capabilities, and they advocate the use of multiple, complementary systems, or a “tool box” approach, to meet this need. The most effective approach to conducting antisubmarine warfare operations is a “layered defense” that begins with a long-range, early warning sensor, followed by short-range tactical active and passive sonars designed to coordinate the engagement of targets detected by the long-range system. The Navy continues to identify and develop new antisubmarine warfare technologies as well as to explore new applications of existing technologies. Because no single antisubmarine warfare technology or system meets all of the Navy’s undersea surveillance and detection requirements, the Navy continues acquisition and development efforts to increase detection efficiency and to respond to new threat challenges. A key focus of these efforts has been developing antisubmarine warfare capabilities for littoral areas. The Navy is in the process of refining and developing a variety of alternatives to take advantage of LFA technology but without its current limitations. For example, the Navy is exploring a higher frequency, lighter, and more compact LFA system design, which incorporates several advantages to enhance performance in shallow water. 
However, it is too soon to assess whether these new developments will improve submarine detection capabilities.

Conclusions

Currently, the Navy is preparing for the overall operational evaluation of SURTASS/LFA but has not developed a test plan or decided on the extent to which the system will be tested in littoral waters. Without testing in littoral areas, the Navy will not know whether the system is suitable and effective where the enemy threat is of increasing concern and detection is more challenging. In addition, testing results would provide users with a better understanding of the system’s capabilities and help the Navy make more informed decisions about investments in future submarine detection efforts. During our review, we noted to Navy officials that if they intend to operate the system in littoral areas, then they should conduct testing to gain a better understanding of the system’s advantages and limitations and of how to use it most effectively in the Navy’s “tool box” approach to antisubmarine warfare. In response, Navy officials indicated they would reconsider what testing to include in the operational evaluation.

Recommendation for Executive Action

Before the Navy operates SURTASS/LFA in littoral areas, we recommend that the Secretary of the Navy direct program officials to establish a test plan and conduct testing of the system to demonstrate its capabilities in those areas.

Agency Comments

In written comments on a draft of this report, DOD agreed with our recommendation. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD’s comments appear in appendix I. 
Scope and Methodology

To acquire information about the SURTASS/LFA program, including requirements, alternatives, acquisition, development, operations, threat assessments, history, and current status, we interviewed officials and obtained documentation from the SURTASS program office (PMW-182); the Space and Naval Warfare Command’s Intelligence, Surveillance, and Reconnaissance Directorate (PD-18); Office of the Principal Deputy Assistant Secretary of the Navy (Research, Development, and Acquisition); Office of the Deputy Assistant Secretary of the Navy for Mine and Undersea Warfare; Office of the Chief of Naval Operations Antisubmarine Warfare Requirements Division; Office of the Chief of Naval Operations Undersea Surveillance Branch; Office of the Commander Submarines Atlantic; Office of the Commander Undersea Surveillance Operations; Integrated Undersea Surveillance System Command Center; TAGOS project office, Military Sealift Command; USNS Impeccable (T-AGOS-23); Office of Naval Research; Office of Naval Intelligence; Defense Intelligence Agency; and the Naval Undersea Warfare Center. To obtain information about SURTASS/LFA operational testing, effectiveness, suitability, and performance, we interviewed officials and obtained documentation from the Office of the Director of Operational Testing and Evaluation, Office of the Assistant Secretary of Defense; the Office of the Navy Commander Operational Test and Evaluation; and many of the above identified organizations. 
To obtain information about environmental issues, requirements, assessments, and monitoring and mitigation plans, we interviewed officials and obtained documentation from the Office of the Assistant Secretary of the Navy for Installations and Environment; Office of the Chief of Naval Operations, Environmental Planning and National Environmental Policy Act Compliance Branch; the State of California Coastal Commission; the State of Hawaii Department of Land and Natural Resources; Marine Acoustics, Inc.; the National Marine Fisheries Service; the Marine Mammal Commission; the Natural Resources Defense Council; Rainbow Friends Animal Sanctuary; and the Keystone Center. We performed our work from July 2001 through March 2002 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretary of Defense; the Secretary of the Navy; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4530 or John Oppenheim at (202) 512-3111 if you or your staff have any questions concerning this report. Other major contributors to this report were Dorian Dunbar, Gary Middleton, Adam Vodraska, and Allen Westheimer.

Appendix I: Comments from the Department of Defense
For decades, the Navy has been striving to improve its ability to detect potential enemy submarines before they can get within effective weapons range of U.S. forces. In 1985, the Navy established the Surveillance Towed Array Sensor System (SURTASS) Low Frequency Active (LFA) sonar program to develop a long-range capability for detecting a new generation of quieter Soviet nuclear and diesel submarines operating principally in the open ocean. However, as the Navy conducted testing of the system in the mid-1990s, some public interest groups and scientists raised concerns that SURTASS/LFA may cause harm to marine mammals. The Navy discontinued operational testing of the system and initiated an environmental impact statement process. The Navy will not begin testing or operating the system until it receives a Letter of Authorization from the National Marine Fisheries Service. A decision on the authorization is expected later in 2002. SURTASS/LFA will increase the Navy's capability to detect submarines in the open ocean, where the system was originally intended to operate. The Navy has considered a number of existing alternatives to SURTASS/LFA and found that the system provides long-range detection capabilities not available with other systems.
Scope and Methodology

To assess DOD’s progress in attaining the knowledge it needs before the start of product development, we examined the resources (technology, communications infrastructure, and funding) committed and planned for the program, as well as the users’ needs for an SBR system. We considered DOD’s plans for maturing the critical technologies by comparing technology-readiness information for each critical technology (as well as its mature backup technology) against best practice standards to determine whether the technologies will be sufficiently mature when DOD plans to start product development. We also reviewed the SBR risk management plans and concept development contract information. We discussed these documents and issues with representatives from Air Force Space Command, Peterson Air Force Base, Colorado; and the SBR Joint Program Office, Space and Missile Systems Center, Los Angeles Air Force Base, California. To determine SBR’s role in a larger DOD architecture, we met with officials from the Joint Chiefs of Staff, Washington, D.C.; and the Air Force Directorate of Space Acquisitions, Arlington, Virginia. We also consulted past GAO reports to determine the relationship between SBR and the Transformational Communications Architecture. To determine the scope and completeness of the analysis of alternatives and its follow-on study to identify the optimal ways to gather information on ground moving targets from radars based in space versus air, we met with officials from Air Force Space Command; DOD Office of the Director, Program Analysis and Evaluation, Washington, D.C.; Air Force Directorate of Requirements for Space, Crystal City, Virginia; and the Air Force Studies and Analyses Agency, Arlington, Virginia. We also talked with an official from the Air Force Office of Aerospace Studies, Kirtland Air Force Base, New Mexico. 
We discussed overarching programmatic issues—including the level of coordination between DOD and the intelligence community—with representatives from the Air Force Directorate of Space Acquisitions. We were not able to obtain meetings with members of the Mission Requirements Board (a board within the intelligence community responsible for approving program requirements) or the intelligence agencies to discuss their stake in the SBR program. We performed our work from November 2003 through June 2004 in accordance with generally accepted government auditing standards.

Background

SBR represents the first time that DOD has taken the lead in developing a major national security space capability with the intelligence community as a partner. Because of this partnership, SBR’s acquisition process is more complex than that used for typical DOD programs. While DOD and the intelligence community will likely use all the data that SBR produces, their priorities differ. DOD’s warfighting community is particularly interested in tracking targets moving over land or sea as well as other objects of interest. The intelligence community is more focused on obtaining detailed global imagery and combining it with other data for advanced processing. SBR is expected to meet both needs and be fully integrated with other space and non-space systems, including TCA, which is to transmit SBR’s data to receivers in the air, at sea, or on the ground. A key advantage of radar in space is the ability to “see” through clouds, sand storms, and any type of weather, day or night. Radar-equipped aircraft, on the other hand, require U.S. air dominance to collect radar information and must steer clear of hostile areas—the result being limited radar coverage. The SBR concept offers other added features, including electronic steering of the radar signal toward a particular area and capturing high volumes of very fine resolution radar images of targets and terrain. 
With the ability to perform these functions almost simultaneously, SBR is expected to help analysts gain a better understanding of what is occurring in specific locations. To help meet some of its goals, DOD plans to leverage key technologies that were developed in the late 1990s to demonstrate a space-based radar capability. According to DOD officials, contractors developed some satellite hardware and prototype components under the Discoverer II program, which began in 1998 and was to identify and validate by 2008 the capability of tracking mobile ground targets from space. Discoverer II, comprising two radar demonstration satellites, was a joint initiative by the Air Force, DOD’s Defense Advanced Research Projects Agency, and the intelligence community’s National Reconnaissance Office. DOD officials told us that the Discoverer II program had reached the preliminary design review phase when it was cancelled in 2000 because of cost and schedule uncertainties, poorly explained requirements, and the lack of a coherent vision to transition the system to operational use. The Secretary of Defense concluded that space-based radar could provide a military advantage and in 2001 approved SBR as a new major defense acquisition program, delegating it to the Air Force. In July 2003, an independent cost assessment team consisting of representatives from DOD and the intelligence community estimated that $28.6 billion would be needed to pay for SBR’s life-cycle costs—development, production, launch, and operation. The program entered the study phase in August 2003. The Air Force has requested $328 million for SBR in fiscal year 2005 and has programmed about $4 billion for the program from fiscal years 2005 to 2009. Given concerns about affordability and readiness, the Fiscal Year 2005 Defense Appropriations Conference Report reduced funding for SBR to $75 million, with the direction to return this effort back to the technology development phase. 
In 2003, Congress reduced the Air Force’s $274 million budget request for SBR by $100 million due to concerns about technology maturity and schedule. DOD has scheduled the start of product development for mid-fiscal year 2006, with production starting at the end of fiscal year 2008 and the first satellite to be launched at the end of fiscal year 2012. Figure 1 shows SBR’s acquisition schedule in fiscal years.

Gaining Knowledge about Requirements and Resources before Product Development Is Important for Space Acquisition Success

In the past several decades, DOD’s space acquisitions have experienced problems that have driven up costs by hundreds of millions, even billions, of dollars; have stretched schedules by years; and have increased performance risks. In some cases, capabilities have not been delivered to the warfighter after decades of development. Our reports have shown that these problems, common among many weapon acquisitions, are largely rooted in a failure to match the customer’s requirements (desired capabilities) with the developer’s resources (technical knowledge, timing, and funding) when starting an acquisition program. In particular, our past work has shown that for space systems, product development was often started based on a rigid set of requirements that proved to be unachievable within a reasonable development time frame. Other cases involved unstable requirements. In some cases where requirements had been identified and approved, even more requirements were added after the program began. When technology did not perform as planned, adding resources in terms of time and money became the primary option for solving problems because the customer’s expectations about the product’s performance capabilities already had been set. The path traditionally taken by space programs—and other DOD weapon system programs—stands in sharp contrast to that taken by leading commercial firms. 
Our extensive body of work shows that leading companies use a product development model that helps reduce risks and increase knowledge when developing new products. This best practices model enables decision makers to be reasonably certain about their products at critical junctures during development and helps them make informed investment decisions. This knowledge-based process can be broken down into three cumulative knowledge points.

Knowledge point 1: A match must be made between the customer’s requirements and the developer’s available resources before product development starts. As noted earlier, DOD plans to start SBR product development in 2006.

Knowledge point 2: The product’s design must be stable and must meet performance requirements before initial manufacturing begins.

Knowledge point 3: The product must be producible within cost, schedule, and quality targets and demonstrated to be reliable before production begins.

Systems engineering is a technical management tool that provides the knowledge necessary at knowledge point 1 to translate requirements into specific, achievable capabilities. With systems engineering knowledge in hand, acquisition decision makers and developers can work together to close gaps between requirements and available resources—well before product development starts. Some gaps can be resolved by the developer’s investments, while others can be closed by finding technical or design alternatives. Remaining gaps—capabilities the developer does not have or cannot get without increasing the price and timing of the product beyond what decision makers will accept—must be resolved through trade-offs and negotiations. Effective use of this tool enables decision makers to move on to knowledge point 2 and to produce a stable product design. DOD has recently issued a new acquisition policy for space systems, partly intended to address past acquisition problems and provide capability to users quicker. 
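The cumulative nature of the three knowledge points can be pictured as a sequence of gates a program must pass in order, each one a precondition for the next decision. A minimal sketch, assuming purely illustrative names and criteria of our own (this is not a DOD tool or process model):

```python
from dataclasses import dataclass

@dataclass
class Program:
    """Illustrative program state; all field names are hypothetical."""
    requirements_matched_to_resources: bool = False  # knowledge point 1
    design_stable: bool = False                      # knowledge point 2
    producible_and_reliable: bool = False            # knowledge point 3

# Gates are cumulative: each later decision assumes the earlier ones held.
GATES = [
    ("start product development", "requirements_matched_to_resources"),
    ("begin initial manufacturing", "design_stable"),
    ("begin production", "producible_and_reliable"),
]

def next_allowed_decision(p: Program) -> str:
    """Return the first decision the program is not yet cleared to make."""
    for decision, criterion in GATES:
        if not getattr(p, criterion):
            return f"hold: cannot {decision} until '{criterion}' is demonstrated"
    return "cleared through all three knowledge points"
```

For example, a program that has matched requirements to resources but lacks a stable design is held at the manufacturing gate; no amount of progress at a later point substitutes for an earlier one that was skipped.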
However, we recently reported that the policy is not likely to achieve these goals because it allows programs to continue to develop technologies after product development starts. Our past work has shown that this approach makes it more difficult to estimate cost and schedule at the onset of product development and increases the likelihood that programs will encounter technical problems that could disrupt design and production and require more time and money to address than anticipated. Over the long run, the extra investment required to address these problems could reduce funding for developing other technological advances, slow the overall modernization effort, delay capabilities for the warfighter, and force unplanned—and possibly unnecessary—trade-offs between space and other weapon system programs. By contrast, DOD’s revised policy for other weapon acquisitions encourages programs to mature technologies to the point of being tested in an operational environment before beginning product development. We recommended that DOD modify its policy to separate technology development from product development so that needs can be matched with available technology, time, and money at the start of a new program. We also reported that DOD’s space acquisition policy does not require DOD to commit to setting aside funding for space acquisitions. Hence, there is no guarantee that the resources needed to meet requirements will be there on any individual program when needed. 
This makes it difficult for DOD as a whole to make corporate-level and trade-off decisions, which will likely be needed when DOD begins the SBR acquisition because (1) costs are significantly increasing for other critical space systems, such as the Space-Based Infrared System High, the Transformational Satellite, and the Evolved Expendable Launch Vehicle, and (2) DOD is planning to undertake additional new programs, such as the Space-Based Space Surveillance system and a new version of the Global Positioning System. DOD is revising its new space acquisition policy partly to address these issues; however, the revision was not available at the time of our review.

DOD Moving Forward on Acquiring Critical Knowledge but Gaps Remain in Approval for SBR Requirements

DOD has bolstered the SBR acquisition program by increasing senior leader and stakeholder involvement in setting requirements. However, DOD is not fully documenting commitments made during the requirements approval process before progressing to the next acquisition phase, nor has it established a process to resolve potential disagreements that may occur after approval. Clouding the approval of requirements is the fact that DOD’s current space acquisition policy does not provide specific guidance for acquisitions that involve partnerships between DOD and the intelligence community.

SBR Managed by New Executive Oversight Structure

Providing senior-level oversight are three new groups created expressly for the SBR program: the Executive Steering Group, which advises the Requirements/Capabilities Group and the Joint Senior Acquisition Group. Members of these groups come from DOD, the National Reconnaissance Office, and the National Geospatial-Intelligence Agency. All key stakeholders are expected to have open and honest discussions about what can and cannot be done within desired time frames, budgetary constraints, and achievable technologies. 
Figure 2 shows how these groups work with SBR’s joint program office and requirements review boards for DOD and the intelligence community. A primary benefit of having an oversight structure for the SBR program, which involves many decision makers from across multiple organizations, is that the right people are involved in the decision-making process and can work together to lock in their requirements. The intent is to avoid problems of the past, in which a program incurs cost, schedule, and performance risks because decision makers continue to negotiate and make trade-offs even after designers and engineers have started technology development and design work. Figure 3 shows the likely outcomes if requirements are poorly defined and are not approved or, in the case of SBR, if requirements are adequately defined and approved early in the study phase.

SBR’s Requirements-Setting Process Lacks Formal Approval and Documentation

DOD officials reported to us that the oversight groups have achieved informal consensus on requirements for SBR. However, this approval has not been formalized, and it is unclear whether and how it might be formalized. Moreover, it is unclear how disagreements that may occur after initial approval will be resolved. Regardless of how many stakeholders have been invited to join in decision making or how much expertise is included in SBR’s oversight function, the overall success of the SBR program hinges in part on whether the requirements are clear, stable, and achievable and on whether DOD and the intelligence community demonstrate commitment and accountability by formally approving the requirements. In an acquisition decision memorandum, the Under Secretary of the Air Force requested that DOD and the intelligence community approve the initial capabilities document and concept of operations before the request for proposals for concept development contracts was released in January 2004. 
DOD officials told us that the Joint Requirements Oversight Council and the intelligence community’s Mission Requirements Board approved the initial capabilities document, and there are memoranda documenting these decisions. The Joint Requirements Oversight Council reviewed the concept of operations and provided comments but did not approve it. According to DOD officials, during a meeting of the SBR Executive Steering Group, high-level officials from the intelligence community verbally approved the concept of operations, but there is no documentation recording this approval. Agreement is critical because DOD and the intelligence community are placing different emphasis on desired capabilities for SBR. An independent assessment of the SBR program determined that requirements were adequate to enter the study phase, which started in August 2003, but cautioned that the requirements needed to be converged among all stakeholders and users. Table 1 shows the type of knowledge that decision makers expect to gain from the initial capabilities document and the concept of operations. A defined requirements approval process helps decision makers resolve disagreements that may occur and helps ensure they will remain committed to their decisions after formal approval. Based on our past reports uncovering problems and on our best practices work, we believe the steps in a formal approval process include the following: explaining how decision makers’ requirements and comments are obtained and addressed; identifying the officials and/or the organizations responsible for taking specific approval action; establishing a mechanism and time frame for providing approval or establishing a system for addressing unresolved issues as they relate to key program documentation; and assessing changes to approved requirements based on their effect on the program’s cost and schedule. 
While DOD has taken steps to increase senior leader and stakeholder involvement in setting requirements and addressing acquisition issues, DOD is not fully documenting commitments made during the requirements approval process, nor has it established a process to resolve potential disagreements that may occur after approval.

DOD Taking Proactive Steps to Gain Knowledge about Resources, but Critical Gaps May Remain at Product Development

DOD is also taking positive steps to attain the knowledge needed to understand what resources will be required to develop SBR’s capabilities and to mitigate risks. These steps include relying on systems engineering to translate requirements into specific, achievable capabilities and to close gaps between requirements and resources; adopting a more comprehensive cost estimating technique to identify SBR’s life-cycle costs; exploring alternatives for SBR if TCA—the infrastructure that DOD is depending on to transmit SBR’s data—incurs schedule slips; and asking two concept development contractors to each propose at least two different operations concepts for SBR, with and without TCA. However, the path that SBR is on has potential for knowledge gaps when making investment decisions—the types of gaps that have hampered other space programs in the past. Specifically, it is expected that some critical SBR technologies will not be mature when product development starts, that is, they will not have been tested in a relevant or operational environment. Typical outcomes of this lack of knowledge are significant cost and schedule increases because of the need to fix problems later in development. Furthermore, TCA, a new, more robust communications infrastructure that could transmit SBR’s imagery data much more quickly than the current infrastructure, is facing uncertainties. Specifically, one of TCA’s primary components, the Transformational Satellite, may not be ready in time to support SBR. 
Without mature technologies and faced with a possible slip in the Transformational Satellite's schedule, DOD will be less able to accurately estimate total system costs before the start of product development. In addition, DOD and the Air Force may not have the knowledge needed to make corporate-level trade-offs between SBR and other air-based radar systems at the time they plan to commit to investing in the SBR acquisition program. DOD has undertaken an analysis to weigh the merits of space-based radar. At this time, it is not known whether this analysis will be a detailed examination of the capabilities and costs of each radar option, individually and in combination with other radar platforms, or a less rigorous examination of the mix of radar options.

DOD Taking Positive Steps to Build Foundation of Knowledge about SBR Resources

DOD is planning to aggressively address technology, affordability, and integration issues by, in part, instituting robust systems engineering processes and procedures. Systems engineering is a technical management tool for gaining information on a broad array of activities related to the development of a system. For SBR, DOD plans to perform systems engineering work on requirements and their allocation, interface definitions, trade studies, risk management, performance analysis and modeling, environmental and safety planning, test planning, program protection planning, information assurance, and configuration control. Applying systems engineering to these activities would give DOD the insight and knowledge it needs to better manage the program, including ways to reduce risk and ensure the viability of concepts and requirements.

DOD has also decided to take a more comprehensive approach to estimating SBR's life-cycle costs. According to the SBR program director, this marks the first time DOD has willingly presented all related costs to develop, acquire, produce, maintain, operate, and sustain the system.
DOD officials stated that they wanted to identify not just direct costs, but also costs for associated infrastructure, such as the costs of modifying the ground system that will be used to support SBR as well as other systems. According to DOD, about $8 billion of the $28.6 billion life-cycle cost estimate represents costs that, in the past, would not have been included in space program total cost estimates. More comprehensively identifying SBR and SBR-related costs is a positive step that will help DOD manage its portfolio of space programs.

Although DOD hopes to rely on TCA to support SBR data transmissions, it is taking a proactive approach to identifying and assessing the viability of TCA alternatives. First, in April 2004, DOD awarded two 2-year contracts for concept development efforts that call for the identification of alternatives to TCA. For each alternative identified, the contractor is to assess the cost, risk, and effect on SBR's performance. DOD officials told us that when SBR initiates product development in 2006, it will know whether TCA will be available to support SBR or whether to pursue a TCA alternative. In addition, DOD awarded two contracts totaling $510,000 for a yearlong study to propose several alternatives to TCA capable of supporting SBR's communications requirements and to analyze the viability of such alternatives. These actions have put DOD in a better position to ensure the program is successful.

The two 2-year contracts that DOD awarded in April 2004 also require that at least two different viable SBR operations concepts be proposed. DOD expects each contractor to fully develop the alternative operations concepts, which could involve using unique radar processing techniques. According to DOD, it will work with each of the contractors to pare down the alternatives to a single best concept for each contractor.
For the remainder of the contract performance period, the contractors would focus their attention on fleshing out the details associated with these concepts. This approach will put DOD in a better position when the time comes to select a single contractor to design the SBR system.

Technologies Will Not Be Mature at Product Development Start

DOD officials have said that SBR will likely be the most technically challenging, software-intensive, and complex space system ever built by DOD. The two key pieces of hardware needed to give SBR a radar capability from space—the electronically scanned array (which steers the radar signal to an area of interest) and the on-board processor (the radar-processing unit aboard SBR)—face the highest amount of risk. The electronically scanned array can scan multiple areas of interest virtually simultaneously, allowing for a simpler satellite design than conventional, mechanically slewed radar. The on-board processor is expected to allow the processing of radar data to assure the timely and thorough delivery of imagery data that will be downlinked for transmission to the warfighter.

To minimize the potential for technology development problems after the start of product development, DOD uses an analytical tool to assess technology maturity for many weapon system acquisition programs. Called Technology Readiness Levels (TRL), this tool associates a TRL with different levels of demonstrated performance, ranging from paper studies to actual application of the technology in its final form. The value of using a tool based on demonstrated performance is that it can presage the likely consequences of incorporating a technology at a given level of maturity into a product's development, enabling decision makers to make informed choices. Our previous reviews have found the use of TRLs, which range from 1 to 9, to be a best practice. (See app. I for a description of the TRLs.)
The critical technologies that will support the SBR program currently range from TRL 3 to 5. A TRL 3 means that most of the work performed so far has been based on analytical and laboratory studies. At a TRL 5, the basic technology components are integrated and tested in a simulated or laboratory environment. Table 2 shows the current TRL for each of SBR's critical technologies and the expected TRL at product development start in 2006.

In general, the program office's key risk reduction efforts are scheduled to mature these technologies to TRL 5 by the middle of fiscal year 2006. These efforts include awarding research and development contracts to three payload contractors to continue developing and maturing these components (the electronically scanned array and on-board processor). The period of performance of each contract is about 2.5 years. To mature the electronically scanned array and on-board processor technologies from TRL 3/4 to TRL 5, the contractors plan to conduct various developmental and integrative tasks in about 3 years. For example, one contractor plans to conduct 18 tasks to develop the electronically scanned array and 8 tasks to integrate the on-board processor with other system components. In addition, the development of the integrated circuits and programmable microcircuits that support the on-board processor requires extensive tests and evaluations, and the radiation-hardening requirement further complicates the development. Given the challenges of the state-of-the-art technologies being developed and the algorithms involved, the testing programs must be rigorous and transparent and the results fully documented. We have determined that the time allotted to mature the SBR technologies to TRL 5 is ambitious given the tasks that need to be accomplished. Furthermore, the development of the signal processing algorithms and communications downlink involves significant software development.
Our past software assessments of other programs show that the effort required to establish a structured testing regime for software development is consistently underestimated. By planning to start product development in fiscal year 2006 with technologies at TRL 5, DOD is very likely to continue designing the system and to conduct other program activities at the same time it builds representative models of key technologies and tests them in an environment that simulates space conditions (such as a vacuum chamber). This approach is common with DOD space acquisitions but has a problematic history. Our past work has shown that it can lead to significant cost and schedule increases because of the need to fix problems later in development. A continuing problem is that software needs are poorly understood at the beginning of a program.

We have previously recommended that DOD not allow technologies to enter into a weapon system's product development until they are assessed at TRL 7, meaning a prototype has been demonstrated in an operational environment. DOD has accepted lower TRL thresholds for space programs because testing in an operational environment—in space, for example, or even in a relevant environment—is difficult and costly. However, DOD's new space acquisition policy does not identify what the minimum TRL should be before starting product development for space programs, how risks should be mitigated if technologies are included in programs without full testing, or how lower TRLs affect the confidence of cost and schedule estimates. Moreover, the policy does not address the option of maturing technologies outside a program and pulling them in once they prove to be viable.

One way to mitigate technology risk is to rely on backup technologies, should newer technologies prove to be problematic during product development. According to DOD officials, there are backup technologies that are more mature for each of SBR's critical technologies.
The backups are the same technologies but rely on a previous, more mature version. Using previous versions of these technologies would result in a lower level of desired performance—such as a reduced area collection rate, a reduction in the total number of targets collected per satellite per day, increased product delivery time frames to the user, an increased weight of the spacecraft, and higher cost. For example, more mature versions of the electronically scanned array exist and, if used, would result in a reduction in its performance level. In addition, some previous versions of SBR technologies have not been demonstrated or tested in space. But according to DOD officials, even with backup technologies, the total performance of the SBR system can be maintained through systems engineering trades. DOD says it has been able to leverage some of the key technologies (such as the electronically scanned array) that were under development during the previous effort, Discoverer II, to demonstrate a space-based radar capability.

Communications Infrastructure May Not Be Ready in Time to Support SBR

Current plans call for TCA to transmit SBR's large volume of data to ground-, air-, ship-, and space-based systems. However, one of TCA's primary components, the Transformational Satellite—which will use technologies that DOD has never before tried in space—is facing uncertainties in its scheduled 2011 launch. DOD started product development for the Transformational Satellite in December 2003 even though technologies were immature. If the Transformational Satellite falters but SBR launches as expected in 2012, then DOD will have a fully operational, new-generation satellite that is missing its primary means of data transmission.
Recognizing the challenges, DOD is to decide by November 2004 whether to move forward with or delay the Transformational Satellite's acquisition program and instead procure another Advanced Extremely High Frequency satellite; these satellites are already under development and are based on mature technologies. Our analysis shows that alternatives to TCA may involve a greater reliance on processing aboard the SBR satellites, thereby increasing software development efforts. This approach would reduce the volume of data requiring transmission, allowing conventional satellite systems, such as the Advanced Extremely High Frequency satellites, to handle the transmission. Another likely alternative is to have SBR satellites transmit only selected portions of data, again, so that the Advanced Extremely High Frequency satellite could handle the lower volume of information. Finally, a dedicated system of satellites could be fielded for the sole purpose of transmitting SBR data, significantly increasing program cost and raising affordability issues.

Currently, DOD is working closely with officials from the Transformational Satellite program office to evaluate the relative merits of various alternatives and to document the interfaces needed between SBR and the Transformational Satellite for each alternative. During the course of our audit work, SBR program officials met weekly with the Transformational Satellite program's integrated product teams and were coordinating efforts on a memorandum of agreement on requirements development, joint engineering practices, and studies of air- and space-based options.

SBR's Cost Estimate Unlikely to Be Realistic Because of Multiple Uncertainties

Based on a notional constellation of nine (plus one spare) satellites operating in low-earth orbit, an independent cost assessment in 2003 put SBR's cost at $28.6 billion, making SBR the most expensive DOD space system ever built.
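As noted earlier, about $8 billion of this $28.6 billion life-cycle estimate represents infrastructure-related costs that older space program estimates would have omitted. The arithmetic behind that split can be sketched as follows; the variable names are illustrative, and the breakdown is only as precise as the rounded figures in the report.

```python
# Illustrative arithmetic based on the figures in the report: of the
# $28.6 billion life-cycle estimate, about $8 billion covers associated
# infrastructure (e.g., ground-system modifications) that earlier
# space program estimates would have left out.
LIFE_CYCLE_TOTAL_B = 28.6   # independent 2003 estimate, in $ billions
INFRASTRUCTURE_B = 8.0      # newly included infrastructure share

direct_and_other_b = LIFE_CYCLE_TOTAL_B - INFRASTRUCTURE_B
infrastructure_share = INFRASTRUCTURE_B / LIFE_CYCLE_TOTAL_B

print(f"Costs visible under the older estimating practice: ${direct_and_other_b:.1f}B")
print(f"Newly included infrastructure share of the total: {infrastructure_share:.0%}")
```

In other words, roughly 28 percent of the total estimate would have been invisible under the older estimating practice.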
When this initial cost estimate is revised in 2006, before SBR's product development starts, DOD is to have decided a number of issues, such as how many satellites are to be acquired, what their capabilities will be, and at what altitude(s) the satellites are to operate. This system refinement allows DOD to develop a more realistic total system cost estimate—a critical knowledge point if a successful match between requirements and resources is to be made. However, if DOD begins product development with less than mature technologies and without knowing the availability of TCA, accurate cost estimates for SBR will be much more difficult to prepare. We have previously reported that improving the reliability of cost estimates is critical and provides DOD decision makers with the appropriate information to decide whether a weapon system is worth the overall investment and whether the time is right to proceed with such an investment. Once a total cost is known, DOD needs to secure the funding so it can design, produce, operate, and sustain the system.

DOD may also lack knowledge needed to make a corporate-level decision as to how much it should invest in SBR versus air platforms with similar capabilities at the time it begins the SBR acquisition program. In November 2003, the Air Force completed an analysis of alternatives (AOA) for SBR, which was supposed to evaluate whether space- or air-based radar platforms (such as manned and unmanned aircraft with radar capabilities), or a combination of both, are better suited for tracking moving targets on land or at sea and to analyze the capabilities and costs of each suitable option. However, DOD officials raised a concern that the AOA only weighed the merits of various space-based solutions. The Air Force decided to undertake a follow-on study to explore the optimal ways to gather information on ground moving targets from radars based in space versus air.
The plan is also to use this follow-on study as part of DOD's preparations for submitting a fiscal year 2006 budget to Congress to secure funding for SBR and other radar systems on air platforms. A more thorough AOA, completed before the start of the study phase, might conceivably have determined that air-based radar could provide many or most of the capabilities promised by space-based radar but at a fraction of the cost. Moreover, this type of analysis could help DOD officials better decide whether SBR should be initiated at a later date, when critical technologies will have been matured or when the communications infrastructure to support SBR will be available. DOD officials have mentioned other ongoing studies that are examining the optimal mix between SBR and other platforms for specific capabilities, such as ground-moving target indication. However, it is unclear to what extent these studies will be factored into the SBR product development start decision.

Conclusions

DOD has recently embarked on a discovery and exploration phase for its SBR program. During this period, it is critical for programs to work toward closing knowledge gaps about requirements, technologies, funding, and other resources so they can be positioned to succeed when DOD decides to commit to making significant investments. For SBR, this would mean testing technologies to the point of knowing they can work as intended before starting product development, securing agreement on requirements with the intelligence community, and fully assessing the costs, benefits, and risks of relying on TCA and alternatives, including different mixes of air- and space-based platforms. DOD is taking positive steps toward this end, but without maturing critical technologies or securing formal commitment on requirements, it will not be able to assure decision makers that the program can be completed within cost and schedule estimates.
Should DOD decide to proceed on a path that leaves open important questions, including those about technologies, then it should do so with (1) assessments of technical risks and of what additional resources (in terms of time and money) would be needed to address problems that may occur during development, as well as what trade-offs would need to be made with other space programs should DOD need to invest additional resources in SBR, and (2) a formal commitment to providing additional resources if problems do occur.

Recommendations for Executive Action

To better ensure that DOD and its intelligence community partners obtain the additional knowledge they need to determine whether and when to begin the SBR acquisition program, we recommend that the Secretary of Defense direct the Under Secretary of the Air Force to:

- Direct the SBR Executive Steering Group to ensure that outcomes from the requirements management process are formally approved and documented as the program proceeds toward product development, before an investment is made beyond technology and concept development for the SBR program. This group should identify how key document review comments are to be obtained and addressed and identify all the officials and/or organizations responsible for taking specific approval action. In addition, the group should establish a mechanism and time frame for providing approval/disapproval. Finally, the group should establish a formal mechanism for addressing unresolved issues as they relate to key program documentation, as well as how changes to approved requirements will be assessed.
- Modify DOD's space acquisition policy to reflect protocols for setting requirements when DOD undertakes programs in partnership with the intelligence community.
- Delay approval to commit funding to product development (key decision point B) for SBR until technologies have been demonstrated in a relevant or operational environment so DOD can more reliably estimate the resources needed to complete the program. If the Under Secretary determines that the program should go forward with less mature technologies, then we recommend that the Under Secretary (1) assess the backup technologies, which may lessen capability and add cost to the program, along with the additional time and money that may be required to meet SBR's performance objectives and address those risks, (2) assess the trade-offs that may need to be made with other space programs to assure SBR's successful outcome, and (3) secure formal commitments from DOD to provide funding for total estimated costs as well as costs estimated to address potential technical risks.
- Strengthen the ongoing study of options for tracking ground-moving targets by ensuring this work includes (1) a full range of air and space options, (2) measures of effectiveness that would help justify choosing SBR over air options, and (3) the possibility of having to rely on TCA alternatives for space options. This work should also consider the results of analyses being conducted by other DOD entities on tracking ground-moving targets.

Agency Comments and Our Evaluation

We received written comments on a draft of this report from the Deputy Under Secretary of Defense (Programs, Requirements, and Resources) within the Office of the Under Secretary of Defense for Intelligence. DOD generally agreed with our findings and our recommendation to strengthen its study of SBR alternatives. DOD partially agreed with our recommendations to strengthen its requirements setting process for SBR and to demonstrate SBR technologies in a relevant or operational environment before committing to product development.
DOD did not agree with our recommendation to modify its acquisition policy to strengthen requirements setting. In commenting on our recommendations, DOD agreed in principle with the need to extensively define, analyze, and validate requirements for SBR, but it did not believe this necessitated a different requirements setting process than the one that is in place for SBR or changes to its space acquisition policy or that additional controls were needed within the program’s study phase. To clarify, our recommendation was not intended to construct a new requirements setting process or supplant activities undertaken by the Joint Requirements Oversight Council or the Mission Requirements Board, as DOD asserts. Rather, we recommend that DOD build on the positive requirements setting procedures it has already put in place by instituting controls and mechanisms that ensure transparency, discipline, and accountability with requirements setting. As noted in our report, while DOD has taken steps to increase senior leader and stakeholder involvement in requirements setting, it is not fully documenting commitments made during the requirements approval process, nor has it established a process to resolve potential disagreements that may occur after approval. It is important that this discipline be instilled in the study phase and throughout the SBR effort. As noted in previous reports, many space programs have not been executed within cost and schedule estimates because of an inability to establish firm requirements and to make and enforce trade-off decisions. For SBR, the potential for difficulty in requirements setting is higher because of the distinct needs of the intelligence community and DOD’s desire to integrate SBR with other radar platforms. 
Moreover, revising the acquisition policy to clearly communicate protocols that should be followed when DOD undertakes space programs in the future involving diverse users—such as the intelligence community, military services, industry, and/or other agencies—would further help DOD rationalize requirements setting and solidify relationships with users, which DOD reported was a top SBR management issue.

In regard to our recommendation to delay product development until SBR technologies are sufficiently matured, DOD stated that it has planned for critical and most other enabling technologies to be demonstrated at least at the component level in a relevant environment on the ground. DOD also stated that, where technically and fiscally feasible, it planned to pursue on-orbit demonstrations. It also stated it has taken some actions relating to our recommendation, such as accounting for technical risks in the costing and budgeting process. DOD asserted, however, that our recommendation encourages pursuit of older, more proven technologies. We recommended that DOD pursue relevant or operational environment demonstrations of all critical technologies, and even an integrated system, before committing to a formal acquisition program because this practice enables a program to align customer expectations with resources and thereby minimize problems that could hurt a program in its design and production phases and drive up cost and schedule. Further, we agree that continuing to develop leading-edge technology is important for space system capabilities. However, history has shown, and we have repeatedly reported, that conducting technology development within a product environment consistently delays the delivery of capability to the user, robs other programs of necessary funds through unanticipated cost overruns, and, consequently, can result in money wasted and fewer units produced than originally stated as necessary.
A technology development environment is more forgiving and less costly than a delivery-oriented acquisition program environment. Events such as test "failures," new discoveries, and time spent in attaining knowledge are considered normal in this environment. Further, judgments of technology maturity have proven to be insufficient as the basis for accurate estimates of program risks as they relate to cost, schedule, and capability. Lastly, our report noted that DOD was taking positive actions to gain knowledge about technology readiness, including strengthening systems engineering, undertaking risk assessments, and assessing various technical concepts. Given the potential cost of the program, our recommendation focuses on taking these steps further by assessing what trade-offs may need to be made with other space programs should the program encounter technical problems that require more time and money than anticipated and securing commitments to provide resources needed to address such problems. DOD's detailed comments are provided in appendix II.

We plan to provide copies of this report to the Secretary of Defense, the Secretary of the Air Force, and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or Arthur Gallegos at (303) 572-7368. Other key contributors to the report include Tony Beckham, Cristina Chaplain, Lily Chin, Maria Durant, Nancy Rothlisberger, and Hai V. Tran.

Appendix I: TRL Scale for Assessing Critical Technologies

TRL 1: Lowest level of technology readiness. Scientific research begins to be translated into applied research and development (R&D). Examples might include paper studies.

TRL 2: Invention begins. Once basic principles are observed, practical applications can be invented. Examples are still limited to paper studies.

TRL 3: Active R&D is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.

TRL 4: Basic technological components are integrated to establish that they will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in a laboratory.

TRL 5: Fidelity of breadboard technology increases significantly. Basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components.

TRL 6: A representative model or prototype system, well beyond the breadboard tested for level 5, is tested in a relevant environment. Represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high-fidelity laboratory environment or in a simulated operational environment.

TRL 7: Prototype near, or at, planned operational system. Represents a major step up from TRL 6, requiring demonstration of an actual system prototype in an operational environment such as an aircraft, vehicle, or space.

TRL 8: Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include Developmental Test and Evaluation of the system in its intended weapon system to determine if it meets design specifications.

TRL 9: Actual application of the technology in its final form and under mission conditions, such as those encountered in Operational Test and Evaluation. Examples include using the system under operational mission conditions.
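The scale above lends itself to a simple sketch of the maturity-gap arithmetic discussed in the body of this report: SBR's critical technologies sit at TRL 3 to 5 today, the program plans for TRL 5 at product development start, and best practice calls for TRL 7. The TRL values below come from the report; the function and variable names are illustrative only.

```python
# Illustrative sketch of the TRL gap described in the report.
# TRL values come from the report; all names are hypothetical.
CURRENT_TRLS = {
    "electronically scanned array": 3,  # report: TRL 3/4 today
    "on-board processor": 4,
}
PLANNED_AT_DEVELOPMENT_START = 5  # program office goal for fiscal year 2006
BEST_PRACTICE_THRESHOLD = 7       # prototype demonstrated in an operational environment

def maturity_gap(trl: int, threshold: int = BEST_PRACTICE_THRESHOLD) -> int:
    """Readiness levels still to be demonstrated before the threshold is met."""
    return max(0, threshold - trl)

for tech, trl in CURRENT_TRLS.items():
    print(f"{tech}: {maturity_gap(trl)} levels short of best practice")

# Even if the planned TRL 5 is reached on schedule, a two-level gap to
# the best-practice threshold would remain at product development start.
print("gap at planned start:", maturity_gap(PLANNED_AT_DEVELOPMENT_START))
```

The sketch makes the report's point concrete: starting product development at TRL 5 leaves two levels of demonstrated performance, relevant-environment and operational-environment testing, still to be proven while design work proceeds.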
Appendix II: Comments from the Department of Defense

GAO's Comments

The following are GAO's comments on the Department of Defense's letter dated June 29, 2004.

1. DOD stated that DOD does not require formal approval for the concepts of operations from the Joint Requirements Oversight Council or the Mission Requirements Board, but noted that the Joint Requirements Oversight Council communicated agreement in a memo. As we reported, the Under Secretary of the Air Force requested that both DOD and the intelligence community approve the initial capabilities document and concept of operations in light of the complexity of SBR's acquisition process, the partnership with the intelligence community, and the proposed integration with other radar platforms.

2. DOD stated that it is not engaged in a partnership with the intelligence community on SBR, as our report states. Specifically, DOD stated that SBR is wholly funded in the defense budget and that a programmatic commitment with the intelligence community does not exist. DOD's SBR System Acquisition Strategy was signed by senior-level officials from DOD, the National Reconnaissance Office, and the National Geospatial-Intelligence Agency and approved on January 14, 2004. This strategy states that the Air Force, in close partnership with the National Reconnaissance Office and National Geospatial-Intelligence Agency, is responsible for leading development of an SBR capability. This strategy further identifies the responsibilities related to SBR that each mission partner (National Reconnaissance Office and National Geospatial-Intelligence Agency) is supposed to carry out. We disagree with DOD's assertion that these organizations must provide funding to SBR in order to consummate a partnership. Because SBR is being justified on the basis of the system's ability to provide intelligence, surveillance, and reconnaissance products to both DOD and the intelligence community, the part of the budget used is not relevant to our finding.

3.
To clarify, we did not recommend that DOD pursue lower risk technologies that would result in lower levels of desired performance. Instead, we reported that DOD might have to resort to using backup technologies if the current ones prove to be problematic during product development. We recommended that DOD assess the cost to the program, in terms of time and money, of having to use the backup technologies DOD has already identified.
Missing among the Department of Defense's (DOD) portfolio of systems is a capability to track stationary and moving enemy vehicles on land or at sea in any type of weather, day or night, from space. To meet this need, DOD and the intelligence community are collaborating on the ambitious Space-Based Radar (SBR) program. By leveraging the newest generation of radar technologies, the SBR concept promises to deliver high-quality data to a wide array of users. DOD intends to start product development in 2006 and to field SBR satellites as quickly as possible so that warfighters, the intelligence community, and national decision makers can gain a better understanding of what adversaries are doing in specific locations around the world. GAO reviewed the SBR program to assess DOD's progress in attaining the knowledge it needs by 2006 in terms of customer needs (or requirements) and resources.

Although SBR is 2 years away from product development, the program already faces major challenges. DOD officials say SBR will likely be the most expensive and technically challenging space system ever built by DOD. The acquisition time frame is much shorter than what has been achieved in the past for other complex satellite systems. Finally, DOD is setting a precedent by taking the lead on developing SBR with the intelligence community as a partner.

Most DOD space programs that GAO has reviewed in the past several decades were hampered by schedule and cost growth and performance shortfalls. Problems were largely rooted in a failure to match requirements with resources when starting product development. Commitments were made without knowing whether technologies being pursued would work as intended. To avoid these problems, leading commercial firms have adopted a knowledge-based model that enables decision makers to be reasonably certain about their products at critical junctures and helps them make informed investment decisions.
Although DOD has taken positive steps to strengthen the involvement of senior leaders within DOD and the intelligence community in setting requirements, SBR's concept of operations has not been approved and signed by requirements boards for either of the two partners. Without documentation and formal approval, it is unclear who will be held accountable for setting requirements or how disagreements among SBR's partners will be resolved when DOD moves SBR into ensuing phases of acquisition.

DOD has adopted noteworthy practices to gain knowledge about SBR's resources. These include maximizing the use of systems engineering to close gaps between requirements and resources; estimating all of SBR's costs; exploring alternatives for SBR if the Transformational Communications Architecture (TCA)--the communications infrastructure that is expected to relay SBR data across a network of users--incurs schedule and performance shortfalls; and asking contractors to propose multiple operations concepts for SBR with or without TCA.

Despite these accomplishments, DOD is at risk of knowledge gaps. SBR's critical technologies will not be mature when product development starts, as called for by best practices. One of TCA's primary components may not be ready in time to support SBR data. These knowledge gaps make it harder for DOD to reliably estimate how much time and money are needed to complete SBR's development. If TCA is delayed, DOD's alternatives may involve reducing SBR's capabilities or significantly increasing program cost. Without sufficient knowledge, DOD may not be able to determine by the time SBR's product development starts in 2006 whether space-based radar is best suited to tracking moving targets on land or at sea or whether air-based radar would provide enough capabilities at far less cost.
More specific analyses would help DOD weigh the merits of various alternatives and assess how much to invest in the SBR acquisition program versus air platforms with similar capabilities.
Background

In fiscal year 2011, VA provided about $4.3 billion in pension benefits for about 517,000 recipients. These benefits are available to low-income wartime veterans who are 65 and older, or who are under age 65 but are permanently and totally disabled due to conditions unrelated to their military service. Surviving spouses and dependent children may also qualify for these benefits. Average annual payments in fiscal year 2011 were $9,669 for veterans and $6,209 for survivors. VA provides pension benefits through its Veterans Benefits Administration (VBA), and claims processors assess claims at VBA's three pension management centers (PMCs). VA also accredits representatives of veterans service organizations, attorneys, and claims agents to assist claimants with the preparation and submission of VA claims at no charge. To become accredited, an individual must meet certain requirements set forth in federal law.

To qualify for pension benefits, claimants' countable income must not exceed annual pension limits that are set by statute. These income limits are also the maximum annual pension payment that a beneficiary may receive and may vary based on whether claimants are veterans or survivors, their family composition, and whether they need enhanced benefits, such as aid and attendance or housebound benefits. For example, to qualify for pension benefits in 2012, a veteran with no dependents who is in need of aid and attendance benefits cannot have income that exceeds $20,447. In determining if a claimant's income is below program thresholds, VA includes recurring sources of income such as Social Security retirement and disability benefits, but not income from public assistance programs such as Supplemental Security Income (SSI). See 38 C.F.R. § 3.275. For claimants who are veterans, VA also assesses the net worth of the veteran's spouse to determine financial eligibility. VA assesses net worth to determine if claimants' financial resources are sufficient to pay for their expenses without assistance from VA.
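The income test just described is simple arithmetic and can be sketched as follows. The $20,447 annual limit is the 2012 figure cited in the text; the income-source names and dollar amounts are illustrative assumptions, not VA's actual claims-processing logic.

```python
# Hedged sketch of the VA pension income test described above.
# The $20,447 annual limit is the 2012 figure cited for a veteran with
# no dependents who needs aid and attendance benefits; the income
# source names below are illustrative assumptions.

EXCLUDED_SOURCES = {"ssi"}  # public assistance such as SSI is not counted

def countable_income(income_by_source):
    """Sum recurring income, excluding public-assistance sources."""
    return sum(amount for source, amount in income_by_source.items()
               if source not in EXCLUDED_SOURCES)

def meets_income_test(income_by_source, annual_limit=20447):
    """True if countable income is at or below the pension limit."""
    return countable_income(income_by_source) <= annual_limit

income = {"social_security_retirement": 14000, "ssi": 3000}
print(countable_income(income))   # 14000; the SSI amount is excluded
print(meets_income_test(income))  # True
```

As the sketch shows, Social Security benefits count toward the limit while SSI does not, so two claimants with the same total receipts can fare differently under the test.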
VA pension claimants may also be eligible for other means-tested programs, such as Medicaid, a joint federal-state health care financing program that provides coverage for long-term care services for certain individuals who meet specific income and resource thresholds. Each state administers its Medicaid program and establishes specific income and resource eligibility requirements that must fall within federal standards. Similarly, the SSI program provides cash benefits to individuals who are 65 or older, blind, or disabled who have limited income and whose resources are $2,000 or less.

Organizations Help Veterans Transfer Assets to Qualify for Pension Benefits

We identified over 200 organizations located throughout the country that market their services to help veterans and their surviving spouses qualify for VA pension benefits by transferring or preserving excess assets. These organizations consist primarily of financial planners and attorneys offering products and services, such as annuities and the establishment of trusts, to enable potential VA pension claimants with excess assets to meet financial eligibility criteria for VA pension benefits. For example, one organization's website advertised that it develops financial plans which include various insurance products, and that its specific area of expertise is to help VA pension claimants with hundreds of thousands of dollars in assets obtain approval for these benefits. Under current federal law and regulations, VA pension claimants are allowed to transfer assets and reduce their net worth prior to applying for benefits, so services being marketed and provided by these organizations to qualify for VA pension benefits are legally permissible under program rules.
In contrast, for Medicaid—another means-tested program—federal law explicitly restricts eligibility for coverage for long-term care for certain individuals who transfer assets for less than fair market value prior to applying. As a result, when an individual applies for Medicaid coverage for long-term care, states conduct a look-back—a review to determine if the applicant transferred assets for less than fair market value prior to applying. Individuals who have transferred assets for less than fair market value during the 60 months prior to applying may be denied eligibility for long-term care coverage for a period of time, known as the penalty period. For example, gifting assets would generally be considered a transfer of assets for less than fair market value and result in a penalty period. Also, under the SSI program, claimants who transfer assets for less than fair market value prior to applying may become ineligible for these benefits for up to 36 months.

During our investigative calls, all 19 organizations correctly noted that pension claimants can legally transfer assets prior to applying. They indicated it is possible to qualify for VA pension benefits despite having excess assets, and almost all provided information on how to transfer these assets. (See figure 1 for transcript excerpts of calls with organizations on services they provide to qualify for VA pension benefits.) A number of different strategies may be used to transfer pension claimants' excess assets so that they meet financial eligibility thresholds. Among the 19 organizations our investigative staff contacted, about half advised transferring excess assets into an irrevocable trust with a family member as the trustee to direct funds to pay for the veteran's expenses. A similar number also advised placing excess assets into some type of annuity. Among these, several advised placing excess assets into an immediate annuity that generates income for the client.
In employing this strategy, assets that VA would count when determining financial eligibility for pension benefits are converted into monthly income. This monthly income would fall below program thresholds and enable the claimant to still qualify for the benefits. About one-third of the organizations recommended strategies that included the use of both annuities and trusts. For example, one organization we contacted advised repositioning some excess assets into an irrevocable trust, with the son as the trustee, and placing remaining excess assets into a deferred annuity that would not be completely accessible, since most of the funds could not be withdrawn without a penalty. In addition, several organization representatives we interviewed also told us they may advise using caretaker agreements to enable a client to qualify for VA pension benefits. Organizations told us this strategy generally involves the pension claimant transferring assets to family members as part of a contract, in exchange for caretaker services to be provided by these family members for the remainder of the claimant’s lifetime. Some organization representatives we interviewed told us that transferring assets to qualify for VA pension benefits is advantageous for elderly pension claimants because it enables them to have more income to pay for care expenses and remain out of a nursing home for a longer period of time. For example, representatives from one organization said the use of immediate income annuities allows pension claimants to increase their monthly income that, combined with the VA pension, could help pay for assisted living or in-home care costs. Other financial planners and attorneys said if claimants do not conduct financial or estate planning to qualify for the VA pension and instead spend down their assets prior to applying, the monthly amount of the pension benefit they eventually receive may be insufficient to pay for their long-term care. 
They said that, as a result, these claimants may decide to seek Medicaid coverage for nursing home care because of their lack of financial resources, when they could have remained in an assisted living facility or at home with the aid of the VA pension. Some of these organizations told us that nursing home care financed by Medicaid is more costly for the government than if the veteran had received the VA pension benefit and obtained care in a lower-cost assisted living facility. Many organizations we identified also conduct presentations on VA pension benefits at assisted living or retirement communities to identify prospective clients. According to attorneys and officials from state attorneys general offices we spoke with, managers of assisted living facilities or retirement communities may have an interest in inviting organization representatives to conduct presentations on VA pension benefits because these benefits allow them to obtain new residents by making the costs more affordable. For example, we obtained documentation indicating that one retirement community paid an organization representative a fee for a new resident he helped the facility obtain. Another community in another state paid organization representatives fees to assist residents in completing the VA pension application.

Some Products and Services May Adversely Affect Claimants

Some products may not be suitable for elderly veterans because purchasers may lose access to funds they may need for future expenses, such as medical care. To help elderly clients become financially eligible for VA pension benefits, some organizations may sell deferred annuities which would make the client unable to access the funds in the annuity during their expected lifetime without facing high withdrawal fees, according to some attorneys we spoke with.
An elderly advocacy organization representative we spoke with also noted that elderly individuals are impoverishing themselves by purchasing these products when they may need the transferred assets to pay for their long-term care expenses. As part of our investigative work, one organization provided a financial plan to qualify for VA pension benefits that included both an immediate annuity and a deferred annuity for an 86-year-old veteran that would generate payments only after the veteran's life expectancy. Some organizations that assist in transferring assets to qualify people for VA pension benefits may not consider the implications of these transfers on eligibility for Medicaid coverage for long-term care. Individuals who transfer assets to qualify for the VA pension may become ineligible for Medicaid coverage for long-term care services they may need in the future. For example, asset transfers that may enable someone to qualify for the VA pension program, such as gifts to someone not residing in a claimant's household, the purchase of deferred annuities, or the establishment of trusts, may result in a delay in Medicaid eligibility if the assets were transferred for less than fair market value during Medicaid's 60-month look-back period. According to several attorneys we spoke with, some organization representatives are unaware of, or indifferent to, the adverse effects on Medicaid eligibility of the products and services they market to qualify for the VA pension. As a result, potential pension claimants may be unaware that the purchase of these products and services may subsequently delay their eligibility for Medicaid. In addition to the potential adverse impact of transferring assets, we heard concerns that marketing strategies used by some of these companies may be misleading.
According to several attorneys we spoke with, some organization representatives market their services in a way that may lead potential pension claimants and their family members to believe they are veterans advocates working for a nonprofit organization, or are endorsed by VA. As a result, they may fail to realize these representatives are primarily interested in selling financial products. For example, some organization representatives may tell attendees during presentations at assisted living facilities that their services consist of providing information on VA pension benefits and assisting with the application, and do not disclose they are insurance agents selling annuities to help people qualify for these benefits. One elder law attorney we spoke with said many attendees at these presentations may have Alzheimer’s disease or dementia and are not in a position to make decisions about their finances. Therefore, they are vulnerable to being convinced that they must purchase a financial product to qualify for these benefits. Concerns have also been raised that VA’s accreditation of individuals to assist with applying for VA benefits may have unintended consequences. According to attorneys and officials in one state, organization representatives use their VA accreditation to assist in preparing claims as a marketing tool that generates trust and allows them to attract clients. Claimants may not understand that this accreditation only means that the individual is proficient in VA’s policies and procedures to assist in preparing and submitting VA benefits claims and does not ensure the products and services these individuals are selling are in claimants’ best interests. Finally, some organizations may provide erroneous information to clients, or fail to follow through on assisting them with submitting the pension application, which can adversely affect pension claimants. 
For example, one veteran said he was told by an organization representative to sell his home prior to applying for the VA pension and that he did not have to report the proceeds from the sale on the application. He followed this advice and was approved for benefits, but VA later identified these assets, which caused him to incur a debt to VA of $40,000 resulting from a benefit overpayment. Organizations may also promise assistance with the application process to any interested pension claimant but, unbeknownst to the claimant, may not follow through in providing this service if the claimant does not want to transfer assets. For example, the daughter of a veteran we spoke with, who sought application assistance from an organization representative, told us the representative never submitted her father's pension claim to VA as promised. She learned of this about a year after she thought the claim was submitted and had to reapply through a county veterans service officer. Her father was approved 2 months later but passed away less than a month after his approval. She believes her father could have received benefits for a year if the representative had submitted the claim, and believes the representative did not do so because she did not want to use his services to transfer assets.

Costs for Services to Transfer Assets Varied, but Some Organizations May Be Charging Prohibited Fees

The costs of services provided by these organizations to assist in qualifying for VA pension benefits varied, but organizations may be charging prohibited fees. Among the 19 organizations our investigative staff contacted for this review, about one-third said they did not charge for their services to help qualify claimants for VA pension benefits.
For example, financial planners told us that, generally, there are no direct costs associated with transferring assets into an annuity, but that costs would be included in the terms of the annuity, such as the commission earned by the insurance agent. Among organizations that did charge for services, fees ranged from a few hundred dollars for benefits counseling up to $10,000 for the establishment of a trust. Also, although federal law prohibits charging fees to assist in completing and submitting applications for VA benefits, representatives from veterans advocacy groups and some attorneys we spoke with raised concerns that these organizations may be charging fees related to the application, or may find ways to circumvent this prohibition, such as by claiming they are charging for benefits counseling. For example, one organization our investigative staff contacted charged $850 to have an attorney work on the application process, a $225 analysis fee, and $1,600 for the establishment of a trust. Another organization representative indicated he charged a "long term planning fee" of $1,200 to be paid prior to services being provided. The organization representative asked that someone other than the veteran pay this fee, claiming that fees can be charged only to disinterested third parties, not to the veteran. Also, in a case identified by officials in one state, a veteran was charged $3,000 by an individual for assistance in applying for VA pension benefits which were ultimately denied. In addition, concerns have been raised that fees charged may be excessive for the services provided. For example, in July 2011, California enacted a law generally prohibiting unreasonable fees from being charged for these services.

Concluding Observations

The VA pension program provides a critical benefit to veterans, many of whom are elderly and have limited financial resources to support themselves.
Current federal law allows individuals to transfer assets prior to applying for VA pension benefits and still be approved. As a result, claimants who have sufficient assets to pay for their expenses can transfer these assets and qualify for this means-tested benefit. This arrangement circumvents the intended purpose of the program and wastes taxpayer dollars. A number of organizations help claimants with these asset transfers, but some of the products and services provided may have adverse implications for the claimant, such as delaying eligibility for Medicaid coverage for long-term care or causing claimants to lose access to their financial resources. Accordingly, we asked Congress to consider establishing a look-back and penalty period for pension claimants who transfer assets at less than fair market value prior to applying for pension benefits, similar to other federally supported means-tested programs. Chairman Kohl, Ranking Member Corker, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

For further information about this testimony, please contact Daniel Bertoni at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include Jeremy Cox, Paul Desaulniers, Alex Galuten, Nelson Olhero, Martin Scire, and Walter Vance.
This testimony discusses the Department of Veterans Affairs' (VA) pension program, which provides economic benefits to wartime veterans age 65 and older or who have disabilities that are unrelated to their military service, as well as to their surviving spouses and dependent children. To qualify for VA pension benefits, a claimant must have limited income and assets. Recently, concerns have been raised that some organizations are marketing financial products and other services to help individuals whose assets exceed the program's financial eligibility thresholds qualify for these benefits. These organizations may charge substantial fees for products and services that may not always be in claimants' best long-term interests. In our report released today on VA's pension program, we identified vulnerabilities in VA's procedures for assessing financial eligibility. We also found that there is no prohibition on claimants transferring assets prior to applying for benefits, and some claimants do so. Other means-tested programs, such as Medicaid, conduct a look-back review to determine if an individual has transferred assets for less than fair market value, and if so, may deny eligibility for benefits for a period of time. This control helps ensure that only those in financial need receive benefits. This testimony is based on our report and focuses on what is known about organizations that are marketing financial products and services to veterans and survivors to enable them to qualify for VA pension benefits. In summary, we identified over 200 organizations that market financial and estate planning services to help pension claimants with excess assets qualify for pension benefits. These organizations consist primarily of financial planners and attorneys who offer products such as annuities and trusts.
All 19 organizations our investigative staff contacted said a claimant can qualify for pension benefits by transferring assets before applying, which is permitted under the program. Two organization representatives said they helped pension claimants with substantial assets, including millionaires, obtain VA's approval for benefits. Some products and services provided, such as deferred annuities, may not be suitable for the elderly because they may not have access to their funds within their expected lifetime without facing high withdrawal fees. Also, such asset transfers may result in ineligibility for Medicaid coverage for long-term care for a period of time. The majority of the 19 organizations contacted charged fees, ranging from a few hundred dollars for benefits counseling to $10,000 for establishment of a trust. In our report we asked Congress to consider establishing a look-back and penalty period for pension claimants who transfer assets prior to applying for pension benefits, similar to other federally supported means-tested programs, such as Medicaid. We also recommended that VA obtain timely information on asset transfers, strengthen income and asset verification processes, and provide clearer guidance to claims processors. VA concurred with our recommendations and agreed that a look-back and penalty period for asset transfers was needed.
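The Medicaid look-back mechanism repeatedly referenced in this testimony is, at bottom, a simple calculation: the penalty period is commonly derived by dividing the uncompensated value of assets transferred during the look-back window by a state-set average monthly cost of care. The sketch below illustrates that arithmetic; the $6,000 divisor is a hypothetical figure, not any state's actual rate.

```python
# Hedged sketch of the Medicaid transfer-penalty arithmetic described
# in this testimony. The 60-month look-back comes from the text; the
# $6,000 average monthly care-cost divisor is a hypothetical figure,
# not any state's actual rate.

LOOK_BACK_MONTHS = 60  # transfers within 60 months of applying are reviewed

def within_look_back(months_before_application):
    """Was the transfer recent enough to be reviewed?"""
    return months_before_application <= LOOK_BACK_MONTHS

def penalty_months(uncompensated_transfers, avg_monthly_care_cost=6000):
    """Months of ineligibility for long-term care coverage."""
    return uncompensated_transfers / avg_monthly_care_cost

# A $120,000 gift made 24 months before applying falls in the
# look-back window and, under this divisor, delays eligibility
# by 20 months.
print(within_look_back(24))    # True
print(penalty_months(120_000)) # 20.0
```

Because the penalty scales with the amount transferred, larger transfers produce proportionally longer periods of ineligibility, which is what gives the look-back its deterrent effect.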
Background

The Department budgets billions of dollars each year to purchase and repair spare parts and has established various programs to help ensure product quality throughout the acquisition and repair processes. According to Navy and Defense Logistics Agency quality assurance officials, although it is normal in manufacturing processes that some deficient parts will be delivered to the end users, the rate of deficiencies is to be managed and controlled to acceptable levels. In fact, according to these officials, spare parts quality is of greater importance in today's environment of increased deployments, extended preventive maintenance cycles, and just-in-time deliveries of parts. A Defense Logistics Agency joint-service directive for quality deficiency reporting regulates Department activities. The directive establishes a Department-wide system for the military services and other activities to report, investigate, and correct the causes of individual problems and identify trends and recurring deficiencies. The system is to be used to document the quality of spare parts delivered for use in the maintenance and repair processes and is the key program for documenting quality deficiency data Navy-wide. When deficient parts are delivered and are detected by end users, such as military supply or maintenance personnel, these users are required to report the details of the deficiencies under a uniform set of guidelines. Each military service and agency manages its own quality deficiency reporting program. The Navy's Product Quality Deficiency Reporting Program is designed to document and report unsatisfactory material, initiate corrective action to fix or replace deficient items, grant credit or compensation for items, and take actions to prevent recurrence. The program applies to all Navy activities.
Navy activities report quality deficiency data both internally and to a broader program, known as the Product Data Reporting and Evaluation Program, where it is managed as one of several databases. This broader program is a Navy-wide automated system for collecting data on the quality of material and products furnished by contractors. The quality deficiency report is also one of several records used in the Navy's Red/Yellow/Green Program, another Product Data Reporting and Evaluation Program application used to reduce the risk of receiving nonconforming products and late shipments. The Assistant Secretary of the Navy for Research, Development, and Acquisition has program authority and sets policy, and the Naval Sea Systems Command administers the Product Quality Deficiency Reporting Program. Navy program executive officers, program managers, and commanders of the naval systems commands are to process quality deficiency reports for their systems and equipment in accordance with instructions and to ensure people are properly trained in reporting deficiencies. Quality deficiency reporting data are generally sent to a screening point, such as the Navy's Inventory Control Point, for review and forwarding to an appropriate item manager or action point to determine the causes and who is responsible for the deficiencies. Once a determination is made and final disposition occurs, the quality deficiency reporting database is to be updated to reflect the results.

The Navy's Deficiency Reporting Program Is Largely Ineffective in Gathering Needed Data

The Navy's Product Quality Deficiency Reporting Program has been largely ineffective in gathering the data needed for analyses so that Navy managers can accurately report on and correct deficiencies in the quality of spare parts being provided to end users, including maintenance personnel in field activities.
Specifically, the program data were incomplete and of limited value because they were underreported, did not include information on parts that failed prematurely, and omitted key data on the causes of failures. To a large extent, the program's ineffectiveness can be attributed to limited training and incentives to report deficiencies, lack of management emphasis, and competing priorities for the staff resources needed to carry out the program. In addition, a contributing cause of the program's ineffectiveness may be a lack of Navy-wide visibility into program results.

Quality Deficiencies Are Underreported

Many deficiencies in the quality of new and rebuilt parts have occurred that were not reported to the program. As a result, program data are unreliable and can be misleading in determining the significance of deficient parts and conducting trend analyses. Using information gathered under the program, the Navy produces an annual report summarizing quality deficiency data, such as the number of deficiency reports, the value of deficient material, and related information on new and reworked material, including rebuilt parts. Data system managers said this report is the most comprehensive attempt to collect and analyze Navy-wide quality deficiency data. The May 2000 report showed that during calendar years 1997-99 the number of deficiency reports averaged about 6,500 per year, representing all parts that were reported as deficient at the time of installation by the Naval Sea Systems Command, the Naval Air Systems Command, and the Naval Supply Systems Command. Each report identified one or more deficient parts to be evaluated for possible quality problems. During our visits to a number of ship and aircraft maintenance and operating units, we were told that not all deficiencies on parts that failed at the time of installation were reported, and we found that the estimated extent of underreporting varied.
For example, one ship maintenance unit that handled munitions systems reported most deficiencies. However, an aircraft squadron commander said his unit documented quality deficiency problems and knew of the reporting requirements, but the unit rarely reported these problems to the quality deficiency program because there was no incentive to do so. In addition, Navy supervisor of shipbuilding officials at one facility told us that the full-time position dedicated to quality deficiency reporting had been eliminated and the work was reassigned to another staff member as a collateral duty. As a result, the number of quality deficiency reports dropped from about 200 each year in calendar years 1997 and 1998 to 34 in calendar year 2000. Also, Navy officials said several major Navy program offices have their own internal systems for handling parts quality deficiencies and do not share this information with the Navy’s quality deficiency reporting program. While these examples are not intended to reflect the experience of all Navy units, they indicate how the data gathered by the program can vary among units based on factors other than the actual numbers of quality deficiencies encountered by maintenance personnel. Various unit commanders indicated that their efforts to report spare parts quality deficiencies were limited because accurate and complete data needed for the reporting process were not available on the failed parts and increased resources would be needed to make meaningful improvements in the availability of these data. Due to the underreporting, the interpretation of program data can be misleading. For example, in a March 2001 two-page report to the Congress on how the Department planned to improve its quality assurance program, the Principal Deputy Under Secretary of Defense, Acquisition, Technology, and Logistics stated that the Department used product quality deficiency reports as a key method to measure the quality of the products it purchased. 
The Department stated that it had no evidence of a systemic quality problem and that a decreasing number of investigated quality deficiency reports recorded in recent years may indicate that product quality is getting better. However, after we discussed with the Department our findings about the underreporting of quality deficiencies in the Navy, the Department stated in its report that if, because of reduced staffing, user feedback is not entered into the system and investigated, this metric fails to be a valid measure. According to Navy and Defense Logistics Agency quality assurance officials, the trend in the number of deficiencies reported may reflect the level of resources dedicated to filling out and analyzing deficiency reports rather than the actual number of deficiencies found by maintenance personnel. The Program Does Not Report Parts That Fail Prematurely The Navy does not attempt to use the Product Quality Deficiency Reporting Program to collect a major category of quality deficiencies—those involving parts that failed prematurely, that is, after some operation but before the end of their expected design life. Premature failure data are important to determine if there are problems with particular parts or suppliers of parts. Consequently, not knowing the full extent of deficiencies can prevent meaningful analyses of quality problems on a systemwide basis. Under the May 1997 joint-service directive on quality deficiency reporting, the Navy could capture data on premature equipment failures, including parts, as part of its program, as the Army, the Air Force, and the Marine Corps agreed to do. Although the joint-service directive states that it is applicable to a broad range of deficiencies, including premature failures, the Navy did not agree to this application. It has instead used deficiency reporting to identify mainly new or newly reworked parts that fail, as specified in a 1993 version of the joint-service directive. 
Navy quality assurance officials said the program was limited to capturing data on those parts because some type of warranty might be involved or they might obtain replacement parts or reimbursements from the suppliers. However, after we discussed this issue with Assistant Secretary of the Navy, Research, Development, and Acquisition officials, they said that premature failures should be included in the Navy program. They said that in April 2001 they approved the Navy’s use of a new draft version of the joint-service directive that includes the requirement to report premature failures as quality deficiencies. While the Navy has not captured data on premature parts failures under the Product Quality Deficiency Reporting Program, Naval Air Systems Command officials said they have attempted to identify these deficiencies through their Engineering Investigation Request Program. However, they said that resource shortages have limited this program mainly to the analysis of mission-critical and safety-of-life requests for engineering investigations, and that as a result, many premature parts failure requests were not analyzed. Without validation and analysis of the causes of these failures, managers are not in a position to take corrective or preventive action. Furthermore, Naval Air Systems Command-wide data on engineering investigation requests were not available, and reports have not been combined with quality deficiency reports to provide managers with a systemwide view of spare parts problems. According to Navy and Defense Logistics Agency quality assurance officials, premature parts failures are an important aspect of quality, and without these data the Navy is not in a position to completely identify its problems with parts and suppliers. Premature parts failures could indicate problems in design, manufacture, or installation that, if not corrected, could lead to unanticipated parts shortages and increased costs. 
Deficiency Reports Omit Key Data on Causes of Failures Quality deficiency reporting data that were collected under the program have been of limited value because they frequently lacked key information on the causes of the parts failures and on who was responsible. Navy and Defense Logistics Agency quality assurance officials said that identifying whether a contractor is responsible for deficiencies is important for obtaining credits or refunds and preventing recurring problems. We reviewed the Navy’s systemwide database of quality deficiencies for the Naval Air, the Naval Sea, and the Naval Supply Systems Commands. We examined the reports submitted during calendar years 1997-99 (19,124 reports) and determined their status as of September 20, 2000. We found that in most cases the causes of problems had not been identified in the database and responsibility for the deficiencies had not been determined. Specifically, we found that about 72 percent (13,675) of the reports did not identify the specific causes for the failures, which is information needed for effective corrective and preventive actions, and about 70 percent (13,287) of the reports did not identify who—a private contractor or a Navy or other government activity—was responsible for the deficiencies. According to Navy and Defense Logistics Agency quality assurance officials, analyzing product quality deficiencies to determine causes and responsibilities for failures is often difficult, time-consuming, and staff intensive. They said that the information needed to fill out the deficiency form is not always available, the deficient part might have been damaged or misplaced, or the part may not be forwarded for analysis to determine the cause of the deficiency. Also, a supplier might have provided a replacement part and not identified the cause of the problem. They said problems were more likely among deployed units. Officials at units we visited identified similar problems. 
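The completeness analysis described above amounts to counting reports with empty cause and responsibility fields. The minimal sketch below illustrates the computation; the field names and records are hypothetical and are not the actual Product Deficiency Reporting and Evaluation Program schema:

```python
# Illustrative sketch only: field names and records are hypothetical,
# not the actual Product Deficiency Reporting and Evaluation Program schema.
reports = [
    {"id": 1, "cause": "weld fracture", "responsible": "contractor"},
    {"id": 2, "cause": None,            "responsible": None},
    {"id": 3, "cause": None,            "responsible": "Navy depot"},
    {"id": 4, "cause": None,            "responsible": None},
]

def pct_missing(records, field):
    """Percentage of records with no value recorded for the given field."""
    missing = sum(1 for r in records if r[field] is None)
    return 100 * missing / len(records)

print(f"reports missing cause: {pct_missing(reports, 'cause'):.0f}%")
print(f"reports missing responsibility: {pct_missing(reports, 'responsible'):.0f}%")
```

Run against a full extract of the 19,124 reports, the same computation would yield gap figures like the 72 and 70 percent cited above.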
According to an Assistant Secretary of the Navy, Research, Development and Acquisition official, when the Navy screening points receive a quality deficiency report, they work with the originators to provide the missing descriptive data and decide if an in-depth analysis is needed. If an analysis is needed, other problems may arise such as a lack of access to contractor information, which can prevent a complete analysis of the cause of a quality deficiency. The need for completed quality deficiency reporting to identify cause and responsibility for the deficiencies is important to achieving program results and is especially important for the Navy and the Defense Logistics Agency to obtain some type of credit for deficient parts, where allowed by the parts contract. Credit can involve a contractor providing replacement parts or monetary consideration. Also, determining responsibility is the first step in providing feedback to the supplier that deficient material has been provided, manufacturing or repair processes must be reviewed, and corrective actions may be necessary. Without such a determination, neither credit nor quality parts may be obtained. According to Navy quality assurance officials, this information is also needed for other Navy programs. For example, the Navy’s Red/Yellow/Green Program is designed to help reduce the risk of receiving products that do not conform to requirements and late shipments. This program uses quality deficiency reports as one of the key inputs to the evaluation, but only when such reports show that a specific contractor was found to be responsible for the deficiencies in parts. However, since the quality deficiency data we examined were so often incomplete, many of the reports lacked the needed information. According to Navy and Defense Logistics Agency quality assurance officials, the lack of complete and reliable quality deficiency data limits their usefulness in identifying suppliers of deficient parts. 
In addition to the problems of incomplete and underreported data, concerns about using the data to determine the extent of spare parts quality deficiencies are twofold. First, to be meaningful, the number of quality deficiencies identified should be compared with the total number of parts used, but the data to compute such a deficiency rate are often not available. Second, even if a deficiency rate could be determined, criteria for determining what constitutes a reasonable deficiency rate for a particular part have not been established for the quality deficiency reporting program as it has been, for example, for the Red/Yellow/Green Program. Program Ineffective Due to Limited Emphasis, Training, Incentives, and Visibility Navy officials at many levels—maintenance personnel, program officials, quality assurance managers, and Command-level staff—indicated that program ineffectiveness is to some extent the result of a lack of action by Navy management to emphasize the importance of the program to the Navy. In addition, Navy officials identified the following causes of program ineffectiveness: Limited training to supply, maintenance, and key command personnel explaining reporting procedures, types of quality deficiencies to be reported, and benefits to the Navy to be derived from the program. Limited incentives and competing priorities for available staff resources to fill out the quality deficiency reports and do all their other work as well. Lack of Navy-wide visibility into the results being derived from the program. According to Navy quality assurance officials, some training that covers filling out the standardized quality deficiency reporting form has been given, but it has not been effective in encouraging additional quality deficiency reporting. Navy and Defense Logistics Agency quality assurance officials said that their sailors and their commanders do not receive training on the importance of the quality deficiency reporting program to the Navy. 
Without such training, reporting quality deficiencies does not become a priority within units. Officials within the Naval Systems Commands told us that compliance with quality deficiency reporting has diminished because fewer resources have been available to carry out tasks that are deemed more essential. They indicated that if maintenance personnel must choose between repairing needed equipment and filling out quality deficiency reports, they choose the repair work to support their primary mission. Limited incentives exist at the unit level to encourage compliance with program reporting requirements because financial credits for deficient parts are often not returned to the reporting units, so unit commands may not see the credits as an incentive to expend resources for quality deficiency reporting. They also said that new automation and decision support tools might help facilitate the deficiency reporting and analysis processes and therefore improve compliance with reporting requirements. Navy and Defense Logistics Agency quality assurance officials said that clearly reported program results could stimulate greater management emphasis and staff support for the program. However, while the program is designed to provide a basis for reporting, correcting, and preventing spare parts quality deficiencies, the results are not always measured or clearly reported. For example, a Navy-wide annual report shows that during the 3-year period 1997-99, quality deficiency reports submitted from the Naval Air, Sea, and Supply Systems Commands identified about $466 million in rejected material, mostly on aircraft parts. The meaning of these data, however, is unclear because information is not available on what portion of the rejected material was investigated, was found deficient, and yielded some type of credit or reimbursement from contractors to the government. 
Also, the data did not include any reporting of parts design modifications, manufacturing changes, or other actions taken based on quality deficiency reporting in order to prevent recurrence of problems. Without such reporting of results, program weaknesses such as incomplete data may not appear to be important issues to Navy and Department managers. According to Navy quality assurance officials, although one Navy instruction governs the analysis of quality deficiency reports, the level of execution can vary significantly among units, depending on the program emphasis assigned and resources made available. For example, Navy Inventory Control Point officials who manage ships’ quality deficiency reporting told us they track the quality deficiencies and identify costs avoided and recovered from contractors. Their internal metrics for fiscal years 1998 through 2000 identified several million dollars in replacement items, refunds, and credits received from contractors that they said were directly attributable to quality deficiency reports. They said that cost avoidance is an important measure of results and should be tracked and reported Navy-wide along with other metrics. However, they said that such reporting is not required and the extent of reporting varies significantly among the Naval Air, Sea, and Supply Systems Commands and their subordinate units. Conclusions Without an effective Navy-wide program to document feedback from users of parts on the quality deficiencies they encounter, Navy managers lack the data needed to fully assess the extent and seriousness of spare parts quality problems. Furthermore, to the extent that the data are incomplete or are not analyzed to determine the causes of and accountability for deficiencies, managers cannot effectively correct quality problems, address supplier issues, or ensure high quality when buying or rebuilding spare parts. 
Such activities are important because they can affect the safety, readiness, mission performance capabilities, and support costs of military forces. The Navy’s Product Quality Deficiency Reporting Program has been largely ineffective in meeting these management needs due to weaknesses in program implementation, including insufficient training, limited incentives and automation support, competing priorities for staff resources, and a lack of Navy-wide measurement of program results. A stronger quality deficiency reporting program would better enable management to take corrective and preventive actions that over time can result in both mission performance improvements and cost reductions. Recommendations for Executive Action Given the importance of high quality spare parts to safety, readiness, mission performance, and support costs, and the role that an effective Product Quality Deficiency Reporting Program can play in helping to ensure high quality, we recommend that the Secretary of Defense direct the Secretary of the Navy to increase the program’s levels of (1) training, describing what quality deficiencies to report, how to report them, and why it is important to the Navy; (2) incentives, including financial credits back to the reporting unit where appropriate to encourage participation; (3) automation support, to simplify and streamline reporting and analysis; and (4) management emphasis provided to the program, as necessary, to determine the causes, trends, and responsibilities for parts failures and achieve greater compliance with joint-service requirements, including reporting on parts that fail before the end of their design life, and require program officials to measure and periodically report to the appropriate Defense and Navy managers the results of the program in such areas as actions taken to correct parts quality deficiencies, prevent recurrences, and obtain credits or reimbursements from suppliers for deficient products. 
Agency Comments In written comments on a draft of this report, the Department of Defense concurred with our recommendations. It stated that the Navy would initiate the recommended enhancements to the Navy Product Quality Deficiency Reporting Program no later than September 15, 2001. In addition, the Navy would provide a status update to the Office of the Secretary of Defense on accomplishments and remaining challenges no later than March 15, 2002. The Department’s comments are presented in their entirety in appendix I. Scope and Methodology To determine whether the Navy’s Product Quality Deficiency Reporting Program has been effective in gathering the data needed for analyses, correction, and prevention of deficiencies in spare parts, we analyzed the completeness of the quality deficiency data contained in the Navy’s Product Deficiency Reporting and Evaluation Program database. These data cover aircraft, ships, and other spare parts in the Navy, including parts managed for the Navy by the Defense Logistics Agency. For comparison purposes, we also analyzed data from the Navy’s Maintenance and Material Management System, which is designed to track maintenance support data on how often a part failure is involved. We discussed the causes of underreporting of quality deficiencies, the lack of reporting of premature failures, the omission of data in deficiencies being reported, and the causes of program weaknesses with Department, Navy, and Defense Logistics Agency officials. Specifically, work was conducted at headquarters offices, including the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Office of the Assistant Secretary of the Navy (Research, Development, and Acquisition); and the Naval Air Systems, the Naval Sea Systems, and the Naval Supply Systems Commands. 
Work was also conducted at the Naval Inventory Control Point activities in Mechanicsburg and Philadelphia, Pennsylvania, which have assumed responsibility for quality deficiency reporting functions. In addition, work was done at Navy field activities in different geographical areas, including Lemoore and San Diego, California; Norfolk, Virginia; and Portsmouth, New Hampshire. Marine Corps aircraft quality deficiency reporting is managed by the Naval Air Systems Command and was included in our review. The Marine Corps operates a separate program for ground equipment that was not included in this review. We conducted our work between August 2000 and June 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Navy; the Commandant of the Marine Corps; the Director, Defense Logistics Agency; the Director, Office of Management and Budget; and appropriate congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Allan Roberts; Lionel Cooper; Gary Kunkle; Jean Orland; Lawson Gist, Jr.; Robert B. Brown; and Nancy Ragsdale. Appendix I: Comments From the Department of Defense Related GAO Products Army Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness (GAO-01-772, July 31, 2001). Navy Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness (GAO-01-771, July 31, 2001). Air Force Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness (GAO-01-587, June 27, 2001). Major Management Challenges and Program Risks: Department of Defense (GAO-01-244, Jan. 2001). High-Risk Series: An Update (GAO-01-263, Jan. 2001). Defense Inventory: Improved Management Framework Needed to Guide Navy Best Practice Initiatives (GAO/NSIAD-00-1, Oct. 21, 1999).
The Department of Defense (DOD) budgets billions of dollars each year to purchase and repair the spare parts needed to maintain its weapons systems and support equipment. The quality of the spare parts can greatly determine if the Department's investment of funds is effective, efficient, and economical. This report examines the Navy's Product Quality Deficiency Reporting Program and the extent to which the program has gathered the data needed for the analysis, correction, and prevention of deficiencies in spare parts. GAO found that data on parts defects identified at the time of installation were underreported. Data on parts that failed after some operation but before their expected design life were not collected as part of this program. In the quality reports GAO reviewed, some key information was omitted on the cause of the parts' failures and some reports did not identify who was responsible for the defects. To a large extent, the program's ineffectiveness can be attributed to lack of management, limited training and incentives to report deficiencies, and competing priorities for the staff resources needed to carry out the program.
Background The federal EEO complaint process consists of two stages: an informal stage, or precomplaint counseling, and a formal stage. Appendix II contains information on EEO laws applicable to federal employees. Informal Stage, or Precomplaint Counseling Under existing regulations, before filing a complaint, an employee must consult an EEO counselor at the agency in order to try to informally resolve the matter. The employee must contact an EEO counselor within 45 days of the matter alleged to be discriminatory or, in the case of a personnel action, within 45 days of the effective date of the action. Counselors are to advise individuals that when the agency agrees to offer alternative dispute resolution (ADR) in the particular case, they may choose to participate in either counseling or in ADR. Counseling is to be completed within 30 days from the date the employee contacted the EEO office unless the employee and agency agree to an extension of up to an additional 60 days. If ADR is chosen, the parties have 90 days in which to attempt resolution. If the matter is not resolved within these time frames, the counselor is required to inform the employee in writing of his or her right to file a formal discrimination complaint with the agency. Formal Stage After a complainant files a formal discrimination complaint, the agency must decide whether to accept or dismiss the complaint and notify the complainant. If the agency dismisses the complaint, the complainant has 30 days to appeal the dismissal to EEOC. If the agency accepts the complaint, it has 180 days to investigate the accepted complaint from the date the complaint was filed and provide the complainant with a copy of the investigative file. Within 30 days of receipt of the copy of the investigative file, the complainant must choose between requesting (1) a hearing and decision from an AJ or (2) a final decision from the agency. 
When a hearing is not requested, the agency must issue a final agency decision (FAD) within 60 days on the merits of a complaint. A complainant may appeal an agency’s final decision to EEOC within 30 days of receiving the final decision. In cases where a hearing is requested, the complaint is assigned to an EEOC AJ, and the AJ has 180 days to issue a decision and send the decision to the complainant and the agency. If the AJ issues a finding of discrimination, he or she is to order appropriate relief. After the AJ decision is issued, the agency has 40 days to issue a final order notifying the complainant whether the agency will fully implement the decision of the AJ, after which the employee has 30 days to file an appeal with EEOC of the agency’s final order. If the agency issues an order notifying the complainant that the agency will not fully implement the decision of the AJ, the agency also must file an appeal with EEOC at the same time. Following an appeal decision, both the complainant and the agency have 30 days in which to request reconsideration of EEOC’s appeal decision. Decisions on appeals are issued by EEOC’s Office of Federal Operations (OFO), on behalf of the commission. A complainant may file a civil action in federal district court at various points during and after the administrative process. The filing of a civil action will terminate the ongoing administrative processing of the complaint. A complainant may file a civil action within 90 days of receiving the agency’s final decision or order or EEOC’s final decision. A complainant may also file a civil action after 180 days from filing a complaint with his or her agency or after filing an appeal with EEOC, if no final action or decision has been made. Figure 1 shows the EEO complaint process. 
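The deadlines in the process above lend themselves to a simple date calculation. The sketch below computes a few of the milestones for one complaint; the scenario and all dates are hypothetical, and only a subset of the regulatory time frames is shown:

```python
# Illustrative sketch of the regulatory time frames described above.
# The complaint scenario and all dates are hypothetical.
from datetime import date, timedelta

filing_date = date(2008, 1, 15)             # formal complaint filed with agency
final_decision_received = date(2008, 8, 1)  # complainant receives final decision

deadlines = {
    # Agency is to complete its investigation within 180 days of filing;
    # after the same 180 days, the complainant may instead file a civil action.
    "agency investigation due (180 days)": filing_date + timedelta(days=180),
    # Appeal of the final decision to EEOC within 30 days of receipt.
    "appeal to EEOC (30 days)": final_decision_received + timedelta(days=30),
    # Civil action in federal district court within 90 days of receipt.
    "civil action after final decision (90 days)": final_decision_received + timedelta(days=90),
}

for label, due in deadlines.items():
    print(f"{label}: {due.isoformat()}")
```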
EEOC Management Directives Related to the Complaint Process In addition to regulations governing the EEO complaint process, EEOC has issued guidance in the form of management directives (MD) to help agencies process complaints and create a model EEO program. MD-110, revised in November 1999, provides federal agencies with policies, procedures, and guidance relating to processing EEO complaints, including, among other things, the authority of the EEO director and the director’s reporting relationship to the agency head, mandatory training requirements for EEO counselors and investigators, procedures for counseling and ADR, and the role of the AJ. In 2003, EEOC issued MD-715, which, among other things, establishes requirements for federal agencies to create model EEO programs, including guidance for proactive prevention of unlawful discrimination. Under MD-715, each agency is to have an efficient and fair dispute resolution process and effective systems for evaluating the impact and effectiveness of its EEO program and use a complaint tracking and monitoring system that permits the agency to identify the location, status, and length of time elapsed at each stage of the process and other information necessary to analyze complaint activity and identify trends. Timeliness of Processing Among other requirements, EEOC regulations generally provide that agencies are to complete investigations of formal complaints within 180 days of their receipt and issue FADs within 60 days for those cases where a hearing is not requested. When a hearing is requested, AJs are to issue decisions within 180 days of receiving the complaint files from an agency. EEOC regulations do not set time frames for resolving appeals, but in its most recent strategic plan, EEOC has set an annual performance measure that by 2012, 70 percent of federal sector appeals are to be resolved within 180 days. From fiscal years 2005 through 2007, appeals closures have averaged from 194 to 230 days. 
Table 1 shows that although federal agencies have made improvements in the time it takes to process formal EEO complaints, they are still not meeting the deadlines in regulation. The table includes data from EEOC’s annual reports on the federal workforce on average processing days for investigations and FADs on merits of complaints, both including the U.S. Postal Service—which has the largest number of EEO complaints—and without it, because the Postal Service’s complaint volume affects average processing times. These data show that in fiscal year 2007, the Postal Service completed investigations in an average of 106 days and FADs in 28 days. Table 1 also shows that average processing days for EEOC hearings decisions have exceeded requirements. EEO Practitioners Identified Factors That Impede the Prompt, Fair, and Impartial Processing of EEO Complaints When asked to identify factors that impeded the prompt, fair, and impartial processing of EEO complaints at their agencies and describe how those factors impeded the process, selected EEO practitioners provided hundreds of responses. Because these practitioners represent different parts of the complaint process and some of the practitioners may only be familiar with their part of the process, we could not tally the number of responses under each factor. While recognizing that the factors the practitioners identified are not necessarily discrete, we analyzed and grouped them into eight broad categories of factors and then asked those same EEO practitioners to rank them in terms of their importance for improving the federal EEO complaint process. 
These factors and their rankings are (1) lack of accountability on the part of some agency management officials and EEO practitioners in carrying out their responsibilities; (2) insufficient resources for some agency EEO offices and EEOC to fulfill their responsibilities; (3) lack of independence concerning the potential conflict of having agencies conduct their own EEO complaint investigations and the undue influence of some agency legal counsel and human resources officials on the EEO process; (4) insufficient knowledge and skills by some agency officials, complainants, and EEO practitioners to fulfill their responsibilities; (5) lack of authority by some EEO officials to dismiss cases that have no merit and lack of subpoena power by EEOC AJs; (6) lack of clarity in regulation and some guidance and consistent decisions from EEOC; (7) lack of effective communication by some EEO practitioners of relevant oral and written information to participants in the process and that ADR (e.g., conciliation, facilitation, or mediation) is available; and (8) lack of a firm commitment by some agency management and EEO officials to the EEO process. In our discussions with stakeholders, they generally concurred with these eight factors. In addition, a few stakeholders identified the perception of unfairness as an overarching theme. These stakeholders commented that without the perception that the complaint process is fair, people may be frustrated and choose to not participate in it. We discussed with these stakeholders that fairness is indeed one of the goals of the EEO complaints process, along with promptness and impartiality. The perception that the system is not fair, among other issues, has led to calls for reform and is directly related to our effort in this review to identify factors that need to be addressed. 
We agree that this concern is important and believe it has been accounted for within the context of the discussion on factors related to accountability; independence; and clarity in regulation, guidance, and consistent EEOC decisions. The eight factors are consistent with concerns raised previously about problems with the federal sector EEO complaint process. For example, in November 2002, EEOC held an open meeting to address issues with the EEO complaint process amid concerns from stakeholders representing both complainants and federal agencies that the process is “much too slow, far too expensive, unnecessarily cumbersome, and given to potential conflicts of interest.” In March 2003, a coalition of civil rights employee advocates and other stakeholder groups submitted a seven-step proposal to EEOC commissioners to improve the federal sector EEO complaint process. This proposal included steps to make ADR mandatory for managers in the informal and formal stages of the administrative process and have EEO directors report directly to the agency head, and a recommendation that EEOC adopt a uniform standard for what states a claim of employment discrimination. Additionally, in September 2006, the commission held a meeting where stakeholders discussed the practices that work in obtaining a timely, thorough, and complete investigation as well as the barriers that prevent such investigations. One issue raised in that meeting was the lack of consequences (related to accountability) for agencies that do not comply with the 180-day requirement to complete investigations. A commissioner noted that a double standard exists, because a complaint would be dismissed if a complainant missed any of the deadlines. The quality of investigations was another issue—for both in- house and contract investigations. 
One participant stated that clear benchmarks need to exist with respect to the quality of the report of investigation, noting that in the end a poor investigation hurts the agency as well as the employee. Lack of Accountability EEO practitioners indicated that there appear to be no consequences for some agency officials or EEOC practitioners for not adhering to time frames established in EEOC regulations. For example, if an employee files a formal discrimination complaint, an agency must decide whether to accept the complaint and has 180 days to investigate an accepted complaint. In cases where the complainant requests a hearing from an EEOC AJ, the AJ generally has 180 days to issue a decision after the complaint file has been received from the agency. While many respondents cited agencies and EEOC for not adhering to time frames, some also said that a lack of timely cooperation by complainants delayed the processing of complaints. Insufficient Resources for Agency EEO Offices and EEOC Many EEO practitioners across the various practitioner groups identified a lack of resources—staff and funding at some agency EEO offices and at EEOC—as impeding the timely processing of federal EEO complaints. For example, an EEO director and an investigator stated that because of understaffing, regulatory time limits are often exceeded. Another agency practitioner said that internal delays are also caused by a lack of resources in agency EEO offices. Several EEO practitioners stated that retirements and reassignments have made it difficult to keep counselors in agency EEO offices. Others mentioned problems with staffing levels at EEOC. One AJ noted that because of a lack of support staff, AJs spent time copying files, filing documents, making all of their own travel arrangements, and preparing closed case files for mailing. 
Figure 2 shows that as of fiscal year 2008, the number of AJs has decreased by about 13 percent since fiscal year 2005 while EEOC’s hearings inventory has increased by 10 percent, and EEOC’s appropriations have generally remained constant, increasing by less than 1 percent. Lack of Independence EEO practitioners raised concerns regarding the potential conflict of having agencies conduct their own EEO complaint investigations and agency offices or functions improperly interfering in the EEO complaint process (i.e., legal counsel and human resources professionals). For example, an AJ stated that agency-conducted investigations remain an impediment to impartiality and characterized such investigations as the “fox guarding the henhouse.” EEO practitioners specifically raised concerns regarding the perceived conflict of agency offices or functions improperly interfering in the EEO complaint process. For example, a plaintiff’s attorney stated that one problem that impedes the prompt and fair processing of complaints is the intrusion of agency defense counsel into the EEO process, despite EEOC’s guidance that agencies’ EEO functions and defense functions must remain separate. This practitioner further stated that many EEO offices get legal advice from the same agency component that defends the agency—a direct conflict of interest. An AJ added that although human resources and EEO offices should work together in resolving complaints and grievances, they should be totally separated in the processing of EEO complaints because allegations about personnel actions, including promotions, reassignments, and hiring, often involve the human resources office as the responsible management official. 
Insufficient Knowledge and Skills Many EEO practitioners expressed concerns that some participants in the EEO complaint process, including EEO investigators, agency EEO directors, agency management, and EEO counselors, are not sufficiently knowledgeable of EEO regulations and guidance and of their responsibilities within the process. For example, several respondents cited a general lack of knowledge of EEO or employment discrimination law by agency personnel processing complaints (e.g., EEO counselors and investigators). Many respondents cited poor quality of EEO investigations and several cited investigators’ lack of skills. As one AJ remarked, when investigations are of poor quality, the parties are required to engage in discovery at the EEOC level, which takes time and delays processing. Several practitioners raised concerns about AJs as well, including that some of them have insufficient experience and training and that the quality of the work they perform is not systematically monitored or addressed. Insufficient Authority EEO practitioners cited several concerns that some individuals and organizations responsible for carrying out federal EEO programs did not have sufficient authority to fulfill their responsibilities, including the lack of sufficient authority to dismiss cases that do not meet criteria for discrimination and the lack of subpoena power to compel witnesses to testify and provide requested documents. According to an agency legal counsel, an AJ, and plaintiffs’ attorneys, the inability to subpoena witnesses during EEOC hearings can delay fact-finding and presents a difficulty to complainants in trying to prove their claims when witnesses are reluctant to testify, including non-federal witnesses (e.g., witnesses who are no longer with the agency). 
Lack of Clarity of EEOC Regulations and Guidance and Consistent EEOC Decisions Some EEO practitioners indicated that more guidance is needed from EEOC on regulations and MDs and that EEOC decisions need to follow judicial precedent and need to be consistent. Respondents provided numerous comments about the lack of consistency of AJ decisions with case law, and EEOC officials also acknowledged this concern. An AJ said that inconsistent EEOC OFO appellate decisions make it difficult for AJs and other parties to know what to do in certain situations. Another practitioner, an EEO investigator, stated that EEOC OFO appellate decisions can be inconsistent and unclear, making procedural decisions (e.g., dismissals) a “gamble.” Lack of Communication According to EEO practitioners, some individuals and organizations responsible for carrying out federal EEO programs sometimes do not effectively communicate relevant oral and written information on the EEO process to participants in a timely and effective manner. Also, as an agency legal counsel stated, it can be difficult at times to understand the actions alleged in a complaint, which results in further follow-up (sometimes more than once) with complainants to get the necessary information. Further, an AJ stated that an impediment to the early resolution of complaints is the lack of a requirement for managers to participate in ADR or mediation. Lack of Commitment EEO practitioners stated that some agency management and other individuals responsible for carrying out federal EEO programs lack a firm commitment to fair and timely processing of complaints. The lack of top management commitment to the EEO program can have a cascading effect on other officials and staff. For example, one practitioner stated that if executive management does not support the EEO complaint process, other management officials give it little importance or priority. 
An EEO investigator cited a lack of urgency at most agencies in resolving and investigating EEO complaints. EEO Practitioners and Other Stakeholders Proposed Solutions That They Believe Address the Identified Factors EEO practitioners and other stakeholders provided potential solutions that they believe address the factors they identified as well as information on changes their agencies had made to the EEO complaint process. These practitioners also raised other options, beyond the potential solutions, for changing the EEO complaint process. EEOC has several initiatives under way or proposed for improving equal opportunity in the federal workforce. Strengthening Accountability EEOC regulations require federal agencies to provide for the prompt, fair, and impartial processing of complaints and for the review and evaluation of managerial and supervisory performance to help ensure vigorous enforcement of equal opportunity. Further, according to EEOC’s MD-715, a model EEO program will hold managers, supervisors, EEO officials, and personnel officers accountable for the effective implementation and management of the agency’s program. A majority of the respondents from the agencies and EEOC, as well as plaintiffs’ attorneys, identified agency management, agency EEO directors, EEO investigators, and EEOC management as the top four groups of EEO practitioners for which they believed that it was very or extremely important to strengthen accountability. Measures of accountability outlined in MD-715 include evaluating managers and supervisors on efforts to ensure equality of opportunity for all employees and routinely reviewing personnel policies to ensure that they are consistently applied and fairly implemented. For fiscal year 2007, EEOC reported that in fiscal year 2006, 117 of the 167 agencies that submitted MD-715 reports, or 70 percent, indicated that managers and supervisors were rated on their commitment to EEO. 
For strengthening accountability, EEO practitioners suggested ways to better hold accountable (1) agency management and EEO staff, including directors, counselors, and investigators; (2) EEOC management, AJs, and appellate attorneys; and (3) EEO complainants. For example, an EEO director suggested that implementing performance-based accountability measures for EEO directors could improve the timeliness and quality of complaint processing, which could enhance the fairness and impartiality of the EEO complaint process. Another practitioner advocated adopting measurable EEO performance standards for managers and supervisors at the GS-13 level and above. In its June 2008 notice of proposed rulemaking, EEOC included a requirement that an agency that has not completed an EEO investigation within the 180-day time limit is to notify the complainant in writing that the investigation is not complete, when it will be completed, and that the complainant has the right to request a hearing or file a lawsuit. EEOC stated its belief that such a requirement may shorten delays in agency investigations by providing an incentive for agencies to complete investigations in a timely manner. Several EEO practitioners stated that just as accountability within agencies is important, EEOC should also be held more accountable for adhering to time frames for steps in the process, such as issuing a hearing decision. As for holding complainants more accountable, one practitioner felt that a complaint should be dismissed if the complainant fails to cooperate, provided the agency has met its responsibilities. The practitioner, an investigator, offered that complainants should be accountable for participating in a requested EEOC hearing after discovery and depositions have been conducted. 
According to this practitioner, EEOC should not allow the complainant to withdraw and request a FAD at this stage—if the complainant withdraws from the hearing at this stage, the complaint should be dismissed with no further action. Some respondents said that their organizations established time thresholds and quality standards for internal processes. For example, an EEO investigator reported that the timely processing of complaints has been tied to performance standards to help ensure that cases are promptly processed. Another EEO investigator’s agency established goals and measures for timeliness according to EEOC regulations and instituted quality standards for each centralized EEO process. Further, the agency established timelines and quality standards for both contractors and agency EEO professionals, and the agency developed measures in internal databases to track and monitor timelines and quality on daily, weekly, monthly, quarterly, and annual bases. An EEO director from another agency also reported that the agency achieved success in processing complaints by implementing performance-based accountability measures (i.e., internal timeliness and quality controls), including the following: standard operating procedures, stringent internal deadlines, timeliness and quality assurance review processes, timeliness and quality elements in results-based performance standards, management oversight, and EEO staff training. Finally, an EEO director reported that his agency had put in place a departmental accountability policy to track disciplinary and corrective actions taken as a result of discrimination-related misconduct. Because of numerous concerns raised both before and during the commission’s September 2006 meeting and subsequent focus group discussions, EEOC officials stated that the commission performed a limited assessment of the quality of agency investigations by having AJs complete surveys during selected periods from 2005 to 2007. 
Overall, from the limited assessment, the AJs reported that most of the reports of investigation were complete and well organized, containing enough evidence to allow the AJ to proceed with the hearing process. However, the AJs reported that several agencies routinely submitted reports of investigation that were particularly lacking and described common deficiencies, including reports being disorganized and containing duplicative materials, being incomplete and always late, and containing an investigator’s statement of the claim that was legally insufficient. EEOC officials noted that the commission is considering developing a formal Quality Control Evaluation system that would rate the quality of agency investigations. However, EEOC officials did not provide a proposed time frame for this effort. Respondents also reported their agencies’ making greater use of information technology to process and track complaints. One practitioner noted that his agency had automated several features of the EEO complaint process, including the format of decisions through use of boilerplate language that can be selected for routine matters; parts of decision writing with its forms, such as coversheets, code sheets, and envelopes; and storage of case files that are scanned into the Adobe Acrobat program, thereby expediting the reviewing, bookmarking, and searching of these files. One EEO director reported that her agency standardized EEO complaint forms, installed the forms on compact discs that were furnished to all counselors, trained the counselors in the use of the electronic forms, and purchased an automated complaints tracking system to simplify and standardize EEO-related reports. Several practitioners (EEO directors, an EEO counselor, agency legal counsel, and an investigator) indicated that their agencies had put in place a complaints tracking system, which helps in the preparation of standardized reports. 
Without a system like this one, several reported, much time is consumed finding the information that needs to be in such reports. An official from EEOC’s Office of Field Programs indicated that EEOC has begun piloting an electronic case management system to provide more expeditious hearings case processing. Additionally, a senior official from MSPB described several actions that MSPB has taken to improve its operations, including establishing an electronic filing program and a repository of electronic documents that are available to the parties in cases. Providing Sufficient Resources at the Agency and at EEOC Regulations and EEOC MD-715 state that agencies should allocate sufficient resources to their EEO programs to, among other things, ensure that unlawful discrimination in the workplace is promptly corrected and addressed. More than three-quarters of the respondents from the agencies and EEOC as well as plaintiffs’ attorneys stated that it would be very or extremely important to improve the current allocation of resources for EEOC AJs, while about three-quarters of respondents felt that improvement in the current allocation of resources for EEO investigators, agency EEO directors, and EEOC management was very or extremely important. Although it is important for agencies to provide sufficient resources for EEO programs, it is equally important for those programs to use those resources efficiently. One practitioner, an EEO investigator, reported two ways her agency uses resources efficiently. First, the investigator stated that her agency was shifting away from staff investigators and FAD writers to greater reliance on contractors and that the two firms her agency used delivered good quality products and were faster and more cost-effective than agency staff. 
Second, the investigator also reported that her agency was engaged in an activity-based costing exercise, so staff must account for all complaint-processing-related tasks, which her office can then cost out to the bureau where the complaints arose, allowing the bureaus to focus on early resolution to keep costs down. In addition, the greater use of information technology by some agencies, which was cited earlier as assisting agencies in saving time, can also help them in keeping costs down. An EEO director stated that EEOC should have the capacity to process workloads and accept evidence, records, and files electronically. At EEOC, where its hearings inventory has increased but its appropriations have generally remained constant, EEOC officials said that as of April 2009, the agency was in the process of completing draft instructions to implement the pilot “Three-Track Case Management Process” system for hearings that the agency expects will result in quicker resolutions and shorter processing times through expedited discovery and hearing time frames using its existing resources. Under this process, AJs would prioritize their cases based on complexity, using one of three tracks: fast, regular, or complex. Further, it is necessary that agencies assess the quality as well as the costs associated with contracted investigations and proposed FADs. EEOC’s 2004 report on federal sector investigations and costs found that some agencies were incurring additional costs when they had to supplement the investigative report or require the contractor to conduct additional work, which could contribute to delays in meeting time frames. Several practitioners mentioned that agencies need to have better reviewers for sufficiency of investigations and to do quality control. 
For example, an agency legal counsel stated that at his agency, the EEO office reviews contracted reports of investigation and draft FADs but that reports of investigation were not always reviewed for completeness and relevance before being provided to the complainants. This practitioner pointed out that because of the lack of a quality review, often the agency or the complainant needed to get additional documents in discovery, although the agency had already paid for the preparation of a report of investigation. Strengthening Independence Within agencies, EEOC regulations and MD-110 require that EEO directors be under the immediate supervision of the head of the agency. Placing the EEO director in this position underscores the importance of equal opportunity to the mission of the agency and helps ensure that the EEO director is able to act with the greatest degree of independence. In its fiscal year 2007 report on the federal workforce, EEOC reported that 61 percent of the EEO directors reported to the agency head. In addition, EEOC’s MD-110 states that to maintain the integrity of the EEO investigative process, it should be kept separate from the agency’s personnel function, to avoid conflicts of interest or the appearance of such conflicts. Moreover, MD-110 states that separating the agency’s representatives and the offices responsible for defending the agency against EEO complaints from those responsible for conducting EEO complaint investigations enhances the credibility of the EEO office and the integrity of the EEO complaint process. At least three-quarters of plaintiffs’ attorneys and respondents from EEOC indicated that strengthening independence for EEO directors and EEO investigators was very or extremely important. Further, several EEO practitioners believe that agencies should adhere more clearly to existing EEOC requirements on delineating the roles of the agency general counsels in the EEO complaint process. 
For example, an EEO director stated that EEO legal advisors should be separate and distinct from the agency’s legal office and should report to the head of the civil rights office instead of to agency legal counsel. Several EEO practitioners also stated that agency human resources offices should be required to avoid activities or actions that may be construed as having undue influence. An AJ favored having clear firewalls between the human resources and EEO offices when investigating complaints. In its March 2003 proposal, the coalition of civil rights employee advocates and other stakeholder groups recommended that EEOC’s regulations and MD-110 be changed to clearly prohibit agency actions that interfere with the independent judgment of the EEO investigator. Noting that stakeholders have complained of intrusion in the operations of the agency EEO office by staff responsible for defending the agency against complaints of discrimination and that such intrusion could affect the impartiality of the investigation, EEOC officials stated that EEOC has draft guidance on the intrusion into the EEO process by agency counsel, especially in the informal part of the process, which is being reviewed by the commissioners. Because of the concern that the practice of allowing an agency to investigate a complaint against itself can represent either a clear conflict of interest or the appearance of such conflict, practitioners cited filing complaints directly with EEOC as a means of avoiding such conflicts. Allowing such filings would alter the current administrative complaint process. Stakeholders cited several advantages to having EEOC conduct investigations. One advantage would be its potential to reduce concerns regarding independence, conflicts of interest, and perceptions of unfairness surrounding the existing federal EEO complaint process. 
Another advantage stakeholders cited was EEOC’s potential to leverage its expertise, which in addition to administering the federal sector EEO process, promulgating regulations, providing EEO training, and collecting governmentwide data on EEO activities, also includes investigating private sector complaints of discrimination. According to stakeholders, transferring investigations to EEOC would also have potential disadvantages: it would impose an immense burden on EEOC, which lacks sufficient resources to handle the larger workload, adding time to the complaint process and compounding the time it takes EEOC to make decisions in EEO complaint processing; and concentrating too many functions (e.g., investigations, decisions, and appeals) in one agency would create tension among the various roles the agency is responsible for, which may impair neutrality, fairness, and accountability. EEOC officials noted that an overwhelming number of stakeholders who testified at the September 7, 2006, commission meeting or participated in focus groups conducted after that meeting recommended that EEOC take over the investigative function in its entirety from the agencies or that some type of independent body apart from EEOC assume this function. According to EEOC officials, stakeholders cited the conflict of interest perception and agencies’ failure to complete their investigations in a timely manner as the principal reasons. EEOC also noted its belief that having it conduct the federal sector investigations would also bring efficiency, uniformity, and quality to the process as the commission would either hire a cadre of investigators dedicated to the federal sector or possibly act as a conduit for contract investigations. 
In the past, EEOC stated that fiscal realities have prevented it from assuming responsibility for all federal sector investigations, noting that in fiscal year 2008, agencies conducted over 11,000 investigations at a cost of a little more than $36 million. Thus, according to EEOC officials, the resource implications of EEOC assuming the investigative function would be considerable, and the various ways of funding investigations by EEOC need further study. Several EEO practitioners mentioned addressing independence through the use of contractors for conducting investigations and drafting FADs. The Postal Service’s Office of Inspector General reported that the Postal Service contracts investigations to enhance the independence and neutrality of the EEO administrative process and to improve the overall quality and efficiency of investigations. The report states that a single office from the Postal Service National EEO Investigative Services Office oversees investigations and contract FAD writers. This report did not address the quality of the investigations. As mentioned earlier, it is important that agencies review the quality of contract investigations. Enhancing Knowledge and Skills In its 2004 report on federal sector EEO investigations and cost, EEOC cited the importance of federal agencies having EEO programs staffed with employees who have the necessary knowledge, skills, and abilities to help reduce the time it takes to conduct investigations. More than three- quarters of our survey respondents from the agencies and EEOC as well as plaintiffs’ attorneys pointed to the importance of investigators enhancing their current level of knowledge and skills in the federal EEO complaint process. Almost three-quarters of respondents cited enhancing the knowledge and skills of EEO directors and agency management as very or extremely important, and about two-thirds of respondents cited enhancing the knowledge and skills of EEO counselors as very or extremely important. 
Several EEO practitioners offered suggestions for enhancing the knowledge and skills of EEO staff. For example, a plaintiffs’ attorney offered that counselors should be required to spend at least 8 hours observing an experienced counselor before providing counseling. EEOC’s MD-110 requires at least 32 hours of counselor training before providing counseling as well as 8 hours of continuing annual training. As for investigators, another plaintiffs’ attorney, noting that the minimum requirements in EEOC guidance for investigators are insufficient, stated that EEOC should expand the minimum training and experience requirements and require additional annual continuing education. Similar to the training required for counselors, MD-110 also requires at least 32 hours of investigator training before conducting investigations as well as 8 hours of continuing annual training. Several practitioners and stakeholders suggested that investigators should receive some kind of certification. One practitioner recommended that EEOC certify individual investigator credentials through a combination of agency-provided training or by licensing training programs that meet EEOC-established minimum requirements, and require every investigator, whether in-house agency employee or contract investigator, to apply for and be certified as meeting the minimum requirements. Some respondents said that their organizations had improved training for EEO staff. For example, an EEO director reported that her agency has standardized its basic and advanced EEO counselor training class. The director’s office has coordinated with the agency’s ADR office and office of inspector general to participate in the training. All bureaus send counselors to the same course, and counselors are issued credentials at the end of the training by the agency. An EEO counselor reported that her agency trained all EEO specialists to be EEO counselors and investigators and to write dismissals and FADs. 
This practitioner noted that providing all EEO staff with all available EEOC training can enhance their understanding of the process from start to finish, thereby increasing completeness, accuracy, and effectiveness of complaint processing. An EEO counselor from another agency reported that at her agency there is a focus on developing the legal analysis skill set of EEO specialists who process complaints. During team meetings, the EEO specialists review intake decisions and FADs that they have prepared, and the specialists brief the team on the legal analysis conducted and the rationale for decisions. Counselors attend these meetings to increase their understanding of the bases for dismissal, the types of questions that need to be asked during the counseling inquiry, and the legal implications of new case decisions. Increasing Authority of EEO Directors and AJs Almost all EEOC practitioners and plaintiffs’ attorneys and a majority of agency respondents indicated that it would be important to increase the current level of authority of EEOC AJs, and most respondents cited increasing authority for agency EEO directors as very or extremely important. EEO practitioners cited a need for subpoena power for AJs, who currently do not have this authority. In addition, EEO practitioners expressed the desire for expanded authority for (1) EEO directors to dismiss complaints of discrimination and (2) EEOC to order discipline against managers who discriminate. Practitioners also expressed a desire for EEOC to make sufficient use of its authority to sanction agencies that do not complete investigations on time. Several EEO practitioners felt that allowing AJs to subpoena witnesses would improve the EEO complaint process. An agency legal counsel cited cases where the agency and complainant suffer when potential witnesses, such as those who are no longer with the agency, refuse to testify. 
Until AJs are given such power, a plaintiffs’ attorney felt that the administrative complaint process cannot serve its intended purpose as a viable alternative to litigation in federal courts. While EEOC AJs have authority to sanction an agency for failure to produce an approved witness who is a federal employee, they do not have the authority to subpoena the statements of individuals and therefore have no mechanism with which to compel the testimony of witnesses who are not current federal employees. With respect to subpoena power, according to MSPB officials, the board has delegated to its regional directors/chief administrative judges and AJs the authority to subpoena witnesses. EEOC officials also favor granting EEOC’s AJs subpoena power, noting that AJs have often voiced the belief that their lack of subpoena power is a significant defect in the hearings process, in many cases hindering their ability to conduct full and fair hearings. For instance, without subpoena authority, it is often difficult for AJs to compel a potential witness for the complainant, such as an agency’s outside medical personnel or a contractor employee, to testify on the complainant’s behalf. Although AJs use a variety of means to try to persuade former employees, contractors, and outside medical personnel to testify, it would be more efficient if AJs possessed subpoena authority. EEOC officials stated that having subpoena authority would further ensure that AJs have access to all relevant evidence. However, according to EEOC’s Office of Legal Counsel, granting subpoena power to AJs would potentially require a statutory change. According to a senior EEOC official, EEOC has not sought such a statutory change. An example of expanded authority for EEO directors relates to the dismissal of complaints of discrimination. EEOC regulations set out circumstances under which complaints can be dismissed, including complaints that fail to state a claim of discrimination. 
In this regard, EEOC has consistently reversed agencies’ dismissals for failure to state a claim where the agency dismissal is based on the agency’s view of the ultimate merit of the complaint allegations. An EEO investigator stated that EEO directors should be given the authority to make a merit analysis to dismiss those claims that are frivolous and show self-defeating evidence to ensure quicker and less costly processing of cases. In cases that are dismissed, complainants could still appeal such decisions to EEOC. As for the authority to order discipline for managers who have been found to have discriminated, EEOC’s practice is to advise rather than direct agencies to consider disciplining such managers. In addition, the No FEAR Act requires agencies to report information annually on disciplinary actions taken. The act also requires the President’s designee, the Office of Personnel Management (OPM), to undertake a study of best practices among agencies for taking disciplinary action for conduct inconsistent with antidiscrimination laws and whistleblower protection laws. OPM issued the advisory guidelines in September 2008; agencies have not yet reported actions they have taken consistent with these guidelines. EEOC regulations provide for sanctions against parties for failure (without good cause shown) to respond fully and in a timely fashion to an order of an AJ, to discovery requests, or to requests for the attendance of witnesses. Sanctions include the drawing of adverse inferences against, or exclusion of other evidence offered by, the noncomplying party, issuing a decision fully or partially in favor of the opposing party, or such other actions as appropriate. Specifically, AJs may impose monetary sanctions where the agency has failed to complete an investigation that is timely, adequate, or both, including requiring agencies to bear the costs for the complainant to obtain depositions or other discovery. 
EEOC’s OFO can also sanction agencies at the appellate level. Some practitioners stated that EEOC does not make sufficient use of its sanctioning authority. On the matter of sanctioning authority, EEOC officials stated that AJs are guided by OFO decisions on sanction authority, that the agency is considering issuing further guidance to AJs, and that it will include training on the appropriate use of sanctions in federal sector training of AJs to be held later in 2009. Increasing the Clarity of EEOC Regulations and Guidance and Consistency of EEOC Decisions In commenting on the importance of increasing the clarity and consistency of antidiscrimination laws (e.g., Title VII of the Civil Rights Act), EEOC regulations and guidance (e.g., MD-110 and MD-715), and EEOC decisions (e.g., decisions by AJs and appellate attorneys), the majority of EEO practitioners responding felt that it was most important to increase the clarity and consistency of EEOC decisions. Primarily, practitioners indicated that increasing the consistency of decisions at several levels within EEOC was very or extremely important: decisions from EEOC’s OFO appellate attorneys, decisions by AJs, and decisions resulting from requests for reconsideration of appeals decisions. An EEO investigator suggested that EEOC’s OFO should index its decisions and cross-check them for consistency so that only those decisions that express a cogent, correct application of the law should be indexed and made available as precedent. According to a senior EEOC official, EEOC began to conduct quality reviews of AJ decisions in fiscal year 2007 by reviewing a sample of files from all offices to assess the legal adequacy of decisions and the consistency with case law as well as to determine whether time frames were met. The official said that EEOC officials share the results of the reviews with AJs through monthly conference calls and quarterly video conferences. 
In addition, according to EEOC, through a technical assistance group, EEOC staff visit selected field offices to review files for cases and decisions. Noting the importance of the AJ position, one practitioner stated that EEOC should establish better qualifications for its AJs, including a minimum of 5 years of litigation or related EEO or civil rights experience, and that the position should be given a higher grade to make it more competitive. MSPB, which also employs AJs to hear and decide appeals from former and current federal employees, applicants for federal employment, and federal annuitants concerning any matter over which the board has appellate jurisdiction, hires its AJs at the GS-13 through GS-15 levels and has established timeliness, quality, and production standards for their performance. At EEOC, AJs can be hired at the GS-11 to GS-13 levels with promotion potential to GS-14. EEOC officials stated that the agency recognizes that a range of experience is important to adjudicate complex federal employment cases. Some practitioners indicated that they would like EEOC to make changes to its regulations or guidance. For example, one practitioner, a plaintiffs’ attorney, stated that EEOC should review its federal sector regulations with the aim of identifying and eliminating (or modifying) those provisions that undermine effectiveness and fairness. An agency legal counsel stated that EEOC must establish clear guidelines for the conduct of agency counsel and their role in the EEO process. Practitioners and stakeholders expressed the need for clarification regarding the dismissal of complaints, specifically addressing dismissals for (1) failure to state a claim (including complaints alleging a hostile work environment), (2) abuse of the process, and (3) failure to cooperate.
Also, in its March 2003 proposal, the coalition of civil rights employee advocates and other stakeholder groups recommended that EEOC adopt uniform standards for what states a claim of employment discrimination. Under this recommendation, complaints could be dismissed on these grounds either at the agency, before the complaints are investigated, or after a hearing request is submitted. While noting that its regulations provide standards for dismissing complaints that do not state a claim and that based on case law, EEOC has also broadly construed what actions may constitute a claim, EEOC officials stated that the commission is considering recommendations by internal and external stakeholders to provide additional guidance.

Improving Communication throughout the Complaint Process

EEOC MD-110 states that in the precomplaint process, counselors should create an atmosphere that is open to good communication and dialogue. EEOC regulations require agencies to establish ADR programs, and EEOC MD-715 encourages the widespread use of an ADR program that facilitates the early, effective, and efficient informal resolution of disputes. According to EEOC, such programs can help agencies to avoid the time and costs associated with more formal dispute resolution processes and improve workforce communication and morale. Almost all respondents indicated that improving communication during the informal or precomplaint phase, claim acceptance/dismissal, and complaint investigation was very or extremely important. Also, about three-quarters of respondents indicated that improving communication during ADR was very or extremely important. Several EEO practitioners suggested that ADR should be used more often in disputes or even made mandatory.
For example, a plaintiffs’ attorney offered that for ADR to be successful, agencies need to ensure that officials do not merely “go through the motions” on ADR but that an official at an appropriate level of authority represents management and that this official has settlement authority. In addition, in its March 2003 proposal, a coalition of civil rights employee advocates and other stakeholder groups recommended making ADR mandatory for managers in the informal and formal stages of the administrative process and for EEOC hearings. Several counselors reported that their agencies gave employees the option of using ADR in the informal and formal stages of the EEO complaint process as a means for resolving an EEO concern. According to one counselor, using ADR in this way focuses both parties on the objective of resolving the conflict rather than defending their respective positions. Another counselor reported that when employees initially contact the informal EEO process, her agency gives them the option—explained verbally and in writing—of traditional counseling or mediation (i.e., a type of ADR). Mediation is offered 100 percent of the time at initial contact, and ADR may be offered again in the formal stage of the process if the case proceeds. This practitioner found that offering ADR services is helpful in resolving complaints at the lowest possible level. Almost universally, stakeholder groups believed that counseling should be done at agencies, and EEOC also favors leaving the counseling responsibilities with the agencies. Two stakeholders explained that EEO counselors who work in an agency possess a familiarity with the organization’s operations, culture, and leadership and that keeping counseling at the agencies enables counselors to see problems firsthand while giving agencies opportunities to correct problems and demonstrate some commitment to EEO principles.
EEOC officials stated that stakeholders have recommended that the commission ensure that during counseling, agencies provide better, more understandable, and more consistent information describing the EEO process and complainants’ rights and responsibilities therein. In its 2008 performance and accountability report, EEOC noted that precomplaint EEO counseling and ADR programs addressed many employee concerns before they resulted in formal complaints. Of the 37,809 instances of counseling in fiscal year 2007, about 56 percent did not result in a formal complaint because of either settlement by the parties or withdrawal from the EEO process. According to EEOC’s 2007 report, agencies’ ADR offer, participation, and resolution rates varied widely. For example, the Postal Service offered ADR in about 93 percent of precomplaint counseling, while the other agencies’ offer rate was about 71 percent, with some agencies not offering ADR in any counseling sessions. The governmentwide ADR participation rate in fiscal year 2007 was 48 percent. The Postal Service, which requires management to participate, reported the highest rate of ADR participation (about 76 percent) compared with the average participation rate of about 25 percent among other agencies. According to EEOC’s 2007 annual report, complainants rejected ADR offers 10 times more often than agencies. Similarly, the Postal Service had an overall resolution rate of about 75 percent, while the rate for other agencies was about 46 percent. EEOC officials reported taking a number of actions to encourage more use of ADR, such as updating EEOC’s federal sector ADR Web page to improve the delivery of information on the benefits of ADR and ADR best practices; providing technical assistance through e-mail, telephone contacts, and on-site visits, as requested; and participating in federal ADR work groups and agency conferences. 
The commission also reported establishing the Federal Appellate Settlement Team (FAST) Program to utilize ADR techniques to resolve EEO appeals that have been filed in OFO. The FAST Program focuses on appeals that have been decided based on final agency decisions (FAD) on the merits. According to EEOC, qualified EEOC staff, who are experts in federal sector EEO law, conduct ADR to assist parties in reaching a mutually satisfactory agreement. Participation in the FAST Program is voluntary for both parties. Two practitioners made suggestions that would further communication outside of an ADR program. One described a precomplaint resolution program to address all issues involving the terms and conditions of employment, including EEO complaints. This practitioner stated that the program generally has been successful in resolving issues that do not belong in the EEO process, addressing matters before they become formal EEO complaints, and correcting situations that could result in a hostile environment or harassment claims. An EEO counselor suggested increased training in conflict management and effective communication for employees and supervisors, as well as including conflict management in the performance plans of both employees and supervisors, to focus the responsibility for resolving everyday conflicts on the parties themselves rather than bringing in a third party.

Reinforcing Commitment at All Levels in the EEO Complaint Process

Our prior work has shown that commitment from top management is key to successful management improvement initiatives. For example, our work on leading diversity management identified top management commitment as a fundamental element in the implementation of diversity management initiatives. Similarly, EEOC MD-715 emphasizes the importance of demonstrating commitment to equality of opportunity for all employees and applicants for employment that is communicated throughout the agency from the top down.
Agency heads have many ways to demonstrate commitment to equal opportunity and a workplace free of discriminatory harassment, but one important way is to provide the EEO director with “a seat at the table,” that is, access to the agency head. Having the EEO director report to the head of the agency sends a message to employees and managers about the importance of and commitment to the EEO program. An EEO practitioner stated that agencies should adhere more closely to existing EEOC requirements on delineating the reporting lines of authority for EEO directors. EEOC advises that following each yearly submission of the MD-715 report to EEOC, EEO directors should present the “state of the EEO program” to the agency head, outlining, among other things, the effectiveness, efficiency, and legal compliance of the agency’s EEO program. EEOC reported in its fiscal year 2007 annual report that 63 percent of EEO directors presented such a report. EEOC also emphasized other important ways of demonstrating commitment to the EEO program: ensuring that EEO professionals are involved with and consulted on the management and deployment of human resources, providing managers with training in EEO-related matters, involving managers and employees in implementing the EEO program, and informing employees about the program. A majority of respondents indicated that it would be very or extremely important for agency management, agency EEO directors, and EEO investigators to reinforce their current level of commitment to the federal EEO complaint process. According to one EEO practitioner, agencies need to make the EEO function a priority in terms of importance, expectations, and oversight. Another demonstrated means of support from the agency head, as one practitioner stated, is adequate funding and staffing of the EEO function within the agency.
For example, an EEO counselor indicated that agencies have to move away from “dumping” agency employees in EEO offices and instead staff those offices with individuals who have the appropriate skill sets, perhaps even legal backgrounds, to develop credible programs. According to a plaintiffs’ attorney, EEO must receive support from agency heads, and EEOC’s most recent federal workforce report shows that a significant percentage of agency heads did not issue an annual statement supporting EEO as recommended by EEO guidance. The practitioner suggested that agency heads who could not be bothered to issue a statement certainly could not be bothered to make EEO an agency priority. In its fiscal year 2007 annual report, EEOC reported that of the 167 agencies and subcomponents that submitted fiscal year 2006 MD-715 reports, 68 percent issued EEO policy statements, an increase over the 50 percent of the 158 agencies and subcomponents that submitted MD-715 reports in fiscal year 2005.

Raising Other Options for Changing the EEO Complaint Process

Stakeholders raised other options for changing how EEO complaints are processed that were outside of the eight factors that we used to group participant and stakeholder responses and solutions. For example: Some stakeholders noted the considerable amount of time that can elapse from the filing of a formal EEO complaint through the administrative process to the potential conclusion of the matter in federal court and suggested that complainants be given the choice of using the administrative or the judicial process but not be permitted to use both. Under this option, stakeholders provided that the administrative process could afford the right to a judicial appeal of that administrative decision to a U.S. federal court of appeals.
Other stakeholders, concerned with the multiple forums that complainants have available, suggested an administrative tribunal that could handle the full variety of issues, including discrimination, prohibited personnel practices, and unfair labor practices. Stakeholders indicated that this could avoid the problem of a matter going to more than one forum and could avoid the difficulty encountered (and mistakes made in assessing the nature of a complaint) by a complainant when faced with making a forum choice at the outset. Some stakeholders raised concern over the number of complaints accepted into the process that should not be (i.e., frivolous, not discrimination) and supported having EEO complaints go through a process similar to that for unfair labor practice allegations. With unfair labor practice allegations, an investigation by an independent third party serves to eliminate matters that should not go forward before a full-scale hearing is afforded. Some stakeholders observed that under options in which an individual goes directly to a third party with allegations, the adversarial nature of the process could potentially increase. One stakeholder observed that such options could require mandatory ADR to minimize this potential effect. Concern was also raised by another stakeholder that some options may serve to preclude lower-graded employees from pursuing claims where the option does not provide for a cost-free investigation.

Improving Equal Opportunity in the Federal Workforce

Through the use of several initiatives introduced in fiscal year 2008, EEOC is seeking to help federal agencies achieve model EEO programs where they can make employment decisions that are free from discrimination and that remove barriers to free and open workplace competition. One such tool is EEOC’s EEO Program Compliance Assessment (EPCA), a type of scorecard that is divided into two sections.
In the EEO program activities section, EEOC evaluated agencies on selected indicators under each model element of MD-715 using fiscal year 2006 data and reports. Among the indicators measured were timeliness of investigations, FADs, and submission of complaint files for hearings and appeals. EEOC also measured agencies’ use of ADR. EPCA does not evaluate agencies on the quality of their investigations, but according to EEOC officials, the commission is currently examining how to incorporate the quality of agencies’ investigations as a performance measure under EPCA. In the EEO program outcome indicators section, EPCA includes selected responses from OPM’s fiscal year 2006 Federal Human Capital Survey to five survey questions as “proxy outcome indicators” to gauge each agency’s progress in creating a fair and inclusive workplace. The outcome indicators section also includes workforce analyses based on race, national origin, gender, and targeted disabilities that show how a particular agency’s workforce is composed by major occupation and compare it to the civilian labor force; provides an odds ratio analysis on promotions in the senior grade levels; and shows agencies how they compare to the federal government as a whole on various climate and other issues. During our audit work, agencies’ EPCA results were available to the public on EEOC’s Web site; however, EEOC has since removed the results. According to a senior EEOC official, EEOC is evaluating the appropriate use of the program indicators in EPCA in an attempt to ensure that the indicators chosen are accurate measures of the performance of agency EEO programs. The official did not provide a time frame for this evaluation. In addition to EPCA, EEOC stated in its fiscal year 2008 performance and accountability report that a key strategy in its efforts to be more responsive to federal agencies was the continued development of its relationship management pilot.
This initiative was first piloted in fiscal year 2004 and involves EEOC personnel partnering with EEO staff in 11 agencies in a consultative relationship to improve customer service and help them successfully implement the essential elements of MD-715’s model EEO program. In addition to these activities, EEOC staff provide trend analysis feedback to selected agencies on their MD-715 submissions, and EEOC is conducting on-site reviews of five agencies with high underrepresentation of racial minorities at the Senior Executive Service level and of another agency to investigate a spike in retaliation complaints. Finally, in June 2008, EEOC announced a proposal that brought together previous EEOC commissioners’ efforts. Among the changes contained in the notice are the following:

- A requirement that agency EEO programs comply with EEOC regulations, MDs (MD-110 and MD-715), and management bulletins and that EEOC will review agency programs for compliance.
- Permission from EEOC for agencies to conduct pilot projects—usually for not more than 12 months—for processing complaints in ways other than those prescribed in EEOC regulations (Part 1614).
- A requirement that an agency that has not completed an EEO investigation within the 180-day time limit notify the complainant in writing that the investigation is not complete and when it will be completed and that the complainant has the right to request a hearing or file a lawsuit.

The proposals for EEOC to review compliance with its regulations, MDs, and other guidance and to provide additional notification to complainants have the potential for an immediate impact on the EEO complaint process. By reviewing compliance, EEOC could address several of the factors that EEO practitioners indicated impede the timely processing of complaints and independence.
For example, requiring agency compliance with regulations and MDs delineating the reporting lines of authority for EEO directors and the roles of agency offices of general counsel in the EEO complaint process could help strengthen the independence of EEO professionals to fulfill their responsibilities. As we stated earlier, EEOC stated its belief that a requirement to notify the complainant in writing about complaints that have not been investigated within 180 days may provide an incentive for agencies to complete investigations in a timely manner. Pilot projects could provide helpful data with which EEOC could make decisions about future improvements to the federal sector EEO complaint process. For example, the Department of Defense (DOD) had the authority to operate pilot programs outside of the procedural requirements prescribed by EEOC to improve processes for the resolution of EEO complaints by civilian employees of DOD. DOD operated three such programs between 2005 and 2007, although only one of the three DOD pilot programs met the criteria of “operating outside of EEOC regulations.” The other two operated within the framework of EEOC regulations by increasing the use of ADR to informally settle disputes before they became formal complaints. Our prior work on the DOD pilot programs showed the importance of having a sound evaluation plan, including key features that are essential for assessing the performance of the pilot programs and making determinations regarding the wider applications of the pilot programs. Some key features of a sound evaluation plan include the following:

- Well-defined, clear, and measurable objectives.
- Measures that are directly linked to the program objectives.
- Criteria for determining pilot program performance.
- A way to isolate the effects of the pilot programs.
- A data analysis plan for the evaluation design.
- A detailed plan to ensure that data collection, entry, and storage are reliable and error free.
In addition to the importance of having a strong evaluation program, our work on the DOD pilots also identified lessons learned that can be instrumental for EEOC and potential pilot program officials as they consider whether to institute pilot projects to address concerns that have been identified with the EEO complaint process. For example, it is important to (1) involve senior management and stakeholder groups in designing, implementing, and evaluating the pilot program to help with buy-in; (2) emphasize the importance of customer feedback; and (3) include mechanisms to solicit such feedback. As of May 2009, EEOC had not issued its notice of proposed rulemaking outlining such specific features as the number of pilot projects, how they will operate, or how they will be evaluated. The solutions that EEO practitioners and others have offered to improve the quality and timeliness of investigations may provide candidates for the pilot projects, allowing EEOC to make data-driven decisions about changes to the federal EEO complaint process.

Conclusions

Equal opportunity in the federal workplace is key to enabling federal agencies to meet the complex needs of our nation. Agencies must make a firm commitment to the principles of equal opportunity and make those principles a fundamental part of agency culture so that all employees can compete on a fair and level playing field and have the opportunity to achieve their potential, without regard to race, color, religion, national origin, age, gender, or disability. Holding agencies accountable for adhering to EEOC regulations and guidance will help EEOC to ensure that the EEO complaint process is operating as intended. EEO practitioners and others have identified shortcomings in the operation of the federal EEO process at both the agencies and EEOC.
Some of these shortcomings could potentially be addressed through additional guidance that EEOC has stated it intends to issue in such areas as the appropriate relationship between EEO offices and offices involved in defending the agencies against discrimination complaints as well as what constitutes a claim; it will be important for the commission to follow through with this guidance. Additionally, EEOC is considering allowing agencies to conduct pilot projects for processing complaints outside of EEOC regulations. If agencies were to participate in pilot projects, it would be important for them to have well-developed evaluation plans that include key evaluation features. Pilots that are undertaken without sound evaluation plans increase the likelihood of insufficient or unreliable data, limiting confidence in pilot project results. Without confidence in pilot project results, EEOC will be limited in its decision making regarding the pilot projects, and to the extent that proposed changes in the federal EEO complaint process require congressional action, Congress will be limited in its decision making about the pilot projects’ potential broader application.

Recommendations for Executive Action

If pilot projects are approved by EEOC, we recommend that the Acting Chairman of EEOC take the following two actions:

Direct pilot project officials to develop for each pilot project an evaluation plan that includes key features to improve the likelihood that pilot project evaluations will yield sound results, such as well-defined, clear, and measurable objectives; measures that are directly linked to the program objectives; criteria for determining pilot program performance; a way to isolate the effects of the pilot programs; a data analysis plan for the evaluation design; and a detailed plan to ensure that data collection, entry, and storage are reliable and error free.
Direct commission staff to review and approve pilot projects’ evaluation plans to increase the likelihood that evaluations will yield methodologically sound results, thereby supporting effective program and policy decisions.

Agency Comments

We provided a draft of this report to EEOC for review and comment. In a June 24, 2009, letter, EEOC’s Acting Chairman agreed with our recommendations and stated that EEOC plans on implementing them. The Acting Chairman further stated that EEOC is committed to improving the timeliness of complaint processing, enhancing the quality of the investigative reports as well as the hearing and appellate decisions, and ensuring greater accountability by all parties in the federal sector complaint process. EEOC’s letter is reprinted in appendix III. We are sending copies of this report to the Attorney General; the Acting Chairman, Equal Employment Opportunity Commission; and interested congressional committees and subcommittees. The report also is available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-6806 or [email protected] if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

As agreed with interested congressional committees, this report provides the results of our analysis of (1) factors that practitioners identified that they believe impede the prompt, fair, and impartial processing of federal equal employment opportunity (EEO) complaints and (2) actions that practitioners and other stakeholders think could be taken to address those factors. We also include information on what the Equal Employment Opportunity Commission (EEOC) is doing to improve equal opportunity in the federal workforce.
Objectives 1 and 2

For the purposes of this review, we surveyed individuals whose work roles and responsibilities put them in regular contact with the federal EEO complaint process, thereby ensuring their familiarity with and knowledge about the process. Based on prior GAO work on the EEO process, we identified seven categories of individuals familiar with the federal EEO complaint process. We termed these individuals “EEO practitioners” and collected their informed views concerning the EEO complaint process. We derived our seven categories of practitioners from three sources: individual agencies, EEOC, and the plaintiffs’ attorney community. Agency practitioners we surveyed included the EEO directors responsible for administering agency EEO programs, EEO counselors responsible for reviewing complainants’ initial allegations and advising them on their roles and responsibilities in the EEO process, EEO investigators responsible for investigating EEO complaints, and legal counsels responsible for advising and defending agencies against EEO complaints. EEOC practitioners included the EEOC administrative judges (AJ) responsible for adjudicating complaints, conducting hearings, and issuing decisions on EEO complaints, and EEOC appeals attorneys responsible for processing appeals of decisions. The plaintiffs’ attorneys represent individual employees who filed EEO complaint cases. We obtained e-mail addresses, physical addresses, and telephone numbers for all EEO practitioners in order to contact them.

Agency Selection

To attain a wide representation of agencies, we selected agency-level EEO practitioners from 17 agencies based on agency size, complaint activity, and investigation source (in-house versus contractor) as of fiscal year 2005.
In an effort to obtain a sufficiently representative and diverse group of large, medium, and small agencies from which to begin our selection process, we focused on agencies that had reported at least 50 complaints filed in fiscal year 2005 and considered the number of employees at each agency and the mechanism the agencies used in fiscal year 2005 to investigate complaints—primarily agency employees, contract investigators, or a mix of in-house and contract investigators. The 17 agencies that we selected on the basis of the number of complaints filed and the mechanism for EEO investigations were the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, the Interior, Justice, Transportation, the Treasury, State, and Veterans Affairs; the Environmental Protection Agency; the General Services Administration; the Office of Personnel Management; and the U.S. Postal Service. EEO complaints filed at the selected 17 agencies in the aggregate represented 91 percent of EEO complaints filed governmentwide in fiscal year 2005. We decided against including the Department of Justice (Justice) after agency officials said that their practitioners’ survey responses first would have to undergo vetting within the agency. Under those conditions, we could not allow Justice practitioners to participate in the survey. We so advised Justice representatives during a telephone conference, during which we also proposed interviewing Justice officials later in the engagement about possible changes to the EEO complaint process. Justice representatives agreed to participate under those conditions. The decision not to include Justice left us with 16 agencies that in the aggregate represented 87.5 percent of EEO complaints filed governmentwide in fiscal year 2005.
Finally, for a total of 17 agencies, we included EEOC because of the roles that its AJs play in adjudicating hearings of EEO complaints and its appeals attorneys play in adjudicating appeals of decisions on those complaints.

EEO Practitioner Selection

To recruit EEO practitioners from the 16 agencies, we contacted EEO directors at these agencies by telephone and e-mail, informed them about the nature of our review, requested their participation in the survey, and asked them to nominate EEO counselors, investigators, and agency counsel. We contacted EEOC officials to recruit EEOC AJs and appeals attorneys. We also contacted plaintiffs’ attorneys from the private sector. The selected practitioners represent different parts of the complaint process, and some of the practitioners may only be familiar with their part of the process. We recruited an equal number of individuals from each category of EEO practitioners to attain a wide representation of agencies and reduce possible bias in the final results. To achieve a more independent distribution of agency practitioners, we selected our final list of practitioners from 16 agencies (not including EEOC) in an effort to reduce the risk of collaborative responses caused by horizontal integration. We recruited no more than three practitioners for each of the four categories of agency practitioners (i.e., directors, counselors, investigators, and agency counsels) to lessen the likelihood that any of the agencies would have all categories of practitioners and to ensure a broader perspective on the issues. In all, we selected 36 practitioners, 9 from each group of agency practitioners. We did not select a member of every practitioner group from every agency. In addition to the agency practitioners, we also sought the perspectives of practitioners from EEOC, which administers, provides guidance on, and oversees the federal EEO complaint process.
We asked EEOC supervisors and nonsupervisors to nominate EEOC AJs and appeals attorneys to participate in the survey. To recruit AJs, we also considered recommendations from EEOC management and from an organization representing EEOC AJs, contacted the nominees to request their participation in the survey, and asked them to recommend other AJs, whom we then also contacted. We selected nine EEOC appeals attorneys and nine EEOC AJs. Finally, we selected nine plaintiffs’ attorneys after considering relevant information from other EEO practitioners and people in the EEO community. To address our objectives, we primarily used two Web-based surveys to systematically collect and distill knowledge from the EEO practitioners we had selected.

Phase I Survey

Our first Web-based survey consisted of open-ended questions that were designed to capture the practitioners’ narrative responses. Specifically, we asked practitioners three questions: (1) Based on your experience as an EEO practitioner, what are the most important factors you have observed that materially impede the prompt, fair, and impartial processing of complaints at your agency, or at EEOC, and how have those factors impeded complaint processing? (2) What specific changes could be made to address the factors you listed above, in order to promote the prompt, fair, and impartial processing of federal EEO complaints? (3) What changes have been made to the EEO complaint process at your agency? What effects did these changes have on the prompt, fair, and impartial processing of EEO complaints at your agency? Before launching each survey, we conducted a series of pretests with internal and external EEO practitioners, including some actual survey respondents. The goals of the pretests were to check that (1) the questions were clear and unambiguous and (2) the terminology was used correctly.
To conduct pretests, we selected representatives from several practitioner categories, provided them with survey drafts for their review, and interviewed them either in person or by teleconference to obtain their opinions about the language, format, and tone of questions in the survey. Based on the reactions of practitioners, we changed the survey content and format during pretesting as necessary. We also conducted usability tests that entailed checking each practitioner’s password, user name, and link to ensure their operability before we launched the Web survey. To activate the survey, we posted it to the Internet. We notified the 63 EEO practitioners of the availability of the questionnaire with an e-mail message that contained a unique user name and password that allowed each respondent to log on and fill out a questionnaire while preventing respondents from gaining access to the surveys of others. Using their access information, practitioners could access the survey on the Internet at any time and could complete it at their convenience. If practitioners did not respond to the confidential link we provided, we accepted official submissions for responses in another format (e.g., e-mail). Access to the Phase I survey formally began on April 9, 2007, after which practitioners had approximately 8 weeks from April 2007 through May 2007 to complete the survey. While the survey was ongoing, we wrote follow-up e-mails and made telephone calls to practitioners who did not initially respond to the survey to ensure that we made every effort to reach them. Of the 63 practitioners to whom we made the Phase I survey available, 1 practitioner informed us that she did not work in one of our practitioner categories. As she was the only respondent from her agency, we sent the survey to another EEO practitioner at that agency. 
When that practitioner did not respond, we sent the survey to an official from the same agency, because that official’s office is in charge of the discrimination complaint counseling and investigation processes and alternative dispute resolution. Thus, we selected 65 practitioners to participate in the Phase I Web-based survey. By June 2007, 62 of the 65 EEO practitioners (about 95 percent) had completed the Phase I survey. Responses to the survey express only the views and attitudes of the practitioners. Once the Phase I survey was complete, we conducted a content analysis of practitioners’ open-ended narrative responses to that survey. We developed a coding system that was based on the type of practitioner, the individual respondent, sequential numbers to identify the response, and the type (solution or factor) of response. We assigned individual codes to each sentence or paragraph provided by each practitioner. Based on our content analysis of Phase I responses, we developed a list of eight broad categories of factors—accountability, knowledge and skills, authority, independence, commitment, resources, communication, and laws and guidance—into which we grouped the responses. We also included “Other” and “Not applicable” categories where we placed the very small number of responses that did not fit under the eight factors. Some Phase I survey responses may have addressed multiple issues and so may have been classified into more than one of these factors. We did not assess the validity of the practitioners’ views of impediments or solutions to the EEO complaint process or evaluate the effectiveness of initiatives that agency EEO practitioners said their agencies had implemented to improve their complaint processes. We report the views of practitioners who are knowledgeable of the federal EEO complaint process, but these views do not represent the official views of the 17 agencies.
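The coding system described above can be sketched in a few lines. The code layout below (practitioner type, respondent number, response sequence number, and an "F"/"S" flag for factor or solution) follows the description in the text, but the exact format and the sample entries are hypothetical illustrations, not the actual scheme used.

```python
# Eight broad factor categories from the content analysis, plus the two
# catch-all categories described in the text.
FACTORS = [
    "accountability", "knowledge and skills", "authority", "independence",
    "commitment", "resources", "communication", "laws and guidance",
    "other", "not applicable",
]

def make_code(practitioner_type: str, respondent_id: int,
              sequence: int, response_type: str) -> str:
    """Build a unique code for one coded sentence or paragraph.

    response_type is 'F' for a factor (impediment) or 'S' for a solution.
    The zero-padded layout shown here is a hypothetical convention.
    """
    assert response_type in ("F", "S")
    return f"{practitioner_type}-{respondent_id:02d}-{sequence:03d}-{response_type}"

def categorize(code: str, factor: str, coded: dict) -> None:
    """File a coded response under one of the broad categories.

    A single response may be filed under more than one factor, as the
    text notes.
    """
    assert factor in FACTORS
    coded.setdefault(factor, []).append(code)

coded_responses: dict = {}
code = make_code("AJ", 7, 12, "F")          # hypothetical AJ response
categorize(code, "independence", coded_responses)
categorize(code, "resources", coded_responses)  # multiple categories allowed
```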
In addition, the practitioners’ views cannot be generalized to all federal agencies and EEO practitioners for some or all of the factors identified. Phase II Survey After categorizing all responses according to the eight broad factors, we used the results as a basis for developing the closed-ended questions that made up the Phase II survey and asked practitioners to rank on a scale of 1 through 8 the solutions they considered to be most important for improving the current federal EEO complaint process. As we had done for the Phase I survey, we conducted pretests of the Phase II survey with practitioners to ensure that our questions were clear and unambiguous and that the terminology was being used correctly. For pretest subjects, we selected representatives from each of the practitioner categories and included some actual survey respondents. We provided them with survey drafts for their review and interviewed them in person or by telephone. We modified the draft survey to address feedback we received from pretesters. The Phase II survey formally began on January 10, 2008. We sent the survey to the 62 EEO practitioners who responded to the Phase I survey. Survey respondents took approximately 8 weeks, from January 2008 through February 2008, to complete the second phase of the survey. We wrote follow-up e-mails and made numerous telephone calls to contact practitioners who did not initially respond to the survey to ensure that we obtained responses from as many practitioners as possible. In all, 56, or about 90 percent, of the 62 practitioners completed the Phase II survey, which refines the results of the Phase I survey by asking respondents to provide their views as to where directed improvements in the EEO complaint process for each of the eight broad factors from the Phase I survey could have the greatest effect. 
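Rankings on a 1-through-8 scale like these can be aggregated in several ways; the sketch below, with hypothetical sample data, orders factors by how often respondents place them in their top three, which matches the frequency-based approach used for the overall rankings.

```python
from collections import Counter

def overall_ranking(rankings):
    """Order factors by how often respondents ranked them 1st, 2nd, or 3rd.

    `rankings` is a list of per-respondent dicts mapping factor name to
    rank (1 = most important, 8 = least important). The sample data below
    is hypothetical.
    """
    top3_counts = Counter()
    for respondent in rankings:
        for factor, rank in respondent.items():
            if rank <= 3:
                top3_counts[factor] += 1
    # most_common() sorts by frequency of top-three placements, highest first.
    return [factor for factor, _ in top3_counts.most_common()]

sample = [
    {"accountability": 1, "resources": 2, "independence": 3, "authority": 4},
    {"resources": 1, "accountability": 2, "authority": 3, "independence": 5},
    {"accountability": 1, "independence": 2, "resources": 4, "authority": 3},
]
print(overall_ranking(sample))  # "accountability" leads: ranked top-3 by all
```

Counting only top-three placements, as here, weights a first-place vote the same as a third-place vote; a weighted (Borda-style) count is an equally plausible reading of "order of frequency."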
The Phase II survey asked respondents to rank each of the eight factors identified in the Phase I survey from highest to lowest in terms of importance for improving the federal EEO complaint process. Once respondents completed the Phase II survey, we computed overall rankings of the factors according to the order of frequency in which respondents ranked them as most, second most, or third most important. The views expressed by the survey respondents do not represent the views of GAO. Discussions with Stakeholders We also gathered information to address our second objective by interviewing representatives from a variety of stakeholder organizations in the federal EEO complaint process, including federal employee unions, federal executive and managers associations, agency attorneys’ associations, and federal employee organizations, to obtain their views regarding possible changes that could be made to the federal EEO complaint process and the advantages and disadvantages of implementing such changes. We selected these stakeholder organizations based on a literature search, recommendations from EEOC, and our professional judgment in an effort to compile a diverse list of organizations with involvement in EEO activities or that represented specific groups protected by EEO laws. The stakeholder organizations we contacted for this review do not represent all of the potential stakeholder organizations from specific groups protected by EEO laws. Using a preliminary list we developed, we obtained the names, street addresses, and e-mail addresses of officials from these organizations and conducted interviews with representatives from these organizations in their headquarters offices and in facilitated group meetings at GAO headquarters. 
Before conducting the stakeholder interviews, we e-mailed representatives a document that contained preliminary information from our Phase I survey and descriptions of several possible options for reassigning responsibilities for operating federal EEO investigations, counseling, hearings, and appeals to EEOC, another agency, or a hypothetical entity. We provided this information to enable stakeholders to review the document before interviews where it would serve as a point of discussion. During these interviews, we asked stakeholder organization representatives whether they thought our eight broad factors adequately captured the complex issues in the federal EEO complaint process and to identify the advantages and disadvantages of implementing the structural options that we had described for changing the EEO complaint process. We analyzed the views of these stakeholder organization representatives by reviewing their observations concerning our eight broad factors as well as their observations on the possible options for making changes to the EEO complaint process. Actions Taken by EEOC to Improve Equal Opportunity in the Federal Workforce To identify actions taken by EEOC to improve the federal EEO complaint process, we reviewed EEOC documents and interviewed commission officials. We did not evaluate the effectiveness of actions EEOC reported taking. We conducted this performance audit from May 2006 through August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: EEO Laws Applicable to Federal Employees Title VII of the Civil Rights Act of 1964, as amended, makes it illegal for employers, including federal agencies, to discriminate against their employees or job applicants on the basis of race, color, religion, sex, or national origin. The Equal Pay Act of 1963 protects men and women who perform substantially equal work in the same establishment from sex- based wage discrimination. The Age Discrimination in Employment Act of 1967, as amended, prohibits employment discrimination against individuals who are 40 years of age or older. Sections 501 and 505 of the Rehabilitation Act of 1973, as amended, prohibit discrimination against qualified individuals with disabilities who work or apply to work in the federal government. Federal agencies are required to provide reasonable accommodation to qualified employees or applicants for employment with disabilities, except when such accommodation would cause an undue hardship. In addition, a person who files a complaint or participates in an investigation of an EEO complaint or who opposes an employment practice made illegal under any of the antidiscrimination statutes is protected from retaliation. The EEOC is responsible for enforcing all of these laws. Appendix III: Comments from the U.S. Equal Employment Opportunity Commission Appendix IV: GAO Contact and Staff Acknowledgments Acknowledgments In addition to the contact named above, Anthony Lofaro, Belva Martin, and Kiki Theodoropoulos, Assistant Directors; Gerard Burke; Jeff Dawson; Brandon Haller; Karin Fangman; Jeff Niblack; and Greg Wilmoth made major contributions to this report.
Delays in processing federal equal employment opportunity (EEO) complaints, apparent or perceived lack of fairness and impartiality in complaint processing, and fear of retaliation in the workplace have been long-standing concerns of the Equal Employment Opportunity Commission (EEOC), other federal agencies, and Congress. Based on a Notification and Federal Employee Antidiscrimination and Retaliation Act mandate, GAO analyzed (1) factors that EEO practitioners have identified as impeding the fair, prompt, and impartial processing of federal EEO complaints and (2) actions that EEO practitioners and other stakeholders think could be taken to help address those factors. GAO also identified actions that EEOC is taking to improve the federal complaint process. GAO surveyed 65 EEO practitioners representing a wide cross section of professionals knowledgeable about the federal EEO complaint process, who were selected from 16 federal agencies that accounted for about 88 percent of complaints filed in fiscal year 2005, EEOC, and private sector attorneys' offices. GAO did not assess the validity of practitioners' views or evaluate the effectiveness of initiatives. 
GAO grouped into eight categories the factors that EEO practitioners identified as impeding the fair, prompt, and impartial processing of federal EEO complaints: (1) lack of accountability by some agency officials and EEOC practitioners in carrying out their responsibilities; (2) lack of sufficient resources for some EEO programs and EEOC to fulfill their responsibilities; (3) lack of independence by some agency officials, including undue interference by some agency legal counsel and human resources officials in EEO matters; (4) insufficient knowledge and skills among some agency officials and EEO practitioners; (5) lack of authority by some EEO officials to dismiss cases that have no merit and lack of subpoena power by EEOC administrative judges (AJ); (6) lack of clarity in regulations and some guidance from EEOC, and a lack of consistent EEOC decisions; (7) lack of effective communication by some EEO practitioners of relevant oral and written information to participants, including the availability of alternative dispute resolution; and (8) lack of a firm commitment by some agency management and EEO officials to the EEO process. The practitioners' views do not represent the official views of the selected agencies and should not be generalized to conclude that all federal agencies and EEO practitioners are deficient in all factors identified. Also, a few stakeholders GAO contacted stated that without the perception that the complaint process is fair, people may choose not to participate in it; GAO believes this concern is important and has accounted for it within the discussion of several of the factors. EEO practitioners surveyed and stakeholders suggested potential solutions to address the factors practitioners identified and provided information on relevant changes their agencies had made to the process.
For example, to strengthen accountability, practitioners reported establishing measures for timeliness and quality for agency EEO professionals and those contracted to perform EEO complaint functions. To strengthen EEO staff's independence, several practitioners and stakeholders offered that agencies should adhere more closely to existing EEOC requirements on delineating the roles of agency general counsels in the EEO process. Stakeholders offered potential advantages and disadvantages of allowing complainants to file directly with EEOC as a means of avoiding the real or perceived conflicts of allowing an agency to investigate a complaint against itself. Several practitioners and EEOC officials stated that providing subpoena authority to AJs could help improve the efficiency of the EEO complaint process by compelling witnesses to testify. To help agencies achieve model EEO programs, EEOC has begun to measure agencies' progress in such areas as the timeliness of investigations. In June 2008, EEOC announced a proposal that includes provisions that may address some of the factors that practitioners identified. The proposal would require that agency EEO programs comply with EEOC regulations and other guidance and that EEOC review those programs for compliance. The proposal also would permit agencies to conduct pilot projects to test new ways to process EEO complaints that are not presently included in existing regulations.
Introduction The mutual fund industry’s growth in the last decade has been phenomenal. At the end of 1984, mutual funds managed assets totaling about $371 billion. By 1990 managed assets had grown to over $1 trillion, and by December 1994 this figure had doubled to about $2.2 trillion, second only to the $2.4 trillion in total deposits held in U.S. commercial banks. According to the Investment Company Institute (ICI), the national trade association of the mutual fund industry, there are a number of reasons for the industry’s growth. These include appreciation in the value of assets held by the mutual funds; additional purchases by existing shareholders; the introduction of new types of products and services; the growth of the retirement plan market; increased investment by institutional investors; the introduction of new distribution channels—such as banks; and a shift by individual investors from direct investments in stocks, bonds, and other securities to investment in securities through mutual funds. Also, in recent years investors have shifted to mutual funds in an attempt to obtain a better investment return than has been available through alternative investments, such as certificates of deposit. Background A mutual fund, formally known as an open-end investment company, pools the money of many investors. These investors, which can be either individuals or institutions, have similar investment objectives, such as maximizing income or having their investment capital appreciate in value. Their money is invested by a professional manager in a variety of securities to help the investors in the fund reach their objectives. By investing in a mutual fund, investors are able to obtain the benefits of owning a diversified portfolio of securities rather than a limited number of securities. This can lessen the risks of ownership. In addition, investors gain access to professional money managers, whose services they might otherwise be unable to obtain or afford. 
Each dollar that an investor puts into a mutual fund represents some portion of ownership in that fund. On each day on which purchase or redemption requests are made, a fund must calculate its share price: the market value of the assets in the fund’s portfolio, less expenses, divided by the number of shares outstanding. The resulting figure is known as the net asset value (NAV). Per-share values change as the value of the assets in the fund’s portfolio changes. Investors can sell their shares back to the fund at any time at the current NAV. Many newspapers carry the purchase and redemption prices for mutual funds on a daily basis. A mutual fund is owned by its hundreds or thousands of shareholders. A board of directors is responsible for overseeing the fund’s investment policies and objectives. The board generally does not do the work of the fund itself, but instead contracts with third parties to provide the necessary services. The Investment Company Act of 1940 requires that at least 40 percent of the board of directors be independent of the fund, its adviser, and underwriter. One of the functions of the board is to approve the mutual fund’s contracts with its investment adviser. The investment adviser plays a key role in the operation of a mutual fund. The investment adviser manages the fund’s investment portfolio by deciding what securities to buy and sell in accordance with the fund’s stated investment objectives.
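The NAV calculation described above reduces to a one-line formula: portfolio market value less expenses, divided by shares outstanding. The figures in this sketch are hypothetical.

```python
def net_asset_value(portfolio_value: float, expenses: float,
                    shares_outstanding: float) -> float:
    """NAV per share: (market value of portfolio assets - expenses)
    divided by the number of shares outstanding."""
    return (portfolio_value - expenses) / shares_outstanding

# Hypothetical fund: $50.25 million in assets, $250,000 in expenses,
# 2 million shares outstanding.
nav = net_asset_value(portfolio_value=50_250_000,
                      expenses=250_000,
                      shares_outstanding=2_000_000)
print(f"NAV per share: ${nav:.2f}")  # $25.00
```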
Other functions of the board include choosing the administrator, who generally acts as the fund’s manager by keeping the books and records, filing necessary reports with the Securities and Exchange Commission (SEC), and calculating NAV; the distributor or “underwriter,” who either sells the fund’s shares directly to the public or enters into agreements with broker-dealers or banks that will in turn sell them to retail customers; the transfer agent, who keeps track of fund shareholders and maintains information about the number of shares owned by investors; and the custodian, who is responsible for safeguarding the cash and securities assets of the fund, paying for securities when they are purchased by the fund, and collecting the income due when securities are sold. Mutual funds are sold to the public in two basic ways: directly to the public or through a sales force, such as a broker. With direct marketing, funds often solicit customers through newspaper, television, and magazine advertising or direct mail. These funds typically have low or no sales fees, or “loads.” Funds that are marketed primarily through a sales force are usually available through a variety of channels, including brokers, financial planners, banks, and insurance agents. These sales people may be compensated through a load, which is included in the price at which the fund’s shares are offered; through a distribution fee paid by the fund; or both. Regulation of Bank and Thrift Mutual Fund Activities The mutual fund activities of banks and thrifts are subject to a number of federal and state securities and banking laws and regulations—and to the shared oversight of a variety of federal and state securities and banking regulators. In general, the mandate of securities laws is to protect investors through full and timely disclosures, while many banking laws are geared to protecting depositors and ensuring bank safety and soundness. 
Federal Securities Laws Apply to Mutual Fund Activities The principal securities laws that apply to mutual funds are the Investment Company Act of 1940; a companion law, the Investment Advisers Act of 1940; and the Securities Act of 1933. These laws are intended to foster full disclosure of the risks involved in buying mutual funds and to protect investors. The Investment Company Act requires all mutual funds to register with SEC. The act contains numerous requirements relating to the operation of funds, including rules on the composition and election of boards of directors, disclosure of investment objectives and policies, approval of investment advisory and underwriting contracts, limitations on transactions with affiliates, permissible capital structures, custodial arrangements, reports to shareholders, and corporate reorganizations. Investment advisers to mutual funds, except banks, are subject to the Investment Advisers Act, which requires that any firm in the business of advising others as to the value of securities register with SEC. The Advisers Act also imposes reporting requirements on registered investment advisers and subjects them to restrictions against fraudulent, deceptive, or manipulative acts or practices. The Securities Act of 1933 requires that all publicly offered shares of any issuer, including mutual funds, be registered with SEC. In addition, SEC has adopted rules under that act to require extensive disclosures in a fund’s prospectus, including information about the fund’s investment objectives and policies, investment risks, and all fees and expenses. The act also regulates mutual fund advertising. In addition to the above laws, another securities law, the Securities Exchange Act of 1934, regulates how shares in mutual funds are sold. This act requires that persons distributing shares or executing purchase or sale transactions in mutual fund shares be registered with SEC as securities broker-dealers. 
SEC oversees the regulation of mutual funds and their investment advisers under the Investment Company Act and the Investment Advisers Act. SEC reviews disclosure documents, such as prospectuses, and inspects mutual fund operations. It also registers and inspects investment advisers. Broker-dealers who sell mutual funds are regulated and examined by SEC and the National Association of Securities Dealers (NASD). NASD was established pursuant to the Securities Exchange Act of 1934 as a self-regulatory organization for brokerage firms, including those that engage in mutual fund distribution, and is itself subject to SEC’s oversight. SEC and NASD regulate broker-dealers by regularly examining broker-dealer operations on-site and investigating customer complaints. NASD has also established Rules of Fair Practice, which govern standards for advertising and sales literature, including filing requirements, review procedures, approval and recordkeeping obligations, and general standards. In addition, NASD tests individuals to certify their qualifications as registered representatives and has primary responsibility for regulating advertising and sales literature used to solicit and sell mutual funds to investors. Banks Are Exempt From Requirements to Register as Broker-Dealers and Investment Advisers The Securities Exchange Act of 1934 exempts banks from its broker-dealer registration requirements. As a result, banks may choose to have their own employees sell mutual funds and other nondeposit investment instruments without the need to be associated with an SEC-registered broker-dealer or subject to NASD oversight. In those instances, the banking regulators, instead of NASD, are responsible for overseeing the sales activities of bank employees. Banks are also exempt from being defined as investment advisers under the Investment Advisers Act. As a result, banks may serve as investment advisers to mutual funds without registering with SEC. 
However, some banking organizations place their advisory activities in a nonbank subsidiary of a bank holding company. In these cases, the subsidiary is required to register as an investment adviser and is subject to SEC oversight under the Investment Advisers Act. Federal Banking Laws and Regulation In addition to the oversight provided by securities regulators, a number of banking laws apply to banking organizations’ mutual fund activities. One such law is the Glass-Steagall Act, which was enacted in 1933. The Glass-Steagall Act was designed to curb perceived securities abuses and speculative investments by banks that were thought to have contributed to the collapse of commercial banking in the early 1930s. The act prohibits certain securities activities by banks and their affiliates. For example, the act generally prohibits all banks from underwriting (publicly distributing new issues of securities) and dealing (trading for their own accounts) in certain securities directly. It also prohibits Federal Reserve System member banks from purchasing certain securities for the bank’s own account and from having interlocking management relationships with firms that are engaged primarily or principally in underwriting securities. Until the early 1980s, Glass-Steagall was viewed as prohibiting banks from engaging in most mutual fund activities. Since then, a series of federal banking agency decisions and court rulings have eroded the Glass-Steagall restrictions and allowed banks to engage in a wide variety of mutual fund activities that had not been permitted previously. These include serving as the investment adviser to a mutual fund, selling mutual funds to retail and institutional customers, and offering various administrative services, such as recordkeeping and custodial functions. Essentially, banks can now do everything but underwrite a mutual fund.
While the Glass-Steagall Act restricts banks’ mutual fund activities, the actual powers granted banks to engage in these activities and the framework for their regulation and oversight are found in other laws. The powers of national banks to engage in mutual fund activities are contained in the National Bank Act, which is administered by the Office of the Comptroller of the Currency (OCC). State-chartered banks derive their powers from the state laws under which they are chartered, subject to restrictions imposed by the Federal Reserve Act if they are members of the Federal Reserve System. State-chartered banks that are members of the Federal Reserve System are supervised by the Federal Reserve Board as well as by state-level banking authorities. Federally insured state-chartered banks that are not Federal Reserve members are subject to regulation and oversight by the Federal Deposit Insurance Corporation (FDIC) under the Federal Deposit Insurance Act and state banking authorities. The powers of bank holding companies are found in the Bank Holding Company Act of 1956, which is administered by the Federal Reserve Board. The authority for thrifts and their affiliates to engage in mutual fund activities is contained in federal and state laws applicable to savings and loan associations, particularly the Home Owners’ Loan Act and the Federal Deposit Insurance Act. Thrifts are supervised by the Office of Thrift Supervision (OTS). Objectives, Scope, and Methodology This report was prepared in response to requests from the former Chairmen of the House Committee on Banking, Finance, and Urban Affairs and the Subcommittee on Oversight and Investigations of the House Committee on Energy and Commerce that we examine the disclosure and sales practices of banks with respect to mutual funds. 
Our objectives were to (1) determine the extent and nature of bank and thrift involvement in mutual fund sales, (2) assess whether the sales practices followed by banks and thrifts provide bank customers adequate disclosures of the risks of investing in mutual funds, and (3) analyze whether the existing framework for regulation and oversight of bank and thrift mutual fund sales practices and proprietary fund operations adequately protects investors. To determine the extent and nature of bank and thrift involvement in mutual fund sales, we gathered and analyzed information on the size of the mutual fund market, the level of bank and thrift participation in it, and the methods by which banks and thrifts market mutual funds to their customers. To do this, we obtained data from Lipper Analytical Services, Incorporated, a well-known source of data on the mutual fund industry. Lipper maintains a database of information on bank-related mutual funds and publishes a semiannual report, “Lipper Bank-Related Fund Analysis,” which we used as a source for some of the information in this report. We also used other information accumulated by Lipper on the mutual fund industry as a whole. We did not verify the data we obtained from Lipper; however, we asked Lipper to provide us a detailed description of the methods it uses to accumulate data and the internal controls it employs to ensure its accuracy. We used the description to determine that these methods and controls would provide reasonable assurance that the information supplied by Lipper was accurate. In addition, we determined that Lipper’s mutual fund data are widely used in the financial services industry and are considered reliable by those who use it. 
Because Lipper’s database does not cover the extent to which nonproprietary funds are sold through banks, we also surveyed a random sample of 2,610 banks and 850 thrifts to obtain comprehensive data on which of these institutions offer mutual funds for sale, the types of funds they sold, and their recent sales data. In addition, to obtain additional information on the characteristics of mutual fund sales through banks and thrifts, we reviewed pertinent regulatory and industry studies, particularly a survey of mutual funds that was released by ICI in November 1994. To determine whether the sales practices followed by banks and thrifts provide bank customers adequate disclosures of the risks of investing in mutual funds, we developed and mailed a questionnaire to a random sample of banks and thrifts asking these institutions to provide information concerning how and by whom mutual funds are sold to retail customers, how sales personnel are compensated, whether written policies and procedures have been established, and the types of disclosures that are made to retail customers. Posing as bank customers interested in a mutual fund investment, we also visited a randomly selected sample of 89 central offices of banks and thrifts in 12 cities. The purpose of these visits was to observe and document the sales practices of institutions selling mutual funds and to test whether salespersons were following the federal banking regulators’ guidance concerning mutual fund sales programs. A detailed explanation of the methodology we used in our survey questionnaire and in our visits to banks and thrifts is contained in appendix I. 
To gain an understanding of how the existing regulatory framework for overseeing bank sales of mutual funds protects investors, we (1) interviewed selected bank and thrift regulators, securities regulators, bank and thrift officials, and industry representatives in Washington, D.C.; New York, New York; Boston, Massachusetts; Philadelphia, Pennsylvania; San Francisco, California; and Chicago, Illinois; (2) reviewed relevant literature, congressional testimony, studies, regulations, and laws; and (3) reviewed and analyzed financial regulators’ examination policies, procedures, reports, and workpapers. We did our work between May 1993 and December 1994 in accordance with generally accepted government auditing standards. We provided a draft of this report to FDIC, the Federal Reserve, NASD, OCC, OTS, and SEC for comment. Their comments are presented and evaluated in Chapters 3 and 4, and their letters are reprinted in full in appendixes IV through IX, along with our additional comments. The organizations also suggested several technical changes to clarify or improve the accuracy of the report. We considered these suggestions and made changes where appropriate. Banks and Thrifts Have Rapidly Expanded Their Participation in the Mutual Fund Industry Since restrictions on banks’ mutual fund activities were liberalized in the 1980s, banks and thrifts have rapidly expanded their participation in the mutual fund industry. Many institutions have established their own families of mutual funds, called proprietary funds, and the sales of these funds as a percentage of total industry sales have grown sharply. Banks and thrifts have also become major sales outlets for nonproprietary mutual funds. Institutions with assets greater than $1 billion are more likely to sell mutual funds than are those with fewer assets, but most of the institutions that have begun selling mutual funds since the end of 1991 are the smaller ones.
Most banks and thrifts reported that they sell mutual funds for two reasons: to keep their customers and to increase their fee income.

Growth of Proprietary Funds Has Outpaced the Industry as a Whole

As of December 31, 1993, about 114 bank and thrift companies had established proprietary mutual funds. These are funds for which a bank or one of its subsidiaries or affiliates acts as the investment adviser and that are marketed primarily through the bank. Most of these funds have been established by very large banking organizations; of the 114 companies that had proprietary funds as of December 31, 1993, 79 were among the top 100 bank holding companies in the United States.

During the 5 years between the end of 1988 and the end of 1993, the growth of bank proprietary funds in terms of the value of assets managed by the funds was much greater than the growth of the industry as a whole. As shown in table 2.1, between December 31, 1988, and December 31, 1993, the value of assets managed by bank proprietary funds grew from about $46 billion, or 6 percent of the industry total, to about $219 billion, or 11 percent of the industry total. Although banks greatly increased their sales of mutual funds to retail customers during the 5 years, the increase in their sales to institutional customers was even greater. Retail sales grew by 324 percent, and institutional sales grew by 443 percent. Nearly $119 billion of the $219 billion (54 percent) of the assets in bank proprietary funds at year-end 1993 were in funds marketed primarily to institutional customers. In contrast, only about 10 percent of the assets in nonproprietary funds were marketed primarily to institutional customers.

The number of proprietary funds offered by banks also grew faster than the industry as a whole. As shown in table 2.2, as of December 31, 1988, there were 317 bank proprietary funds, representing about 13 percent of the 2,372 total mutual funds in existence at that time. 
By December 31, 1993, banks offered 1,415 proprietary mutual funds, or 24 percent of the industry total of 5,851 funds. Table 2.2 also shows that, while more than one-half of the assets in bank funds were in money market funds, the greatest growth, both in the number of funds offered and assets managed, was in taxable fixed-income (bond) funds, followed by equity funds.

Banks and Thrifts Have Become Major Retailers of Nonproprietary Funds

In addition to sales of proprietary funds, banks and thrifts also have become major sales outlets for nonproprietary funds. These are funds managed by an independent fund company and sold by the bank or bank affiliate. Nonproprietary funds are also available to investors outside the bank through an unaffiliated broker-dealer. According to data compiled by ICI, 1,780 nonproprietary funds were available through the bank channel (banks and thrifts) at the end of 1993. This represents a 62-percent increase from the 1,100 funds available as of the end of 1991. ICI’s data also show that new sales of fixed-income and equity funds through the bank channel rose from $28.1 billion in 1991 to $67.5 billion in 1993. However, because sales of fixed-income and equity funds through nonbank channels also rose considerably, the percentage of sales through the bank channel to total sales increased only slightly, from 13 percent in 1991 to 14 percent in 1993.

In addition, ICI found that about 55 percent of banks’ sales of fixed-income and equity funds in 1991 was attributable to sales of nonproprietary funds. This figure rose to 59 percent in 1992, then dropped to 51 percent in 1993. An ICI official told us that the increase in bank sales of nonproprietary funds in 1992 was caused by a strong demand for equity funds—a segment of the market in which bank proprietary funds were not as well represented as were nonproprietary funds. In 1993, there was a surge in demand for fixed-income funds, a segment in which bank funds were well represented. 
Also, by 1993 banks had introduced more equity funds. As a result, bank sales of proprietary funds increased in 1993 compared to their sales of nonproprietary funds.

ICI’s data show that, as of the end of 1993, the assets of funds attributable to bank sales were about $298 billion, or 14.2 percent of the $2.1 trillion in total mutual fund assets. This figure includes both proprietary and nonproprietary funds sold through banks and thrifts. In 1991, the comparable percentage was 11.6 percent, and in 1992, 13.8 percent. In breaking down the total between money market funds and long-term funds (fixed-income and equity funds), ICI found that in 1993, nearly 29 percent of all money market fund assets were attributable to bank and thrift sales. Less than 9 percent of long-term fund assets were attributable to bank and thrift sales, indicating the relatively strong presence of banking institutions in the money market fund area. However, the 9 percent of long-term fund assets attributable to bank and thrift sales at year-end 1993 represents a near doubling of the comparable figure at year-end 1991.

Larger Institutions More Likely to Have Fund Sales, but Smaller Institutions Are Increasingly Entering the Market

On the basis of the responses to the questionnaire we sent to a nationwide sample of banks and thrifts, we estimate that nearly 17 percent, or about 2,300 of the approximately 13,500 banks and thrifts in the United States, were offering mutual funds for sale as of the end of 1993. The results also indicated that the larger the size of the institution, the greater the likelihood that it had a mutual fund sales program. About 74 percent of the banks with $1 billion or more in assets had mutual fund sales programs, but only 11 percent of the banks with assets less than $150 million sold mutual funds. 
Similarly, about 60 percent of thrifts with assets of $1 billion or more had sales programs, but only about 3 percent of thrifts with assets less than $100 million sold mutual funds. Our data also show that while larger institutions are more likely to sell mutual funds, smaller institutions are increasingly introducing mutual fund sales programs. We estimate that about 49 percent of all banks and thrifts that had mutual fund sales programs began selling funds within the last 2 years. About 74 percent of those banks entering the mutual fund sales arena over the previous 2 years had assets of less than $250 million.

Customer Retention and Fee Income Are the Main Reasons Banks and Thrifts Said They Sell Mutual Funds

About 94 percent of the banks and thrifts responding to our questionnaire cited retention of customers as of great or very great importance in their decisions to begin selling mutual funds to their retail customers, and about 49 percent reported that fee income was of great or very great importance in their decisions. However, fee income may become more important after sales begin. Discussions we had with bank officials showed that once the mutual fund sales program is established, the objectives of the program may broaden to include the generation of fee income. For example, an official of one very large bank with a large and widely marketed family of proprietary funds told us that initially the bank began offering uninsured investments, including mutual funds, as a defensive measure to retain customers. The bank now offers a full line of investment products and recognizes this as a key customer service and revenue-generator. A July 1994 survey by Dalbar Financial Services, Inc., of Boston, Massachusetts, a mutual funds research and consulting firm, confirmed that fee income becomes more important as a mutual fund sales program matures. 
Dalbar’s survey of over 200 bank executives indicated that while almost half of the respondents said they got into the business to retain customers, only 36 percent said that was still their top priority. In contrast, about 30 percent of the bankers said that they were now in the business to increase fee income, up from 19 percent who said that was the reason they got into the business in the first place.

Inadequate Disclosure of Risks Associated With Mutual Fund Investing

In response to the rapid growth of sales of mutual funds by banks and concerns that bank customers may be confused or ill-informed about the differences between mutual funds and traditional bank products, the federal banking regulators have increased their regulatory and supervisory oversight of banks’ mutual fund sales activities. Our visits to banking institutions demonstrated the need for this increased emphasis as well as for continued vigilance by regulators. We estimate that about one-third of the institutions that sold mutual funds in the 12 metropolitan areas we sampled fully complied with the bank regulators’ guidance on disclosing the risks of investing in mutual funds. Further, about one-third of the institutions did not clearly distinguish their mutual fund sales area from the deposit-taking areas of the bank as stated in the guidance. Many banking institutions paid employees in the deposit-taking areas to refer customers to mutual fund sales representatives. During our visits to the institutions we found that these employees complied with statements in the guidance about not providing investment advice. However, some of the sales literature we were provided did not clearly and conspicuously disclose the risks of investing in mutual funds. 
Concerns About Bank Sales of Mutual Funds

As banking institutions have become major retailers of mutual funds, regulators, Congress, and the public have become increasingly concerned that investors who are likely to purchase mutual funds through a bank or thrift may not fully understand the differences between these investments and traditional bank savings products, such as certificates of deposit and money market deposit accounts. In particular, there is concern that sales of mutual funds in a bank lobby or through bank employees may mislead customers into believing that mutual funds are federally insured. Further, bank customers may not understand that even mutual funds that appear to be conservative investments, such as government bond funds, may be subject to fluctuations in value and could involve loss of principal.

In November 1993, SEC released the results of a survey taken to determine the degree to which investors understand the risks associated with mutual funds. The survey was limited to 1,000 randomly selected households, 47 percent of which reported that they owned shares of mutual funds. The survey results indicated that confusion about the risks of investing in mutual funds was not limited to those who purchased mutual funds through a bank. For example, 66 percent of the investors in the survey who bought money market mutual funds through a bank and 41 percent of all holders of mutual funds incorrectly believed that these funds are federally insured. In addition, 39 percent of all mutual fund holders and 49 percent of those who purchased a mutual fund through a bank incorrectly believed that mutual funds purchased from a stockbroker are federally insured. 
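As a rough guide to how precisely percentages from a 1,000-household survey estimate the underlying population, the standard margin-of-error formula for a sample proportion can be applied. The sketch below is illustrative only; the subgroup size and the assumption of simple random sampling are ours, not details reported by SEC.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95-percent margin of error for a sample proportion p
    observed in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# 41 percent of all fund holders believed money market funds were
# federally insured. The full survey covered 1,000 households; fund
# holders were 47 percent of them, or roughly 470 respondents.
full_sample = margin_of_error(0.41, 1000)   # about +/- 3 percentage points
holders_only = margin_of_error(0.41, 470)   # about +/- 4.4 percentage points
```

Percentages drawn from smaller subgroups, such as those who purchased through a bank, carry correspondingly wider margins than the full sample.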
Federal Banking Regulators Have Issued Guidance on How Mutual Fund Sales Are to Be Conducted

Between June and October 1993, each of the four banking regulators—OCC, FRS, FDIC, and OTS—issued guidance to the institutions they regulate concerning how sales of mutual funds and other nondeposit investment products should be conducted. In February 1994, the four regulators jointly issued the “Interagency Statement on Retail Sales of Nondeposit Investment Products.” This new guidance superseded the guidelines previously issued and unified the guidance to banks and thrifts on the policies and procedures that they should follow in selling mutual funds and other nondeposit investment products.

The interagency statement contains guidelines on disclosures and advertising, the physical setting and circumstances for bank sales of investment products, qualifications and training of sales personnel, suitability and sales practices, compensation practices, and internal control systems. In particular, it emphasizes that banking institutions are to ensure that bank customers are made aware that the products (1) are not FDIC-insured; (2) are not deposits or other obligations of the institution; (3) are not guaranteed by the institution; and (4) involve investment risks, including possible loss of principal. The statement applies to bank employees as well as employees of either an affiliated or unaffiliated third-party broker-dealer when the sales activity occurs on the premises of the institution. It also applies to sales resulting from a referral of retail customers by the institution to a third party when the institution receives a benefit from the referral. With regard to qualifications and training of sales personnel, the guidance states that if bank personnel sell or recommend securities, the training they receive should be the substantive equivalent of that required for personnel qualified to sell securities as registered representatives under the securities laws. 
Some Institutions Did Not Adequately Disclose the Risks of Mutual Fund Investing

Oral disclosure of the risks of mutual fund investing is very important in making sure that customers fully understand the nature of these investments. On the basis of our visits to 89 banking institutions in 12 metropolitan areas, we estimate that about 32 percent of the institutions in these areas that sell mutual funds completely disclosed the risks associated with investing in mutual funds in accordance with the banking regulators’ guidance. In addition, disclosure of the risks of investing in bond funds was inadequate, when compared to the guidance, at about 31 percent of the institutions. However, there were no misleading references to Securities Investor Protection Corporation (SIPC) insurance during our visits.

Sales Personnel at Many Institutions Did Not Orally Disclose the Risks Associated With Investing in Mutual Funds

The most important difference between a bank’s mutual fund investments and deposits is the risk to the investor. Bank depositors’ accounts are insured up to $100,000 by FDIC. Mutual funds are not insured against market loss and, consequently, are more risky to the investor. The guidance issued by the banking regulators states that retail customers must be clearly and fully informed about the nature and risks associated with nondeposit investment products. Specifically, when nondeposit investment products are either recommended or sold to retail customers, the disclosures must specify that the product is (1) not insured by FDIC; (2) not a deposit or other obligation of the institution; (3) not guaranteed by the institution; and (4) subject to investment risks, including possible loss of the principal amount invested. 
The interagency guidance states that these disclosures should be provided to the customer orally during any sales presentation, orally when investment advice concerning nondeposit investment products is provided, orally and in writing prior to or at the time an investment account is opened to purchase these products, and in advertisements and other promotional materials. In addition, guidance issued by NASD in December 1993 stated that its bank-affiliated members must develop procedures that require registered sales persons to reiterate to customers, in all oral and written communications, the material differences between insured depository instruments and investments in securities that carry risk to principal. The NASD guidance specifically noted that advertising and sales presentations should disclose that mutual funds purchased through banks are not deposits of, or guaranteed by, the bank and are not federally insured or otherwise guaranteed by the federal government. The interagency guidance emphasizes that bank customers should clearly and fully understand the risks of investing in mutual funds. Therefore, we tested whether the sales representative made the disclosures called for in the interagency guidance without our prompting. We found that sales personnel at many of the institutions we visited in our survey of bank sales practices did not fully comply with the disclosure requirements. As shown in figure 3.1, sales personnel at an estimated 32 percent of the banks and thrifts we visited disclosed all four of the critical facts concerning the nature and risks associated with mutual fund investments during their sales presentations, and less than half (43 percent) disclosed at least three of the four risks. At the other end of the disclosure spectrum, sales personnel at 19 percent of the institutions did not mention any of the four risks. 
Neither Bank Employees Nor Broker-Dealer Employees Adequately Disclosed Risks

Because of the small size of our sample, none of the differences in performance between bank and broker-dealer employees that could be identified are statistically significant. Less than half of each group made all the disclosures called for in the guidance. For example, an estimated 44 percent of bank employees disclosed all four risks, compared with 32 percent of broker-dealer employees. Similarly, about 6 percent of bank employees failed to make any of the disclosures called for in the guidance, and 18 percent of the broker-dealer employees made no disclosures.

Disclosure of Bond Fund Risks Was Inadequate at Over 30 Percent of the Institutions

Investors who purchase mutual funds through a bank or thrift are likely to be more conservative than those who purchase mutual funds elsewhere and to be more interested in purchasing a bond fund because they believe these funds are relatively safe. However, the prices of bonds and bond mutual funds are affected by changes—or the expectations of changes—in interest rates. In general, the value of bonds and bond mutual funds moves in the opposite direction of interest rates. If interest rates rise on new bonds, the prices of older ones decline. Thus, investors who own shares of bond mutual funds could find that the value of their investment is worth less than they paid for it if interest rates go up after they purchased the funds. NASD has recognized that investors with deposits, such as maturing certificates of deposit, may be interested in purchasing bond mutual funds because of their higher yields, but they may not be aware of the risks posed by these investments. 
In December 1993, NASD told its members that they “...have a significant obligation in their oral as well as their written communications to provide customers, seeking non-depository alternatives to depository accounts, with full and fair disclosure of the material differences between the products, especially the greater degree of risk to capital that the customer may experience.” With regard to bond funds, NASD stated that investors should receive clear disclosures that although such funds may pay higher rates than certificates of deposit, their NAVs are sensitive to interest rate movement, and a rise in interest rates can result in a decline in the value of the customer’s investment.

In our visits, we wanted to determine whether the salesperson fully explained the effect of interest rate fluctuations—either up or down—on the value of the underlying bonds in a bond mutual fund and, consequently, the value of the fund shares. We estimated that at 66 percent of the institutions, sales personnel explained the effect of interest rate movement on the value of the underlying bonds in the fund and the value of the fund shares. At 31 percent of the institutions, the explanations were either nonexistent or unclear to us. About 3 percent of the institutions visited did not sell bond funds. The following excerpts from our visit notes illustrate the range of explanations we received:

“He explained that although bond funds are more conservative, they are still exposed to risk; especially as interest rates rise, prices of bond funds decline. He stressed that before recommending any particular fund he would need to discuss our personal financial information.”

“Early in his presentation, the representative discussed in general terms the impact of the movement of interest rates on bond values. When asked about the safety of bonds, he provided more detail on the relationship of bond values and changes in interest rates. He used an example that clearly illustrated the relationship along with discussing the impact of the recent decision by the Federal Reserve to raise interest rates on current bond values.”

“The sales representative did a very good job of explaining the effect of interest rate fluctuations on the value of bonds. The representative put no particular emphasis on either stock or bond funds. She clearly explained the difference in terms of risk and discussed the effect of interest rate fluctuations on bond funds early on.”

“He said a lot about bond funds, but was very unclear. If he made this relationship, I missed it. He said bonds had higher yield, but were more volatile. He also said that they were FDIC insured ’like CDs’.”

“The salesperson provided very little information on bond funds other than that they are ‘safe’ relative to stocks. She stressed that all of the bank’s funds are safe because they are relatively conservative funds.”

There Were No Misleading References to SIPC Insurance

A critical part of the disclosure issue is the use of potentially misleading or confusing information concerning FDIC insurance coverage for mutual fund investments. The banking regulators’ guidance states that when any sales presentation involves reference to insurance coverage by any entity other than FDIC, such as SIPC, a state insurance fund, or a private insurance company, the customer must be provided with clear and accurate explanations to minimize any possible confusion with FDIC insurance. Further, the guidance states that such representations should not suggest or imply that any alternative insurance coverage is the same as or similar to FDIC insurance. In our visits, we did not observe any instances of confusing or misleading references to SIPC during the sales presentations. 
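The inverse relationship between interest rates and bond values that salespersons were expected to explain can be illustrated with a simple present-value sketch. The bond terms and rates below are hypothetical figures chosen for illustration, not amounts drawn from our review.

```python
def bond_price(face, coupon_rate, market_rate, years):
    """Price a bond with annual coupons by discounting its cash flows
    at the prevailing market rate."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t
                     for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A hypothetical 10-year, 6-percent coupon bond sells at par ($1,000)
# when market rates are also 6 percent...
at_par = bond_price(1000, 0.06, 0.06, 10)      # ~1000.00
# ...but if market rates rise one percentage point, the same bond is
# worth about 7 percent less.
after_rise = bond_price(1000, 0.06, 0.07, 10)  # ~929.76
```

A fund holding many such bonds would see its net asset value decline in roughly the same proportion, which is the loss of principal the disclosure guidance is meant to convey.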
Distinction Between Mutual Fund Sales Areas and Deposit-Taking Areas Was Not Always Clear

Selling or recommending mutual funds or other nondeposit investment products on the premises of a depository institution may give the impression that the products are FDIC-insured or are obligations of the depository institution. To minimize confusion, the guidance states that sales or recommendations of nondeposit investment products on the premises of the institution should be conducted in a physical location distinct from the retail deposit-taking area. In situations where physical considerations prevent sales of nondeposit products from being conducted in a distinct area, the institution has a responsibility to ensure that appropriate measures are in place to minimize customer confusion.

In our visits to banking institutions, we evaluated what measures had been taken to clearly separate their retail deposit-taking areas, such as teller windows and new account desks where accounts could be opened and deposits made, from the area where nondeposit investment products were sold. As part of our visits, we observed the physical layout of the bank to ascertain whether the bank clearly distinguished its mutual funds/investment services sales area from its traditional banking activities area. In some cases, we were directed to separate offices located in another building, where we also evaluated the physical layout of those offices. We looked for such things as partitions, roping, separate cubicles, floor space, and glass walls. We also looked for signs and other means of visible communications to differentiate the area from the traditional banking activities. At the end of the visit, we evaluated the extent to which the facilities appeared to clearly separate the areas for mutual fund sales activities from traditional banking activities. The following excerpts from our visit notes illustrate what we observed:

“There was no separation of mutual funds and banking activities. The sales representative sat at an unmarked desk in the middle of the bank floor. There were no signs present. However, her business cards were on the desk. All brochures of mutual fund activities were in her desk drawer. No signs indicating non-FDIC insured, non-bank product, or potential loss of principal.”

“There was nothing at all to indicate mutual fund sales. No signs, no posters, no brochures, nothing. In fact, we thought the sales area would be in the adjacent loan section, where we were told to enter. But there were no signs there. The only clue was a sign on the person’s desk saying that he was a registered representative for a company. For the entire time we were in the bank until we met with him, we could not have known that they sold mutual funds. The mutual fund sales desk was co-located in an area offering traditional banking activities such as new accounts and customer service. The mutual fund sales desk did not contain any signs or displays to distinguish it from other banking activities. In fact, the mutual fund sales desk was located next to the main bank reception desk near the front door of the bank. (All desks were separated by 3-foot partitions).”

“The bank floor space was extremely limited. Desks were fairly close together—all bank activities were in close proximity to one another. Although the space was limited, there was a hanging sign clearly marked ‘Investment Services.’ There was one sign on the desk, approximately 10” x 12”, displaying the proprietary fund, which stated non-FDIC insured and not guaranteed by the bank. This information was at the bottom of the sign and readable; the size of the print was fine. A kiosk next to the desk also identified the same information. Overall, disclosure was fairly clear to a new customer.”

“Two desks were located at the far end of the lobby approximately 25 feet from the teller windows. They were the only 2 desks in that space—one desk belonged to the mutual fund sales representative and the other to his assistant. Both desks faced the lobby, with the representative’s desk on the right if one were looking at the mutual fund area. To the right of the representative’s desk was a very large (3’ x 5’), lighted sign indicating the sale of mutual funds. A rack of mutual fund brochures was to the right of the sign. No other bank activities were near the mutual fund area.”

Roles and Responsibilities of Employees in Deposit-Taking Areas Generally Complied With Regulatory Guidance

The banking regulators’ guidance states that “in no case” should tellers and other employees, while located in the routine deposit-taking area, such as the teller window, make general or specific investment recommendations regarding nondeposit investment products, qualify a customer as eligible to purchase such products, or accept orders for such products, even if unsolicited. However, tellers and other employees who are not authorized to sell nondeposit investment products may refer customers to individuals who are specifically designated and trained to assist customers interested in the purchase of such products.

Most of the banks and thrifts that sold mutual funds indicated in their responses to our questionnaire that they limited their employees—tellers, other branch employees, and bank and branch managers—to referring customers to designated nondeposit investment sales personnel. The activities of bank tellers were the most restricted—only about 3 percent were permitted to do anything other than refer customers to designated investment sales personnel. Other bank branch employees who were not licensed to sell securities, such as those who open new accounts and process loan applications, and branch managers were less restricted—about 13 percent of bank branch employees and 18 percent of branch managers were allowed to perform sales activities other than refer customers to designated sales representatives. 
With regard to the specific activities that bank and thrift employees are allowed to perform, about 1 percent of the banks and thrifts that sold mutual funds reported that they allowed tellers to discuss the investment needs of the customer and noninsured products available through the institution; about 8 percent of other branch employees and 12 percent of branch managers were permitted to perform this function. Almost none of the institutions reported that they permitted tellers or other branch employees to suggest that a customer should invest in a specific investment product. Less than 2 percent reported that they allowed branch managers to offer specific investment advice to customers. NASD noted that if these activities were permitted to occur in the deposit-taking area of the bank they would appear to violate the interagency guidance. In addition, if a bank broker-dealer is involved and the bank employees performing these activities are unregistered, current NASD rules, which prohibit unregistered persons from providing investment advice, would also appear to be violated. In our visits to banks in 12 metropolitan areas, we found that most banks and thrifts were limiting the activities of personnel in the deposit-taking area of the bank. We found that only 1 percent of bank tellers we met discussed investment needs in general or noninsured products available through the bank. None of these discussions related to specific investments. The guidance permits institutions to pay tellers and other bank employees who are not authorized to sell investment products nominal, one-time, fixed dollar fees for each referral to a sales representative whether or not a transaction takes place. SEC, however, has taken the position that referral fees to financial institution personnel who are not qualified to sell investment products should be eliminated. 
According to SEC, because investors who purchase securities on the premises of a financial institution may not be aware that the securities are not guaranteed by that institution or by the federal government, the payment of referral fees creates an inappropriate incentive for unqualified bank employees to offer unauthorized investment advice to their customers. In addition, in December 1994, NASD issued a notice requesting comment on proposed amendments to its rules governing broker-dealers operating on the premises of financial institutions. Under these proposed rule changes, broker-dealers would be prohibited from making any payments, including referral fees, to individuals employed by the financial institution who are not registered representatives of the broker-dealer. As of June 1995, NASD had completed its review of 284 comment letters received on its proposal, and the letters were being considered by NASD’s bank broker-dealer committee.

About 43 percent of the institutions that responded to our questionnaire indicated that they compensated at least one of the following groups with referral payments: tellers, other unlicensed branch employees, and bank and branch managers. According to officials of several banking institutions, these payments are typically for $5 or $10 and not contingent on whether a sale is actually made.

Proprietary Fund Sales Literature Generally Contained Key Disclosures but Presentation Was Not Always Clear

Our evaluation of proprietary fund sales literature obtained from the banks we visited showed that the great majority of documents contained disclosures of the risks of investing in mutual funds. However, in some cases the disclosures were not clear and conspicuous. Under SEC rules, fund advertisements and sales literature may not be materially misleading. Money market funds, in particular, must disclose prominently that their shares are not insured or guaranteed by the U.S. 
Government, and that there can be no assurance that the fund will be able to maintain a stable net asset value of $1.00 per share. In addition, SEC requires disclosure by bank-sold and bank-advised funds that their shares are not deposits or guaranteed by the bank or insured by any U.S. government agency. Mutual fund advertisements and sales literature are required to be filed with NASD (if the fund’s shares are sold by an NASD member) or with SEC. According to SEC, as a practical matter, most fund ads and sales literature are filed with NASD rather than with SEC. NASD’s advertising department is to review advertisements and sales literature for compliance with both SEC’s rules and the NASD Rules of Fair Practice. In addition to the requirements of the securities regulators, the banking regulators’ guidance states that advertisements and other promotional and sales material about nondeposit investment products sold to retail customers of depository institutions should conspicuously disclose that these products are not insured by FDIC; are not a deposit or other obligation of, or guaranteed by, the institution; and are subject to investment risks, including possible loss of principal. When we visited banks, we obtained sales literature for proprietary funds, which we analyzed to determine if it contained the required disclosures. In total, we analyzed 26 documents that we obtained at 15 banks. All of the documents we reviewed stated that the funds are not insured by FDIC and not guaranteed by the bank. All but three documents cautioned that mutual fund investments are subject to investment risks, including loss of principal, and all but four disclosed that mutual fund investments are not bank deposits. We also reviewed the literature to determine whether it complied with the interagency guidance instructions that the risks be presented in a conspicuous, clear, and concise manner. 
We did this by looking at the placement of the disclosures, the size of the print used, the segregation of information in the literature as it pertains to FDIC-insured and noninsured products, and by making judgments about whether any obviously misleading or confusing information was presented. We were particularly concerned with whether any of the sales or marketing brochures suggested or conveyed any inaccurate or misleading impressions that the mutual funds were insured products or guaranteed. In our subjective evaluation, nearly half of the sales literature had disclosures that were conspicuous and readable to a great or very great extent. However, we characterized about 15 percent of the literature as having little or no success in achieving this objective. None of the advertising and sales brochures describing mutual fund products that we reviewed had the FDIC-insured logo or “Member FDIC” imprinted on them.

“The disclosure information on risks is located at the bottom of the back page in very small print. It is very difficult to locate and read. In general, I do not believe the brochure gives proper emphasis to the fact that these funds are not insured by FDIC and may mislead someone into thinking that these funds are backed or guaranteed by the bank. All four brochures have the disclosure statement in small print on the back cover. There is no further discussion of those points in the literature and, in my opinion, the point could be easily overlooked.”

“The brochure we received on behalf of the bank itself was an introductory booklet with application materials. This material all contained the required disclosures basically, although the disclosures do not state lack of FDIC insurance specifically, and disclosures are very small and are placed at the very back, bottom of the page. Very poor job of disclosing risks and uninsured nature.
Also, much information is given in the brochure that would give the impression of a very safe, almost guaranteed investment, and very high returns.”

“One possible misleading or confusing statement is that the bank’s name is used on the cover without clearly identifying it as an investment instrument of the securities firm, and not the bank. Another piece of literature in the packet states that the funds are managed by investment professionals at the bank. Finally, the one page disclosure form is covered by other material and is the last item on the right-hand side. However, the disclosure form does have a section for the investor’s signature. The disclosure form is very effective, but it was the last item placed in the information packet. Nowhere in the first five plus pages was there any mention of the four required elements. A smaller brochure within the information packet included a statement on the back cover in very small print.”

In September 1994, OCC released the results of a review it did of materials used by national banks in the sale of mutual funds and annuities. The review included about 8,500 documents that were voluntarily submitted by over 700 banks. The review identified many documents that were not consistent with the interagency statement. Problems uncovered by OCC, together with OCC’s advice on how the problems should be corrected, included the following:

Conspicuousness: Not all documents met OCC’s standards—disclosures in type at least as large as the predominant type and boxed, bolded, or bulleted if they appear other than on the cover or at the beginning of the relevant portion of a document. As a result of its review, OCC determined that disclosures on the back of documents were not conspicuous. Also, OCC now encourages banks to make the key disclosures in type that is larger and bolder than the predominant type in the document.
Key disclosures: OCC found that some documents did not include the disclosures that the product was not FDIC-insured, was not a deposit or obligation of the bank, was not guaranteed by the bank, and could result in the possible loss of principal. In some cases, the agency told the affected banks that they could correct the problem by adding stickers that conspicuously provided the disclosure. In other cases, such as when the document contained qualifying remarks that limited the effectiveness of the disclosure, banks were advised to stop distributing the documents in question.

Fees: Some documents did not disclose applicable fees, penalties, or surrender charges. OCC counseled banks to make sure that fees were disclosed to customers and suggested that banks develop suitable written acknowledgement forms.

SIPC insurance: Some documents contained incomplete or confusing references to SIPC insurance. OCC told banks that they could correct these problems by using printed supplements that provide a more detailed description of SIPC coverage.

Relationships: Some documents did not disclose an advisory or other material relationship between the bank or an affiliate of the bank and the mutual fund whose shares were the subject of the document. Banks were reminded that such relationships should be disclosed.

Out-of-date forms: Some banks were using documents supplied by third-party vendors that were not the most current version provided by that vendor and did not contain all of the disclosure messages required by the interagency statement. OCC advised banks to remove and replace outdated forms and establish systems for controlling documents.

In commenting on a draft of this report, OCC stated that it is now reviewing sales-related documents as part of its regular on-site examinations and that it is finding that banks have improved their materials.
Some Institutions Provided Incentives for Selling Proprietary Funds

According to the banking regulators’ guidance, personnel who are authorized to sell nondeposit investment products may receive incentive compensation, including commissions. The guidance cautions that incentive programs should not result in unsuitable recommendations or sales to customers. It makes clear that sales personnel in banks should obtain directly from the customer certain minimum information, such as his or her financial and tax status and investment objectives, upon which to base their investment recommendations. However, banks are not prohibited from providing sales personnel greater compensation for selling proprietary, as compared to nonproprietary, funds, nor are they required to disclose any such arrangement to the customer. In response to congressional interest in the extent to which incentives exist for sales personnel on bank premises to sell proprietary funds versus nonproprietary funds, our questionnaire asked banks and thrifts to describe their sales compensation policies. We also discussed the issue with a senior NASD official, who told us that it is a common and well-established practice in the industry for a sales representative to receive greater compensation, or a “better payout,” for selling the firm’s proprietary fund over third-party funds. Eleven percent of the banks and thrifts that sold proprietary funds stated that sales personnel in their institutions received greater compensation or special incentives for selling proprietary funds than for selling nonproprietary funds. Of the banks that described their sales compensation policies, most stated that proprietary funds rewarded the salesperson with a greater payout or additional revenue. For example, one bank told us that both proprietary and nonproprietary funds pay a commission of 3.2 percent, but an extra 15 percent is added to the amount of the commission for a proprietary fund.
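The payout arrangement that bank described implies a meaningfully higher effective commission rate on proprietary sales, because the extra 15 percent applies to the commission amount rather than to the sale amount. A minimal sketch of the arithmetic (the function name and parameters are illustrative, not drawn from the report):

```python
def commission(sale_amount, proprietary=False,
               base_rate=0.032, proprietary_bonus=0.15):
    """Commission under the payout arrangement one bank described:
    a 3.2 percent base commission on the sale, with an extra
    15 percent added to the commission itself for proprietary funds.
    """
    amount = sale_amount * base_rate
    if proprietary:
        # The bonus applies to the commission amount, not the sale amount.
        amount *= 1 + proprietary_bonus
    return amount

# On a $10,000 purchase, the nonproprietary commission is $320;
# the proprietary commission is $368, an effective rate of 3.68 percent.
```

Under this arrangement, a salesperson's incentive grows in proportion to the commission itself, which is why the regulators' guidance cautions that such programs should not drive unsuitable recommendations.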
We also asked the banks and thrifts that received our questionnaire to indicate whether or not sales personnel were expected to meet quotas or targets for the sale of proprietary funds. Eighteen percent of the banks that sold proprietary funds responded that sales personnel were expected to meet sales quotas or targets for proprietary fund sales. Most of the institutions that answered this question indicated that they expected proprietary fund sales to be a certain percentage of all mutual fund or bank product sales, or a specific dollar amount each month.

Customer Account Information Was Widely Used to Market Mutual Funds

The interagency guidance does not prohibit banks and thrifts from providing confidential financial information to mutual fund sales representatives. However, the guidance states that the institution’s written policies and procedures should include the permissible uses of customer information, but the guidance does not suggest what would or would not be permissible uses. Whether banking institutions should be allowed to share financial information on their customers with broker-dealers is a controversial issue. In March 1994, the North American Securities Administrators Association testified in favor of placing a prohibition on banks’ sharing confidential customer information with any affiliated securities operations. Also, NASD’s December 1994 proposed rule governing broker-dealers operating on bank premises included a provision that would prohibit its members from using confidential financial information maintained by the financial institution to solicit customers for its broker-dealer services. As of March 1995, NASD was still evaluating comments on its proposal. However, reports in banking industry journals indicated that many banks and banking regulators were strongly opposed to the rule.
They characterized the rule as unfair because (1) nonbank brokerages are permitted to supply their brokers with information about their customers’ use of bank-like services, such as certificates of deposit; and (2) the rule does not clearly define what is meant by confidential customer information. The latter issue could prove to be particularly difficult to resolve. For example, in commenting on a draft of this report, FDIC stated that it knows of no reliable definition of what customer information is confidential and what information is public. FDIC noted that, while banks must comply with laws concerning confidentiality of customer information, it did not want to prohibit the use of information that is otherwise available publicly or among a bank’s affiliates. To obtain information on the extent to which banks and thrifts were using customer information in their mutual fund sales programs, we asked the institutions that received our questionnaire (1) whether they had written policies and procedures that covered the permissible uses of customer account information; and (2) to describe how they marketed their mutual funds and, if applicable, what customer information was used. About 68 percent of the institutions that responded to the question stated that they had written policies or procedures that described how customer account information is to be used. About 40 percent of the institutions that sold mutual funds stated that they provided customer information, such as account balances or CD maturity dates, to mutual fund sales personnel. Almost half of these (49 percent) said they provided sales personnel CD maturity lists; others said their sales personnel had access to all customer data (24 percent); and a minority (15 percent) provided customer account balances to their sales personnel.
With regard to the use of other marketing techniques that are likely to make use of customer account information, 65 percent of our respondents used telephone calls to market their mutual funds; 63 percent targeted mailings to existing bank customers, such as holders of CDs; and 59 percent used inserts in monthly account statements.

Banking Regulators Have Developed Additional Examination Procedures

Although technically the interagency guidance does not have the same authority as a regulation, each of the banking regulators has developed additional examination procedures to evaluate bank and thrift compliance with the guidelines. In February 1994, about a week after the interagency guidance was issued, OCC issued examination procedures and an internal control questionnaire that specifically address sales of retail nondeposit investment products. OCC officials told us that these examination procedures are being used during the scheduled safety and soundness examination for each bank, which is either once every 12 months for large national banks, or once every 18 months for smaller national banks. The first examinations are to be a complete review of each bank’s mutual fund operations. OCC expected to complete these “benchmark” reviews by the end of 1995. Subsequent examinations could be less exhaustive depending on the results of the initial examinations. However, certain components of each review are mandatory, including separation of mutual fund sales activities, compliance with disclosure requirements, and review of suitability determinations. OCC officials told us that as of May 1995, their examinations have not shown any systemic problems with bank mutual fund sales programs. They have identified problems at individual banks, including failure to properly document suitability determinations and uncertainty about responsibilities for overseeing third-party broker-dealers, which they said these banks corrected.
They also said that OCC has taken no formal enforcement actions against any bank as a result of the bank’s mutual fund sales program. FDIC issued examination procedures for state nonmember banks participating in the sales of nondeposit investment products on April 28, 1994. According to an FDIC official, these procedures are being applied during the regularly scheduled safety and soundness examinations. FDIC’s procedures require that its examiners complete a questionnaire at each examination or visit in which the bank’s sale of nondeposit investment products is reviewed. The questionnaire includes a variety of questions on whether the bank is complying with various provisions of the interagency guidance. Copies of the completed questionnaires are to be forwarded to the responsible FDIC regional office, and significant deficiencies found during examinations are to be commented on in the examination report together with the recommended corrective action. An FDIC official told us that as of May 1995 FDIC had not taken enforcement actions against any bank with regard to the operation of the bank’s mutual fund sales program. However, FDIC examiners have found that written agreements between banks and third-party broker-dealers have, in some cases, not been complete. In addition, the examiners have found instances in which banks’ written policies governing their mutual fund programs needed to be more precise. An FDIC official said that FDIC has required banks to correct these problems. He also said that FDIC is conducting its own shopper visits to banks to test bank compliance with the interagency guidance. FDIC expects to complete these visits in late summer 1995 and expects to share the results of these visits with the other banking regulators. On May 26, 1994, the Federal Reserve issued examination procedures for retail sales of nondeposit investment products. 
The procedures were to be used during examinations of state banks that are members of the Federal Reserve System as well as during inspections of nonbank subsidiaries that engage in securities sales on bank premises. According to Federal Reserve officials, the examination procedures were being used during annual safety and soundness examinations. All state-chartered banks that are members of the Federal Reserve System are to be examined using the new procedures by the end of 1995. Federal Reserve officials said that no material abuses have been found, but in some cases better recordkeeping and training of employees were needed. With regard to training of bank employees, Federal Reserve examiners have found some instances in which untrained bank employees were performing duties, such as gathering detailed financial information from customers, that are reserved either to licensed broker-dealers or to bank employees with training equivalent to licensed broker-dealers. Federal Reserve officials said the Federal Reserve has been emphasizing to banks that employees who are not licensed by NASD are limited in the activities they can perform and has required banks to either appropriately train these employees or take measures to restrict their activities. In addition to the procedures incorporated into the annual safety and soundness examinations, the Federal Reserve conducted an in-depth review of three large banks’ mutual fund programs. Federal Reserve officials told us that they have also developed consumer education seminars for elderly investors that are to be provided at the 12 Federal Reserve banks. Further, they said they are conducting banker education conferences at the 12 Federal Reserve banks to promote banks’ understanding of and conformance with the Federal Reserve’s requirements for mutual fund sales. In April 1994, OTS issued guidelines for examining the securities brokerage activities of thrifts, including mutual fund sales.
OTS’ guidelines focus on determining the adequacy of internal controls in containing the level of risk presented to the thrift and minimizing potential customer confusion between FDIC-insured and non-FDIC-insured investment products. The procedures call for the examiners to review advertising and promotional material, disclosure policies, procedures on the use of customer information, compensation policies, referral fees and practices, training and qualification policies and procedures, and systems for ensuring that investment recommendations are suitable for a particular customer. OTS officials told us that no systemic problems had been found in OTS’ examinations of thrifts’ mutual fund programs as of May 1995.

Conclusions

At the time of our review, many bank and thrift institutions did not fully comply with the guidance issued by the banking regulators. As a result, customers of those banks and thrifts may not have had accurate and complete information about the risks of investing in mutual funds. In addition, institutions that were not following the guidance opened themselves to the possibility of private lawsuits, particularly under the securities laws, that could affect the safety and soundness of the institution. The banking regulators have recognized the importance of closely monitoring institutions’ mutual fund sales programs and have adopted procedures to be included in periodic safety and soundness examinations, which they are currently implementing.

Matter for Congressional Consideration

Because the banking regulators have, since the completion of our field work, adopted additional examination procedures to help ensure that banks provide customers accurate and complete information about the risks of mutual funds, we are not recommending changes to the regulators’ oversight practices at this time.
However, after the interagency guidelines have been in place long enough to provide data for trend analysis, Congress may wish to consider requiring that the banking regulators report on the results of their efforts to improve banks’ compliance with the interagency guidance.

Agency Comments and Our Evaluation

In commenting on a draft of this report, OCC, OTS, and the Federal Reserve noted that we visited banks less than a month after the bank regulatory agencies issued the interagency guidance. These banking agencies indicated that the deficiencies we noted may have been attributable to the fact that during this time the institutions were in the process of implementing new procedures, and the agencies had not yet implemented related examination procedures. OCC commented that it believes that bank practices have changed significantly since we completed our visits. OCC also commented that the conclusions, captions, and discussion in this chapter did not adequately distinguish between the adequacy of banks’ oral and written disclosures. OCC believed that our conclusion that disclosure was inadequate appears to refer to the oral disclosure requirements, and the description of banks’ written disclosure efforts did not support a conclusion of inadequate overall compliance. The Federal Reserve commented that since May 1994 its examiners have been confirming that state member banks are aware of, and making efforts to ensure that their sales programs are in conformance with, the guidelines. According to the Federal Reserve, in those few cases where its examiners have discovered deficiencies, the banks in question have taken voluntary corrective action to address the problems. Our visits to banks were made in March and April 1994. The timing of these visits was dictated by our desire to respond promptly to the Committees’ requests for information on the actual practices being followed by banks and thrifts in the sale of mutual funds.
As noted in the report, these requests were driven by concern that customers of banking institutions were confused about how mutual funds differ from insured deposit products. Although our visits occurred shortly after the interagency guidance was issued, each regulator had issued guidance in 1993 that banking institutions should have been following. This guidance largely paralleled the February 1994 interagency guidance. For example, on July 19, 1993, OCC released guidance to national banks that covered many of the same areas that were included, and strengthened, in the February 1994 guidance. The July 1993 OCC guidance called for banks to take steps to separate, as much as possible, retail deposit-taking and retail nondeposit sales functions. It noted that disclosure of the differences between investment products and insured bank deposits needs to be made conspicuously in all written or oral sales presentations, advertising and promotional materials, and statements that included information on both deposit and nondeposit products. Further, it recommended that banks ensure that their sales personnel are properly qualified and adequately trained to sell investment products. Similar guidelines were issued by the Federal Reserve in June 1993, by OTS in September 1993, and by FDIC in October 1993. While we believe that the results of our shopper visits to banks and thrifts accurately portray those banks’ mutual fund sales activities at the time of our visits, we also realize that the institutions’ activities may change over time as the regulators implement their new examination procedures to ensure that the institutions comply with the interagency guidelines. We also believe that compliance with the guidelines is essential to ensure that investors obtain accurate and complete information about mutual fund risks.
Thus, we believe that Congress may find it useful in exercising its oversight responsibilities to receive information on the banks’ compliance with the interagency guidelines after the banks and banking regulators have had sufficient time to fully implement their changes. Accordingly, we added a matter for congressional consideration suggesting that once the interagency guidelines have been in place long enough to provide sufficient data for trend analysis, Congress may wish to consider requesting the regulators to provide status reports on the results of their examination efforts. Such reports, for example, could be made a part of congressional oversight hearings. We disagree with OCC’s comment that the report captions do not clearly distinguish between oral and written risk disclosures. (See pp. 29 and 37, for example). Further, we tested the extent of oral disclosures because of the importance placed on these disclosures by the interagency guidance. We believe that this is an appropriate emphasis because customers are highly influenced by what they hear during sales presentations. Further, although we found that most sales literature contained the required disclosures, the disclosures were not always clear and conspicuous. This paralleled OCC’s own findings in its September 1994 review of national banks that sold mutual funds. (See pp. 39 and 40.) SEC commented that our testing of compliance with the interagency guidelines and discussion of the banking agencies’ examination procedures, as opposed to compliance with the federal securities laws and rules, appeared to place undue emphasis on the guidelines as a source of consumer protection in this area. SEC summarized the various means by which it and NASD regulate and oversee mutual fund sales practices of broker-dealers, including those operating on bank premises.
SEC also outlined the ways in which it and NASD regulate and oversee mutual fund disclosure documents, including registrations, prospectuses, advertising, and sales literature. SEC stated that although the banking regulators’ guidelines are useful, the federal securities laws remain the most important set of investor protection criteria applicable to mutual funds and sales practices of broker-dealers. NASD made similar comments, noting that although the interagency guidelines are directly enforceable over banks and bank employees, they do not provide the bank regulators with direct and equal regulatory authority over SEC-registered NASD member broker-dealers, including the authority to bring enforcement actions for serious violations. By using the interagency guidance as criteria to assess the sales practices being followed by banks and thrifts, we did not intend to minimize the importance of securities laws and regulations. Rather, we used the interagency guidance because it provided guidelines that applied to all mutual fund sales on bank premises, including indirectly to broker-dealers working under a contractual arrangement with a bank, and it contained bank-specific requirements that we wanted to test. In addition, we noted that the guidelines are similar in many respects to securities rules.

Expanded Role of Banks and Thrifts in the Mutual Fund Industry Raises Regulatory Issues

The current regulatory framework allows banking institutions to choose how to structure their mutual fund sales and advisory activities and, depending on that structure, how they are regulated. For example, banks can choose to sell mutual funds directly and be subject to oversight by the banking regulators, but not by securities regulators. However, most banks that sell mutual funds choose to do so through affiliates that are subject to the oversight of the securities regulators. Banking regulators also have issued guidance to banks that sell mutual funds through these affiliates.
This creates a potential for different regulatory treatment of the same activity and a potential for conflict and inconsistency among banking and securities regulators. Similar concerns arise for banks and thrifts that can carry out investment adviser activities either in the bank or thrift or in a separate affiliate, although—in this case—most institutions carry out such activities directly rather than in an affiliate. While the banking and securities regulators have been taking steps to better coordinate their efforts, additional coordination could help alleviate differences in regulatory treatment meant to protect customers who buy mutual funds from banks and thrifts.

Increase in Bank Mutual Fund Activities Has Raised Concerns About Adequacy of Current Regulatory Structure

When the Securities Exchange Act of 1934, the Investment Company Act of 1940, and the Investment Advisers Act of 1940 were enacted, the 1933 Glass-Steagall Act barred banks from engaging in most securities activities and limited bank securities activities to (1) underwriting and dealing in government securities, which were exempt from Glass-Steagall Act restrictions; and (2) providing brokerage services solely for customer accounts. Because banks were already subject to federal banking regulation, the securities laws exempted banks from the regulatory scheme provided for brokers and dealers and for investment advisers. However, over the last 2 decades the federal banking regulators and the courts have interpreted the Glass-Steagall Act in ways that allow banks to provide a wide range of brokerage, advisory, and other securities activities comparable to services offered by SEC-registered broker-dealers and investment advisers. Consequently, banks have rapidly expanded their presence in the mutual funds industry.
Because of the rapid increase in banks’ mutual fund activities, some Members of Congress and the securities regulators have expressed concern that the current regulatory framework and oversight and enforcement mechanisms have not kept up with changes in the market and may no longer be adequate to protect the interests of investors who purchase mutual funds through a bank. SEC has testified that eliminating the banks’ exemptions from registering as broker-dealers and investment advisers would result in better investor protection and allow uniform regulation of securities activities regardless of industry classifications. SEC contends that when banks sell mutual funds directly using their own employees, customers are not afforded the same level of protection as customers who make their purchases through a broker-dealer. Specifically, SEC makes the following arguments:

Guidelines issued by the banking regulators concerning retail sales of mutual funds are not regulations. Therefore, SEC believes they are not legally enforceable by the bank regulators or customers; are too general; do not contain sufficient provisions for training bank personnel, especially with regard to making suitability determinations; and raise potential problems of regulatory overlap and conflict with respect to registered broker-dealers that assist banks in the sale of securities products.

The banking regulators’ primary focus is not investor protection, but the safety and soundness of the institution. As a result, SEC believes bank regulators minimize their disclosure of enforcement actions to protect the bank from adverse customer reactions, in contrast to securities regulators, who make their enforcement actions a matter of public record to get maximum deterrent effect.
SEC also argues that the securities regulators are better trained and have more expertise in assessing suitability determinations; that is, ensuring that customers make investments that are compatible with their income, assets, and investment goals. SEC also testified that banks’ exemption from the Investment Advisers Act should be repealed for banks that advise mutual funds and that SEC should have the authority to regulate and inspect the mutual fund advisory and sales activities of banks. In addition, SEC has testified that when banks manage proprietary funds, there may be potential conflicts of interest between the funds and the bank’s other clients—conflicts that SEC may be unable to detect because of its lack of jurisdiction over bank investment advisers. In response to criticisms that their guidelines are inadequate, the banking regulators have argued that, in some cases, their guidance exceeds SEC and NASD requirements for nonbank mutual fund companies. For example, they say the guidance requires banks to disclose orally and in writing to potential customers that their mutual fund investments are not FDIC-insured and are subject to market fluctuations in value. Banks are required to ensure that customers sign written statements acknowledging that they understand the risks associated with mutual fund investments. By contrast, nonbank mutual fund customers are not required to sign written statements acknowledging the risks associated with mutual funds. Bank regulatory officials also reject the argument that the guidelines represent a less enforceable standard than SEC regulations. The bank regulators have informed banks that the adequacy of their mutual fund operations will be assessed on the basis of the new guidelines during the next scheduled safety and soundness exam. According to the regulators, they will bring any identified deficiencies in bank mutual fund operations to the attention of senior bank managers and directors. 
The managers and directors will be required to correct these deficiencies within a specified period of time. Failure to make needed improvements could result in a variety of enforcement actions, such as cease and desist orders or civil money penalties. Because of such possible sanctions, the regulators believe that bank managers will establish mutual fund sales operations that comply with the interagency guidelines.

Most Banks Choose to Sell Funds Through SEC-Registered Broker-Dealers

Under current laws and regulations, sales of mutual funds in banks can be made either by employees of the bank; by employees of an affiliate, subsidiary, or third-party broker working on behalf of the bank; or by “dual employees”—individuals who work for both the bank and a broker. If the salesperson is an employee of a broker or is a dual employee, he or she must be registered with NASD and is subject to SEC and NASD oversight. However, because the 1934 act exempts banks from being defined as a “broker” or a “dealer,” a bank can choose to use its own employees to sell mutual funds or other securities. These employees may do so without registering with NASD, and neither they nor the bank is subject to SEC and NASD rules and oversight. However, responses to our questionnaire showed that the vast majority of banks that sell mutual funds on their premises choose to do so through SEC-registered broker-dealers, either affiliates or subsidiaries of banks or third-party broker-dealers, rather than directly by unlicensed bank employees. As shown by table 4.1, only about 8 percent of banking institutions that responded to our questionnaire reported that only their own employees directly sold mutual funds to retail customers.
On the basis of these responses, we estimate that about 180 of the 2,300 banking institutions that were selling mutual funds to their retail customers at the end of 1993 did so directly using only their own employees. About 43 percent reported sales by “dual employees” of the bank or thrift (or its affiliate or subsidiary) and a registered broker-dealer, 29 percent through an affiliated or subsidiary broker-dealer organization, and 38 percent through a networking or leasing arrangement with a registered third-party broker-dealer. When we analyzed these results by bank size, we found that small banks were least likely to sell mutual funds directly with their own employees. Only 7 percent of banks with assets less than $150 million reported selling mutual funds exclusively with their own employees. In contrast, about 14 percent of banks with assets between $250 million and $1 billion responded that their own employees, rather than broker-dealers, sold funds at their banks. According to the American Bankers Association, banks that choose to offer brokerage services directly through the bank do so because they do not yet have sufficient business to justify the expense of employing a registered broker-dealer. As a result, some in the banking industry have asserted that eliminating banks’ exemption from registering as broker-dealers would unfairly penalize banks that had a small volume of brokerage transactions. To gather information on this issue, we contacted about 80 percent of the banks that reported selling mutual funds through their own employees to find out why they sold funds directly, rather than through a broker-dealer. 
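The estimate above can be reproduced from the survey figures as a back-of-the-envelope check (the rounding convention to the nearest ten is an assumption):

```python
# Back-of-the-envelope check of the report's estimate, using the survey figures above.
selling_institutions = 2300   # banking institutions selling mutual funds at end of 1993
share_direct_only = 0.08      # ~8% reported sales only by their own employees

direct_sellers = int(round(selling_institutions * share_direct_only, -1))
print(direct_sellers)  # -> 180
```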
They cited three reasons for selling funds directly through their own employees: (1) they wanted to maintain control over their relationship with their customers, rather than turn it over to a broker-dealer; (2) they did not do enough business to justify establishing an arrangement with a broker-dealer or setting up their own affiliate; and (3) they sold funds mainly as a convenience to their customers. Three of the banks we contacted sold proprietary funds, although one of these has since switched to selling funds through a third-party broker-dealer.

Regulatory Framework for Mutual Fund Sales in Banks Can Cause Conflict and Overlap Among the Regulators

Under the current regulatory framework, broker-dealers that operate on the premises of banks and thrifts are subject to regulation by SEC and indirectly to oversight by banking regulators. This can cause conflict over what rules these broker-dealers are to follow in conducting their mutual fund sales programs and can also cause duplication of effort and an unnecessary burden on the broker-dealer when the regulators carry out examinations of these activities. For example, in December 1994, NASD released for comment proposed rules governing broker-dealers operating on the premises of banking institutions. According to NASD, these rules are designed to fill a regulatory void by specifically governing the activities of NASD member bank broker-dealers who conduct a securities business on the premises of a financial institution. They differ from the banking regulators’ interagency guidance in several respects. First, the proposed NASD rules prohibit the payment of referral fees by the broker-dealer to employees of the financial institution. The interagency guidance permits payment of these fees. Second, the proposed rules place restrictions on brokers’ use of the bank’s or thrift’s customer lists that are stricter than the interagency guidance.
Specifically, the proposed NASD rules state that confidential financial information maintained by the financial institution cannot be used to solicit customers for the brokerage. This appears to rule out the use of information such as certificate of deposit maturity dates and balances. The interagency guidance requires only that the banking institution’s policies and procedures include procedures for the use of information regarding the institution’s customers in connection with the retail sale of nondeposit investment products. Third, the proposed NASD rules appear to place limits on the use of bank or thrift logos in advertising materials. For example, the proposed rules state that advertising and other sales materials that are issued by the broker-dealer must indicate prominently that the broker-dealer services are being provided by the broker-dealer, not the banking institution. Further, the financial institution may be referenced only in a nonprominent manner in advertising or promotional materials, solely to identify the location where broker-dealer services are available. In contrast, the interagency guidance requires only that advertising or promotional material clearly identify the company selling the nondeposit investment product and not suggest that the banking institution is the seller. NASD’s proposal has generated controversy in the banking industry. According to the financial press, some bankers have complained that the proposed NASD rules hold bank brokerages to standards that are higher than those for nonbank brokerages. They point out, for example, that, unlike bank brokerages, nonbank brokerages are not required to disclose that mutual funds are not federally insured.
In response, an NASD official said that when a customer deals with a brokerage in a bank, that brokerage has a higher responsibility to ensure that the customer understands the risk involved in investing in securities as compared to savings accounts or certificates of deposit. Another area of concern is the potential for overlapping examinations or examinations that may result in conflicting guidance. Under the current regulatory framework, a broker-dealer in a bank could be examined periodically by NASD to determine if it is in compliance with securities rules, by SEC if it is doing an oversight inspection of NASD or is doing an inspection for “cause,” and also by the banking regulators to determine if the bank is complying with the interagency guidance. Although we found that a number of steps have been taken to avoid overlapping and conflicting activities, some problems have not been resolved. For example, SEC is concerned that the banking regulators, particularly OCC, have begun to examine registered broker-dealers that sell securities in banks and plan to examine mutual funds advised by banks. SEC testified that because registered broker-dealers and mutual funds are already subject to regulation by SEC and NASD under the federal securities laws, imposing an additional layer of banking regulator examination and oversight is unnecessary and may result in firms receiving inconsistent guidance on compliance issues. Because of SEC’s concern, we reviewed examination guidelines issued by OCC, the Federal Reserve, and FDIC to determine the degree to which they required examiners to review broker-dealer records, especially those of third-party broker-dealers. OCC’s February 1994 guidelines for examination of retail nondeposit investment sales require that contracts between banks and broker-dealers provide bank examiners access to the records of third-party vendors (broker-dealers).
However, the emphasis of the guidelines is on determining whether the bank has exercised the proper management control over the third-party vendor, rather than on a specific examination of the vendor’s operations. For example, the guidelines state that when (1) preliminary examination findings clearly show that bank management has properly discharged its responsibility to oversee the third party’s operations, (2) only a few complaints have been filed against the vendor, and (3) the vendor’s reports to the bank are timely and properly prepared, examiner access to third-party records should generally be limited to reports furnished to bank management by the vendor. The guidelines are not clear, however, as to what actions examiners are to take if these conditions are not met, stating only that “After making a judgment about the effectiveness of the oversight of third party vendor sales, complete any other examination procedures that appear appropriate.” According to an OCC official, before OCC examiners conduct a bank inspection, they typically ask the bank to provide the results of the broker-dealer’s last NASD inspection. The NASD inspection report is to be reviewed to determine whether it raises any concerns about the bank’s mutual fund program. If the OCC examiners have concerns about the bank’s mutual fund program, they may conduct a limited inspection of the broker-dealer’s books and records. OCC may also direct the bank to hire an accounting firm to audit the broker-dealer if the limited OCC inspection identifies problems. This official said that OCC’s inspection approach is designed to avoid duplication by placing on the bank the responsibility for controlling and overseeing the broker-dealer’s operations. During its inspections, OCC is to check the adequacy of these controls and the bank’s oversight of broker-dealer compliance. According to the OCC official, OCC inspections of the broker-dealer should be a rare event if the bank exercises adequate oversight.
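The tiered approach the OCC official described can be sketched as a simple decision rule. All function and field names below are invented for illustration; this is not drawn from OCC guidance itself:

```python
# Illustrative sketch of the escalating OCC inspection approach described above.
# Names and structure are hypothetical, chosen only to mirror the narrative.
def occ_inspection_scope(oversight_adequate: bool,
                         few_complaints: bool,
                         reports_timely: bool,
                         limited_inspection_found_problems: bool = False) -> str:
    """Return the depth of review suggested by the tiered approach."""
    if oversight_adequate and few_complaints and reports_timely:
        # All three conditions met: rely on the vendor's reports to bank management.
        return "review vendor reports furnished to bank management"
    if not limited_inspection_found_problems:
        # Concerns exist: a limited look at the broker-dealer's books and records.
        return "limited inspection of broker-dealer books and records"
    # Problems found during the limited inspection: escalate to a directed audit.
    return "direct bank to hire accounting firm to audit broker-dealer"

print(occ_inspection_scope(True, True, True))
# -> review vendor reports furnished to bank management
```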
In commenting on a draft of this report, OCC reiterated that any inspections of third-party broker-dealers would be limited to pertinent books and records and would not be complete examinations. The Federal Reserve’s examination guidelines do not contain provisions that imply its examiners will review the operations of a third party in detail. The guidelines state that the examination procedures have been tailored to avoid duplication of examination efforts by relying on the most recent examination results or sales practice review conducted by NASD and provided to the third party. For example, the guidelines state that in making determinations about suitability and sales practices involving registered broker-dealers, Federal Reserve examiners should rely on NASD’s review of sales practices or its examination to assess the organization’s compliance with suitability requirements. The emphasis of FDIC’s examination guidelines is similar to the Federal Reserve’s. The guidelines state that examinations of banks that have contracts with a third party should focus on the agreement with the third party and the bank’s methods for determining the vendor’s compliance with bank policies and with provisions of the interagency statement. Banking and securities regulators have begun to take steps to better coordinate their efforts. In January 1995, NASD and the four banking regulators signed an agreement in principle to coordinate their examinations of broker-dealers selling mutual funds and other nondeposit investment products on bank premises. The agreement calls for the agencies to share examination schedules, for NASD to share its examination findings with the banking regulators, and for any violations of banking or securities laws to be referred to the appropriate agencies, among other matters. Also in January 1995, NASD agreed to establish a new committee for bank-affiliated brokerages.
This committee is to join 32 other standing NASD committees that represent specific interests; it is to recommend to the NASD board of governors rules and procedures for bank-affiliated brokerages and third-party brokerages that are doing business on bank premises.

Both SEC and Banking Regulators Have Responsibility for Bank Fund Investment Advisers

Many banks now provide investment advice to their own mutual fund families. Because the Investment Advisers Act of 1940 exempts banks from being defined as investment advisers, bank advisers do not have to register with SEC and are not subject to SEC regulations and oversight. As a result, when SEC inspects the records of a bank-advised fund, it does not have the authority to review certain records of the investment adviser that may be pertinent to an examination of the fund’s portfolio transactions. According to SEC officials, when a bank serves as the investment adviser to a mutual fund and is not registered with SEC, SEC is limited to reviewing only the activities of the adviser as the activities relate to the mutual fund. If, for example, the bank serves as the investment adviser to a mutual fund, a pension fund, and private trust funds, SEC can look at the bank’s activities only with respect to the mutual fund. SEC cannot review the records of the other funds or accounts to determine whether conflicts of interest exist or whether the investment adviser’s decisions disadvantaged the mutual fund in some manner in relation to the other funds the bank is advising. Banks nevertheless may establish a separate SEC-registered subsidiary or affiliate to provide investment advice to a mutual fund, or they may provide such advice directly. Although some banks have established such subsidiaries or affiliates for their mutual fund investment advisory activities, most provide this advice directly.
According to SEC’s records, 78 of the 114 (68 percent) banking organizations that provided investment advisory services to mutual funds as of September 1993 did so directly rather than through SEC-registered subsidiaries or affiliates. If the bank chooses to conduct its mutual fund investment advisory activities directly, these activities are overseen principally by the banking regulator responsible for supervising and examining that bank and by SEC to the extent bank advisory activities relate to mutual funds subject to the Investment Company Act. Banks that provide investment advice to their proprietary mutual funds are subject to examinations of these activities by the banking regulators. These examinations are carried out regardless of whether the investment advisory function is also subject to inspection by SEC. While the banking regulators’ examinations have traditionally focused on safety and soundness issues, rather than enforcement of securities laws, OCC is drafting guidelines for examination of mutual fund activities that indicate OCC examiners may attempt to determine whether bank and fund practices comply with the Investment Company Act of 1940. This concerns SEC because it believes such guidelines raise potential problems of conflict and overlap among the regulators. OCC officials told us that although the agency has been doing examinations of investment advisers for years as part of the trust examination process, the new examination guidelines will focus examiners’ attention more directly on potential conflicts of interest that can arise when banks advise mutual funds. These potential conflicts of interest may violate securities laws and could enrich fund advisers at the expense of fund investors. The Federal Reserve also examines investment advisers in state-chartered member banks and in subsidiaries of bank holding companies. 
If the investment adviser is the trust department of a state member bank, the examination is to be carried out as part of its examination of trust activities. If the investment adviser is a subsidiary of a bank holding company, on-site inspections are to be conducted as an integral part of bank holding company inspections. Although investment advisory subsidiaries of bank holding companies are required to register with SEC and are subject to SEC supervision and examination, the Federal Reserve’s guidelines note that such examinations are infrequent. Therefore, examinations by Federal Reserve Bank examiners are to be undertaken whenever they consider the investment adviser activities to be significant. Among the factors Federal Reserve examiners are to consider in deciding whether to schedule an examination of an investment advisory subsidiary of a bank holding company are volume and type of activity, date and results of previous Federal Reserve Bank and/or SEC inspections, and the extent of services provided to affiliated banks or trust companies. The Federal Reserve’s guidelines for inspections of investment advisory subsidiaries of bank holding companies state that these inspections are primarily focused on safety and soundness considerations and not on compliance with securities laws. The objectives of these inspections are to (1) determine whether the adviser’s organizational structure and management qualifications are satisfactory; (2) evaluate the adequacy of the adviser’s financial condition and internal controls; (3) review the appropriateness of the adviser’s investment practices; (4) determine whether the institution has adequate policies and procedures to prevent self-dealing and similar improper conflicts; and (5) evaluate compliance with bank holding company laws, regulations, and interpretations. 
According to an FDIC official, if an FDIC-regulated bank has an affiliate that provides investment advisory services to a proprietary mutual fund, that entity would be supervised and inspected by the Federal Reserve under the holding company inspection system. In addition, a small number of state nonmember banks provide investment advisory services to mutual funds through their trust departments. FDIC examiners are to inspect these advisers as part of FDIC’s overall trust and compliance examination program. The trust examination guidelines address a number of areas involving the investment advisers’ activities. Specifically, the guidelines focus on the advisers’ supervision and organization, operations controls and audits, asset administration, account administration, and conflicts of interest and self-dealing.

Eliminating Banks’ Exemptions Would Not Resolve All Problems

Under the current regulatory framework, many banks’ securities activities are subject to review by both the securities and banking regulators. As shown by the responses to our questionnaire, over 90 percent of institutions that sell mutual funds do so through SEC-registered and supervised broker-dealers. These broker-dealers are subject to review by NASD and SEC, who attempt to ensure investor protection through enforcement of the securities laws; and by the banking regulators, who, among other things, attempt to ensure that the institution is operating its mutual fund program in a safe and sound manner and in compliance with the interagency guidance. A similar situation exists in the regulation of investment advisers. We noted, for example, that even when the bank conducts its mutual fund advisory functions in a separate subsidiary, the Federal Reserve continues to conduct its own inspections of these subsidiaries. In addition, OCC is drafting examination guidelines that will call for assessing banks’ compliance with various provisions of the Investment Company Act of 1940.
To the extent that these examinations would be carried out at entities already subject to SEC oversight, banks and their affiliates may be subject to having the same activities examined by two sets of regulators. The securities regulators have proposed that the regulatory framework could be simplified if a system of functional regulation were adopted. Under a “pure” functional regulation system—regulation according to function and not according to entity performing the function—SEC and the other securities regulators would be responsible for ensuring that banks comply with the securities laws. The securities activities of banks would be conducted in separate subsidiaries and affiliates, and banking regulators would be precluded from conducting examinations of the securities subsidiaries and affiliates of banks, which would eliminate duplicative regulation and oversight. However, the Comptroller of the Currency has testified that under this framework, the banking examiners would be unable to properly assess whether the securities activities were affecting the safety and soundness of the bank because they would have to rely on reports from the functional regulator that could be too infrequent, insufficiently detailed, or insufficiently comprehensive to allow the examiners to make a determination. Eliminating banks’ exemptions from the securities laws would expand SEC’s authority to oversee banks’ securities activities and would appear to address SEC’s concerns that (1) investors are not adequately protected by the securities laws when retail securities sales are made directly by bank employees, and (2) it cannot fully examine the transactions of mutual fund investment advisers when the adviser is a bank. However, just eliminating the exemptions does not remove the potential for duplication and conflict between the banking and securities regulators because each group will continue to be involved in supervising banks’ securities activities.
Scope and Frequency of SEC’s Inspections Have Been Limited, but Resources May Be Increasing

In the past, SEC has had trouble keeping up with its existing workload because the size of its inspection staff has not kept pace with the explosive growth in the size and complexity of the mutual fund industry. As a result, the agency was forced to reduce the scope and frequency of its inspections over the past decade. The size of SEC’s mutual fund company inspection staff began to increase in fiscal year 1994, and the agency believes that with the additional staff it is adding in fiscal year 1995 and has requested for fiscal year 1996, it will be able to examine mutual fund companies and their advisers with reasonable frequency. However, if these additional resources are not approved or if the financial services industry continues to expand as it has in recent years, SEC may continue to face challenges meeting its responsibility to oversee mutual funds and their advisers. SEC’s inspections of investment companies and their related investment advisers are to be carried out by staff in SEC’s regional and district offices in accordance with general examination objectives that are established by SEC’s Office of Compliance Inspections and Examinations at the beginning of each new fiscal year. Each region is responsible for preparing an annual inspection plan that responds to these overall objectives.

Fiscal Years 1991 to 1993

SEC’s objective for inspecting investment companies and investment advisers during fiscal years 1991 through 1993 was to get the greatest dollar coverage with the limited staff available. With this in mind, SEC had a program for inspecting investment companies during this period that called for inspecting funds in the 100 largest fund families and all money market funds.
To the extent that time was available after SEC completed inspections of the 100 largest fund families and money market funds, SEC’s 1993 program called for its regions and districts to also inspect smaller fund families, with priority to be placed on inspecting families that had never been inspected. Moreover, SEC testified that its investment company inspections were limited in scope, focusing primarily on portfolio management to determine whether fund activities were consistent with the information given investors and whether funds accurately valued their shares. SEC stated, for example, that it rarely scrutinized important activities, such as fund marketing and shareholder services. Inspections of money market funds focused on compliance with a 1940 act rule that specifies the quality and maturity of permissible instruments that may be held for money market funds and the requirements for portfolio diversification. According to SEC officials, SEC staff review the activities of advisers to investment companies concurrent with their examination of the investment company. In addition, between 1991 and 1993, SEC’s inspection programs called for inspecting all investment advisers with $1 billion or more in assets under management, with about one-third to be done in each of the 3 years. If time permitted, the regions and districts were also to inspect some advisers with less than $1 billion under management that had custody or discretionary management authority over client assets or conducted their business in a way that regional or district office staff believed needed review.

Fiscal Year 1994

For fiscal year 1994, SEC changed its inspection approach to (1) reintroduce an element of surprise into the inspections, and (2) allow the staff to focus on investment companies and advisers that they considered more likely to have problems.
To accomplish these objectives, SEC headquarters informed SEC’s regions and districts that they were to inspect all medium and small fund families that had not been examined since 1990 and all new fund families formed during the year. SEC estimated that 350 families had not been inspected since 1990; and many of them, especially those connected with banks, had never been reviewed. As in preceding years, the guidance stated that, except for families that had never been inspected, inspections should be limited in scope with an emphasis on portfolio management activities. For families connected with banks, staff were to closely review advertising and the procedures by which shares were distributed to shareholders. According to SEC, during fiscal year 1994, its staff conducted inspections of 303 small and medium fund families, including 225 money market portfolios within those families. The staff inspected funds in the 100 largest fund families only when a cause inspection was necessary. With respect to inspections of investment advisers in fiscal year 1994, SEC headquarters instructed SEC’s regions and districts to focus on potentially higher risk small and medium size advisers with discretionary management authority that had not been inspected in the prior 4 years, with no particular emphasis on large entities. SEC reported that as a result of the shift to inspections of smaller, higher risk investment advisers, the assets under management of inspected advisers decreased from $1.7 trillion in 1993 to $520 billion in 1994. However, the number of deficiencies identified increased by 57 percent, from 5,523 to 8,672.

Size of Inspection Staff Is Increasing, but Challenges Remain

Until recently, SEC believed that it did not have enough staff to properly oversee the mutual fund industry.
For example, in November 1993 SEC testified that despite efforts to use its resources more effectively, such as by obtaining data in electronic format and beginning development of a risk assessment program for investment companies, it needed more and better trained people to deal with the mutual fund industry. An SEC official told us that SEC needs a total of 300 examiners to inspect investment companies and 210 examiners to inspect investment advisers. At the end of fiscal year 1994, SEC had about 200 staff assigned to inspections of investment companies and about 50 to inspections of investment advisers. In December 1993, the Office of Management and Budget approved the hiring of 150 additional investment company examiners (50 each year in fiscal years 1994 through 1996). With the additional staff, SEC plans to perform comprehensive inspections of the 50 largest mutual fund families over a 2-year period. Funds in the other families would be inspected comprehensively about once every 4 years. With respect to investment adviser examiners, in its fiscal year 1996 budget SEC is requesting an additional 193 staff years for the investment adviser inspection activity. If it receives the additional staff, SEC estimates that it will be able to inspect advisers much more frequently than it has in the past. Currently, about 21,000 investment advisers are registered with SEC, but only about 7,000 to 8,000 actually exercise discretion over client assets. According to SEC, it allocates more of its inspection resources to the advisers with discretionary authority and expects to examine these advisers every 6 to 8 years. An SEC official told us that if SEC were required to oversee bank-related investment advisers that are not currently registered with the Commission, it would have little or no effect on its resources because this would add relatively few (fewer than 100) advisers to its total inventory of advisers.
Further, SEC staff already examine the activities of many of these advisers during their inspections of the related investment companies. Even if SEC acquires additional inspection staff, it will face major challenges in adhering to its planned inspection schedule. There have been time lags in hiring new examiners, and they need to be trained over a period of several months. In addition, though there has been a slowdown recently, the number of new mutual funds continues to increase. Also, such issues as the mutual funds’ use of derivatives and personal trading by fund managers have come to the forefront.

Potential Conflicts of Interest May Arise When Banks Manage Mutual Funds

The increase in the number of banks that manage their own proprietary funds has caused the securities regulators and some in Congress to be concerned as to whether the banking and securities regulations are adequate to prevent certain conflicts of interest when banks operate proprietary mutual funds. Specific concerns are whether, or under what circumstances, (1) banks should be permitted to serve as custodians for their own mutual funds, (2) banks should be permitted to loan money to their mutual funds, (3) bank funds should be permitted to purchase securities issued by a borrower of the bank when the proceeds are used to pay off a loan to the bank, (4) banks should be permitted to extend credit to customers to purchase shares of bank funds, and (5) limits should exist on interlocking management relationships between banks and their mutual funds.

Banks May Act as Custodians of Their Own Mutual Funds

The Investment Company Act of 1940 (the 1940 act) does not prohibit a bank from acting as both the adviser and the custodian for the same mutual fund.
This has caused concern among securities regulators that a bank could cause its affiliated (proprietary) mutual fund to select the bank as fund custodian, thereby depriving the fund of an independent custodian and creating the potential for abuse and self-dealing. The fund custodian holds all securities and other fund assets on behalf of the fund. The 1940 act requires a mutual fund to place and maintain its securities and similar investments in the custody of a bank with aggregate capital and surplus and undivided profits of not less than $500,000; a company that is a member of a national securities exchange; or the fund itself. In practice, the fund custodian is almost always a bank. Although the 1940 act does not prohibit a bank from acting as both adviser and custodian for a mutual fund, SEC’s position is that such banks are subject to its self-custody rule. That rule requires that securities and investments of a mutual fund maintained in the custody of the fund must be verified by actual examination by an independent public accountant at least three times a year, two of which must be without prior notice. These requirements, among others, must be satisfied when a bank acts as adviser (or is affiliated with the adviser) and as custodian or subcustodian of a fund. In addition, SEC has advocated changing the 1940 act to subject affiliated bank custodianships to specific SEC rule-making authority. Our analysis of the data provided by Lipper showed that as of September 30, 1993, 53 of 114 banks that advised funds also acted as custodians of those funds. According to the SEC official in charge of SEC’s inspections of mutual funds, auditors must file a certificate reflecting securities verification, which SEC examiners typically review when examining the mutual funds. This official noted, however, that the SEC rule requiring verifications three times a year was written when securities were issued in physical form, such as stock certificates. 
Today, securities are issued in book-entry form rather than in physical form, requiring more elaborate verification procedures. Independent auditors now evaluate the process and controls used by the custodian to make a daily reconciliation of statements of securities held by the mutual fund with the Depository Trust Company (DTC). However, physical examination of pertinent records is still required to review the custodian’s reconciliations. The SEC official also told us that there have been no specific examples of abuses relating to the custody of securities that have occurred when banks also acted as the funds’ investment adviser.

Some Bank Loans to Affiliated Funds Are Permitted

The 1940 act allows a mutual fund to borrow up to one-third of its net asset value from any bank. Because the act does not expressly prohibit a mutual fund from borrowing money from an affiliated bank, securities regulators are concerned that the lack of such a prohibition creates the potential for overreaching by a bank in a loan transaction with an affiliated investment company. Several banking laws, however, restrict banks’ ability to make loans to affiliated mutual funds. For example, Section 23A of the Federal Reserve Act prohibits a member bank from lending more than 10 percent of its total capital (capital stock and surplus) to a mutual fund that is advised by the bank or its affiliates and more than 20 percent to all affiliates (a mutual fund advised by the bank is defined as an affiliate). Section 23B of the Federal Reserve Act states that all such lending must be on an arm’s-length basis. The Federal Deposit Insurance Act applies the Sections 23A and 23B restrictions to all federally insured nonmember banks. Under Regulation Y, the Federal Reserve prohibited banking organizations (bank holding companies and their bank and nonbank subsidiaries) from extending credit to any mutual fund company advised by a bank within the organization or its affiliates.
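The quantitative limits described above can be illustrated with a minimal sketch. The thresholds (one-third of net asset value under the 1940 act; 10 percent and 20 percent of capital stock and surplus under Section 23A) come from the passage, while the dollar amounts and function names are hypothetical:

```python
# Hypothetical illustration of the lending/borrowing limits described above.
# Thresholds are from the passage: a fund may borrow up to one-third of its
# net asset value (1940 act), and a member bank may lend at most 10 percent
# of its capital stock and surplus to one affiliate, 20 percent to all
# affiliates combined (Section 23A of the Federal Reserve Act).

def fund_borrowing_limit(net_asset_value):
    """Maximum a mutual fund may borrow from any bank under the 1940 act."""
    return net_asset_value / 3

def bank_lending_limits(capital_stock_and_surplus):
    """Section 23A caps on a member bank's loans to affiliated funds."""
    return {
        "single_affiliate": 0.10 * capital_stock_and_surplus,
        "all_affiliates": 0.20 * capital_stock_and_surplus,
    }

# Example: a fund with $300 million in net assets and a bank with
# $500 million in capital stock and surplus (both figures hypothetical).
print(fund_borrowing_limit(300_000_000))   # prints 100000000.0
limits = bank_lending_limits(500_000_000)
print(limits["single_affiliate"])          # prints 50000000.0
print(limits["all_affiliates"])            # prints 100000000.0
```

Note that these are only the quantitative ceilings; as the passage explains, Section 23B separately requires that any such lending be on arm's-length terms.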
In addition, a rule adopted by FDIC permits nonmember state banks to extend credit to an affiliated mutual fund subject to the Sections 23A and 23B restrictions. This rule applies to stand-alone banks; banks within holding companies must comply with Regulation Y. The Federal Reserve’s bank holding company supervision manual contains detailed guidelines for examining for compliance with Sections 23A and 23B. The chief examiners in three Federal Reserve district offices told us these examinations are conducted regularly. According to the Federal Reserve official responsible for overseeing enforcement actions, the Federal Reserve has never taken any enforcement actions charging that bank holding companies or member banks had violated Section 23A or 23B provisions relating to proprietary mutual funds.

Mutual Funds Are Not Prohibited From Purchasing Securities Issued by Borrowers From Affiliated Banks

The 1940 act does not expressly prohibit a mutual fund from purchasing the securities of companies that have borrowed money from an affiliated bank, but it does prohibit most transactions between a fund and its affiliates. In addition, Sections 23A and 23B of the Federal Reserve Act, which prohibit banks from engaging in certain transactions with affiliates, do not impose restrictions on the ability of proprietary funds to purchase the securities of companies that are borrowers from an affiliated bank. As a result, securities regulators believe that there is a risk that a bank may use its affiliated mutual fund to purchase securities of a financially troubled borrower of the bank. The indebtedness to the bank would be repaid, but the mutual fund may be left with risky or potentially overvalued assets.
A Federal Reserve Board attorney told us that while Sections 23A and 23B do not specifically prohibit proprietary funds from purchasing securities from a borrower of the affiliated bank, such activities are generally violations of state conflict-of-interest laws if the participants intend to prop up a weak bank borrower. This official said that the Federal Reserve enforces these laws as part of its examination and compliance process as do state regulators. This official also told us that bank commercial lending departments are prohibited from sharing sensitive loan information with trust departments. However, if a fund purchases the securities of a bank borrower, such an action would not necessarily be considered a violation of the restrictions. Illegality would depend upon the intent of the participants, that is, an intent to rescue a failing corporate borrower. Similarly, a bank intentionally causing an affiliated fund to acquire the securities of a troubled borrower to shore up the borrower’s finances may be in violation of the affiliated transaction provision of the 1940 act, and the bank would be violating its fiduciary obligations as an adviser to the fund. Officials of two very large banks that we visited told us that it was possible, even likely, that their proprietary funds would make investments in entities to which the bank had loaned money. For example, a bank official told us that if one were to examine his bank’s loan portfolio, it would not be inconceivable to find IBM as a borrower and, likewise, IBM would probably turn up as one of the stocks held by that bank’s mutual fund family. Even so, this would be coincidental rather than the result of any planned activity, as many of the Fortune 500 companies are likely to be customers of his bank and others like it. Officials at both banks stressed that the lending and investment advising activities are quite separate and that their controls for separating these activities precluded any abuses. 
Management Interlocks Between Some Banks and Mutual Funds Could Occur

To eliminate potential conflicts of interest between securities firms (including mutual funds) and banks, Section 32 of the Glass-Steagall Act and regulations of the Federal Reserve Board prohibit interlocks among officers, directors, and employees of these entities. However, because of interpretations by the Federal Reserve Board and FDIC, there are opportunities for interlocks to occur between banking organizations and mutual funds. Whether these interlocks have resulted in actual problems is uncertain; regulators told us that no cases have been reported. Section 32 of the Glass-Steagall Act, as interpreted by the Federal Reserve Board, prohibits employee, officer, and director interlocks between banks that are members of the Federal Reserve System and mutual funds. The Board has applied Section 32 to bank holding companies; consequently, a bank holding company with member bank subsidiaries may not have an interlock with a mutual fund. However, the Board has indicated that interlocks between nonbanking subsidiaries of bank holding companies and securities firms are not subject to Section 32. Therefore, a nonbanking subsidiary of a holding company could have an interlock with a mutual fund. Section 32 does not apply to banks that are not members of the Federal Reserve. Thus, a nonmember state bank could maintain an interlock with a mutual fund. In addition, FDIC’s regulations do not prohibit interlocks between a state nonmember bank and a mutual fund for which it acts as an investment adviser. However, a nonmember bank with a bona fide subsidiary or securities affiliate that engages in mutual fund activities impermissible for the bank itself (such as acting as the fund’s underwriter) would be subject to restrictions. The bona fide subsidiary or securities affiliate may not have common officers with the bank and would be required to have a majority of independent directors.
The 1940 act does not prohibit interlocks between banks and investment companies. However, Section 10(c) of the act prohibits a registered investment company from having a majority of its board consist of officers, directors, or employees of any one bank. The act defines the term “bank” to include a member bank of the Federal Reserve System. In addition, Section 10(a) requires that at least 40 percent of a fund’s board members be “disinterested persons.” These are persons who are not to be affiliated with a fund’s adviser, including a bank adviser, or with the fund’s principal underwriter.

The Prohibition on Sponsorship and Underwriting of Mutual Funds by Banks May Increase Banks’ Costs

A bank may serve as the investment adviser to a mutual fund; act as an agent in purchasing mutual funds for customers (i.e., provide discount brokerage services); provide full brokerage services to customers, including investment advice concerning mutual funds; provide administrative services to mutual funds; and serve as the custodian and transfer agent to mutual funds. However, the Glass-Steagall Act prohibits banks that are members of the Federal Reserve System and bank holding companies from sponsoring mutual funds or underwriting and distributing the shares of mutual funds. These restrictions also apply to affiliates of banks that are members of the Federal Reserve System and to nonmember banks. They do not apply, however, to subsidiaries or affiliates of state banks that are not members of the Federal Reserve System. So, a subsidiary of a state nonmember bank (if it does not have a member bank affiliate) may provide these services, as may an affiliate of a savings association (if it does not have a member bank affiliate). Most parties seem to agree that the restrictions on sponsoring, underwriting, and distributing mutual funds are insignificant in practical terms.
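The board-composition rules of Section 10 of the 1940 act described above reduce to two simple numeric tests. The sketch below is a hypothetical illustration (the function name and example board sizes are ours, not from the act or the report):

```python
# Hypothetical check of the 1940 act board-composition rules described
# above: Section 10(c) bars a majority of a fund's board from being
# officers, directors, or employees of any one bank, and Section 10(a)
# requires at least 40 percent of board members to be disinterested.

def board_composition_ok(board_size, members_from_one_bank, disinterested):
    """Return True if a fund board satisfies both composition rules."""
    # A "majority" is more than half, so at most board_size // 2 members
    # may come from any single bank.
    no_bank_majority = members_from_one_bank <= board_size // 2
    enough_disinterested = disinterested >= 0.40 * board_size
    return no_bank_majority and enough_disinterested

# A 10-member board with 4 members from the advising bank and
# 4 disinterested members satisfies both rules.
print(board_composition_ok(10, 4, 4))  # prints True
# The same board with 6 members from one bank has a prohibited majority.
print(board_composition_ok(10, 6, 4))  # prints False
```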
Shares of mutual funds are not “underwritten” in the traditional sense, whereby an underwriter commits as principal to purchase large blocks of securities for resale to the public or agrees to use its “best efforts” to sell securities to the public. Instead, investors generally purchase shares of mutual funds either directly from a fund or from securities firms, financial planners, life insurance organizations, or depository institutions. An official of one bank we visited said that he did not regard the Glass-Steagall prohibitions on sponsorship and underwriting as a necessary guard against conflicts of interest. In his opinion, the original (1933) concern about a bank exposing itself to risk by acting as principal in the underwriting of securities does not apply to the issue of bank sales of mutual funds because the bank sells mutual funds on an agency basis; since it does not act as principal, it does not expose its capital to risk. The major cost of the Glass-Steagall restrictions to banks is that they must contract with unaffiliated distributors that perform underwriting functions in return for fees. One banker told us that the elimination of the Glass-Steagall provisions that prevent commercial banks from underwriting securities would eliminate the banks’ need to hire such organizations and pay such fees. He also said that without Glass-Steagall restrictions, the banks might be able to operate more efficiently.

Conclusions

Eliminating banks’ exemption from the Securities Exchange Act of 1934 and requiring that all mutual fund sales by banks be conducted through broker-dealers, as suggested by SEC, currently would affect less than 10 percent of all banks. Banks that sell mutual funds directly through their own employees rather than a broker-dealer generally do so either because they want to maintain control of their customer relationship or they do not have a sufficient volume of business to justify establishing a relationship with a broker-dealer.
Eliminating the exemption would allow SEC and self-regulatory organizations, such as NASD, to enforce the securities laws uniformly in connection with the sale of mutual funds. However, the fact that SEC does not now have oversight of direct retail sales by bank employees does not mean that these banks are free to conduct these sales without any supervision. The bank regulators’ interagency guidance applies to all sales activities on the premises of the banking institution, regardless of whether they are done through a broker-dealer or directly by a bank employee, and the banking regulators have taken steps in their examinations to increase their scrutiny of banks’ compliance with the guidance. Similarly, removing the exemption from the definition of investment adviser under the Investment Advisers Act of 1940 for banks that advise funds, as suggested by SEC, would allow SEC to more fully inspect previously unregistered advisers to determine that the adviser is carrying out securities transactions in a way that is fair to all of its clients, including the mutual fund. However, removing the exemption may also permit SEC to make limited inspections of bank activities that have been solely within the domain of the banking regulators, such as transactions involving trust accounts. These activities are regularly examined by the banking regulators. The banking regulators’ examinations, however, focus principally on safety and soundness considerations, rather than on compliance with the securities laws. Although removing the exemptions would allow the securities regulators to extend their oversight of banks’ mutual fund activities, this action would not, by itself, resolve conflict and overlap among the regulators. This is because the banking regulators in their role of overseeing the safety and soundness of banks would continue to be involved in conducting examinations and issuing rules and guidance on banks’ securities activities. 
Although the regulators have taken some actions to work more closely together, as in the January 1995 agreement between NASD and the banking regulators on coordinating their examinations, there are areas in which additional coordination would be desirable. For example, although NASD’s December 1994 proposed rules governing securities broker-dealers operating on bank premises paralleled the interagency guidance in many respects, they have caused controversy because they contain provisions that differ from the banking regulators’ interagency guidance. NASD officials commenting on this report said these differences are purposeful and provide a more explicit, well-defined, and enforceable approach to regulating these NASD members. In addition, the banking regulators and SEC do not currently have an agreement to coordinate their oversight of investment advisers similar to the one between NASD and the banking regulators for sales practice examinations. SEC is concerned that OCC examiners will be attempting to enforce securities laws as part of their examinations of investment advisers, and it would appear that development of such an agreement, to include a common approach for conducting and coordinating these examinations, would help eliminate overlapping examinations and conflicting guidance.

Recommendation

We recommend that SEC, the Federal Reserve, FDIC, OTS, and OCC work together to develop and approve a common approach for conducting examinations of banks’ mutual fund activities to avoid duplication of effort and conflict, while providing efficient and effective investor protection and ensuring bank safety and soundness.

Agency Comments and Our Evaluation

Each of the organizations (SEC, NASD, OCC, FDIC, the Federal Reserve, and OTS) that provided comments on a draft of this report supported our recommendation.
Several agencies cited efforts that have been recently completed or are currently under way to work closely together, including implementing the January 1995 agreement between the banking regulators, SEC, and NASD to coordinate examinations. However, OCC believed the report overemphasized the potential for inconsistent or contradictory regulation. In addition, SEC and OCC stated that in June 1995 they reached agreement on a framework for conducting joint examinations of mutual funds and advisory entities in which both agencies have regulatory interests. Their comments indicated that they expect this agreement to result in increased coordination and more efficient oversight of bank mutual fund activities. According to SEC, its staff and OCC staff have informally discussed examination procedures and are beginning to schedule joint examinations. SEC also stated that its staff has met preliminarily with the staff of FDIC to discuss entering into a similar arrangement.
Pursuant to congressional requests, GAO reviewed bank and thrift sales of mutual funds, focusing on: (1) the extent and nature of bank and thrift mutual fund sales activities; (2) banks' and thrifts' disclosure of their mutual fund sales practices; and (3) the regulatory framework for overseeing bank and thrift mutual fund operations. GAO found that: (1) as of the end of 1993, about 2,300 banks and thrifts were involved in mutual fund sales and about 114 institutions had established their own mutual funds; (2) during the 5 previous years, the value and numbers of bank-owned funds grew faster than the mutual fund industry as a whole and banks and thrifts became major sellers of nonproprietary funds; (3) banks and thrifts sell mutual funds to retain customers and increase fees; (4) in February 1994, bank regulators issued guidelines on policies and procedures that financial institutions are to follow in selling nondeposit investment products due to their concern that banks and thrifts are not disclosing the risks of investing in mutual funds; (5) GAO visits to selected banks and thrifts in 1994 disclosed that only about one-third of the institutions followed the disclosure guidelines, while nearly one-fifth of the institutions failed to disclose any risks; (6) the bank regulators are including steps in their examinations to assess how well these institutions are complying with the guidelines; (7) the existing regulatory framework is inadequate to deal with the rapid increase in banks' and thrifts' involvement in securities sales and management; (8) banks that directly sell to customers are predominantly regulated by bank regulators, while securities regulators mainly oversee banks which sell or advise through affiliates or third party brokers; (9) the existing regulatory framework could lead to inconsistent or overlapping regulatory treatment of the same activity and to conflict among the regulators; and (10) conflicts of interest may arise between banks' mutual fund 
activities and traditional banking functions.
Background

FRA is the primary federal agency responsible for issuing and enforcing railroad safety regulations and for distributing federal funds for intercity passenger rail service. PRIIA mandated new responsibilities for FRA to plan, award, and oversee the use of federal funds for intercity passenger rail. The American Recovery and Reinvestment Act of 2009 (Recovery Act) appropriated funding for high-speed rail projects, which resulted in a dramatic increase in federal funding for intercity passenger rail projects. Prior to 2009, FRA had a very limited grant portfolio, receiving appropriations for approximately $30 million in grant funding in fiscal year 2008, for example, primarily for intercity-passenger rail grants to states. With expanded responsibilities, the agency had to quickly award approximately $8 billion in Recovery Act funds while simultaneously developing policies and procedures for grants management. In 2015, FRA managed a portfolio of approximately 200 grants and $17.7 billion in obligated grants. In December 2015, Congress passed the Passenger Rail Reform and Investment Act of 2015 as a title in the Fixing America’s Surface Transportation Act (FAST Act), which authorized a new consolidated rail infrastructure and safety-improvements grant program to assist grantees in financing the cost of improving passenger and freight rail transportation systems. In 2010, when FRA was in the early stages of developing its grant oversight program, GAO identified principles that would become important as FRA transitioned from awarding grants to overseeing their performance. For example, a well-designed and implemented grant oversight program is critical to ensuring effective use of federal grant funds. FRA began drafting a Grants Manual to communicate the agency’s overall approach to grants management in April 2010 when it was in the process of building up its grants management program.
This Grants Manual includes procedures developed to address the agency’s expanded responsibilities following the enactment of the Recovery Act. FRA’s 2013 Program Management Plan outlined a “matrixed” organization structure intended to facilitate the flow of skills and information across functions within FRA (see fig. 1 below). FRA’s grant and loan portfolio is further organized into geographically based regional portfolios. Each regional portfolio is managed by a Regional Team comprised of a team lead—the Regional Manager—and subject matter experts from other FRA offices. Teams are responsible for funding, project delivery, and monitoring and oversight activities. For example, for the section 305 procurement projects, the “project” entails the design, manufacturing, and delivery of the bi-level and locomotive equipment, and FRA is responsible for evaluating and monitoring—from start to finish—all the steps required to ensure successful project delivery. The Regional Manager is responsible for coordinating the entire regional team, and may share some advisory and oversight responsibilities with the designated grant managers—those directly responsible for overseeing and maintaining specific portions of the grants management process. For example, for the two section 305 equipment procurements—the bi-level cars and the locomotives—grant oversight includes collaboration between Regional Managers and Grant Managers within the Office of Railroad Policy and Development, subject matter experts within the Office of Safety and the Office of Chief Counsel, and contractors. Launched in January 2010, the NGEC has developed, adopted, and promulgated six specifications for next-generation corridor equipment. For example, one specification covers next-generation vehicles such as a locomotive capable of speeds up to 125 mph.
The NGEC developed standardized specifications intended to make it possible for a group of states to buy equipment faster, at a lower cost, with reduced operating and maintenance costs going forward. Using the NGEC-developed specifications as a foundation for two equipment procurement projects, Caltrans took the lead on the bi-level car project, and IDOT leads the locomotive project. For example, IDOT—representing the Midwest coalition of Missouri, Michigan, and Iowa—serves as the lead state on behalf of itself, Caltrans, and WSDOT in conducting the joint locomotive procurement. Though Caltrans and IDOT are the lead states for the two projects, they have established memorandums of understanding for the equipment with their partner states, including Washington, Missouri, Michigan, Iowa, and Wisconsin. The NGEC is comprised of FRA, Amtrak, state, and industry participants. The NGEC structure includes an Executive Board responsible for writing the technical requirements document and approving the final specifications. The requirements document outlines the design objectives and specific performance requirements that need to be met for each type of equipment—in the case of the section 305 equipment-procurement projects, diesel electric locomotives and bi-level passenger cars (see fig. 2 below). The specification is a detailed technical document developed by the NGEC Technical Subcommittee intended to address the range of operational considerations needed to procure, design, and manufacture a fleet of bi-level cars or locomotives for use in intercity corridor service. The Technical Subcommittee includes working groups in several functional areas—e.g., structural, electrical, and mechanical—responsible for drafting relevant elements of each technical specification. The NGEC Review Panel evaluates specifications for compliance with the requirements document as well as compliance with regulations regarding safety, accessibility, and operations.
The NGEC also has a detailed system for approved specifications and documents to be revised, edited, and updated through a formal process. Six grant agreements between FRA and Caltrans, IDOT, and WSDOT, respectively, are funding the two locomotive and bi-level equipment procurement projects (see table 1 below). FRA entered into two grant agreements with Caltrans, three grant agreements with IDOT, and one grant agreement with WSDOT. WSDOT’s grant agreement was developed prior to the decision to designate one lead state for each of the two equipment projects. After all six grant agreements were executed, the efforts were split into two projects, as Caltrans contracted with Sumitomo Corporation of America and Nippon Sharyo in November 2012 to purchase 130 bi-level cars and IDOT contracted with Siemens in March 2014 to manufacture 47 locomotives. These grants are funded by a mix of Recovery Act and DOT fiscal year 2010 appropriations. For the six grants, approximately 75 percent of the funding is Recovery Act money and four of the six grants have an expenditure deadline of September 30, 2017.

FRA’s Grants Management Approach Has Evolved to Administer the Section 305 Equipment Procurement Projects

FRA’s management of the grants funding the section 305 equipment procurement projects has evolved from a general grants management approach to include additional project-level oversight. This framework is outlined in the Grants Manual—including routine and scheduled monitoring reviews—and in the specific activities outlined in the terms of the grant agreements funding the section 305 equipment procurements. FRA’s approach to manage the section 305 equipment projects evolved as the projects progressed, to include additional involvement in project-level activities.
Specifically, in 2015, FRA transitioned to a project-based oversight structure to carry out its grants management framework for the section 305 equipment procurement projects and, in 2014, used independent contractors to support project oversight. For example, FRA used additional contractor resources to support the agency’s grants management oversight after delays were identified in the bi-level project schedule.

FRA Implemented General Grants Management Procedures from the Start of the Equipment Procurement Projects

Beginning in 2010, FRA implemented its draft grants management framework to manage the post-award phase of the grants funding the section 305 equipment procurement projects. The general framework used to manage FRA’s entire grant portfolio—including the six grants funding the procurements—is described in the agency’s Grants Manual, articulating its oversight and monitoring procedures to track grantees’ performance. After grants are awarded, the general oversight and monitoring procedures described in the Grants Manual are carried out in accordance with the specific terms of the individual grant agreements. According to the Grants Manual and interviews with FRA and state grantee officials, FRA’s monitoring procedures include what is referred to as “routine” and “scheduled” monitoring:

Routine monitoring: the periodic review of progress for all active FRA grants to ensure grantees are in compliance with the terms of their respective grant agreements. For example, FRA’s grant managers review the quarterly progress reports (QPR) grantees typically are required to submit under their grant agreements, describing the status of grant projects, including any issues encountered affecting the scope, schedule, and budget described in the grant agreement. For the section 305 equipment procurement projects, FRA reviews QPRs from Caltrans, IDOT, and WSDOT for their respective grant agreements.
While FRA provides grantees with a QPR template that includes fields for narrative descriptions of significant accomplishments and any technical, cost, or schedule problems experienced during the review period, our review of the QPRs FRA provided shows that the information included in these reports varied by grantee. For example, IDOT first reported delays with the bi-level delivery schedule in the first quarter of fiscal year 2013, while the bi-level project lead, Caltrans, did not report schedule problems until the first quarter of fiscal year 2014. In addition, the level of detail regarding identified issues within the QPRs varied. For example, in the first quarter of fiscal year 2014, IDOT and Caltrans each discussed the bi-level delivery schedule; however, IDOT provided additional information raising concerns that the projected delivery schedule would not meet the September 30, 2017, deadline for the expenditure of funds. FRA officials told us QPRs do not always identify project issues encountered in a given quarter because the agency works informally with grantees to address problems as they arise. According to FRA officials, the agency has developed a new QPR form for grantees that will enable more detailed data on status and progress to be collected. FRA officials told us the agency plans to begin using this form in 2016, but the new QPR forms were not in use at the time of this review.

Scheduled monitoring: a detailed annual review of select grant projects’ progress, which FRA officials carry out using desk and/or on-site review sessions. FRA selects grant projects from its full grant portfolio to receive scheduled monitoring using factors such as project type, funding level, the time elapsed since previous monitoring activities, and knowledge of existing project issues. For the section 305 equipment procurement projects, a series of checklists helps guide FRA’s scheduled monitoring reviews.
According to the Grants Manual, the results of scheduled monitoring activities are recorded in a monitoring review report and should identify any significant findings (i.e., issues that jeopardize project completion or compliance) and areas of interest (i.e., issues that have the potential to become significant). FRA’s monitoring and oversight procedures require grantees to develop and submit corrective action plans when significant findings—such as issues that put the entire project at risk—are identified, describing the practices that grantees and other stakeholders, as appropriate, will follow to address the identified issues. To date, FRA has conducted 13 scheduled monitoring reviews for the six grants providing funding for the section 305 procurement projects, identifying significant findings associated with the bi-level car project in 2014 and 2015. For example, one monitoring report identified the bi-level design and delivery schedule as a significant finding because the manufacturing timeline did not meet the expenditure deadline. The bi-level schedule delay was reported by FRA in 2013, but at that time it was identified as an area of interest rather than as a significant finding. While no significant findings have been identified with the locomotive project to date, FRA has identified areas of interest associated with the locomotive project. For example, in a 2015 monitoring report, FRA reported one area of interest for the locomotive project—that the states receiving the locomotive equipment do not yet have an ownership, operation, or maintenance plan in place to manage the equipment upon delivery. As FRA began implementing its general grants management framework, it also began to award the six grants providing funding for the section 305 equipment procurement projects. See figure 3 below for a timeline of FRA’s grants management and section 305 equipment project milestones.
FRA’s grant responsibilities are also defined by the six individual grant agreements funding the equipment purchases. As noted above, the grant agreements require that the equipment purchased with federal funds comply with specifications developed by the NGEC. The grant agreements also generally incorporate the requirement that the equipment be procured in a manner consistent with Buy America requirements. In our analysis of the six agreements, we identified areas where FRA has specific responsibilities for the equipment grants as the projects progressed. Depending on the terms of the particular grants funding the section 305 procurement projects, FRA’s involvement in the locomotive and bi-level project may include reviewing the grantees’ draft requests for proposal (RFP) seeking interest from potential equipment manufacturers, participation in the design review meetings with the grantees and equipment manufacturers, and participation during section 305 equipment testing.

Reviewing and approving RFP packages: Three of the grant agreements explicitly require FRA’s review and approval of the grantees’ draft RFPs; the other three grants are either silent or require approval by NGEC. The FRA officials we spoke with explained that the agency’s participation in the proposal review process included reviewing the RFP package to ensure the grantees’ bid solicitations included applicable federal provisions (e.g., Buy America and Davis-Bacon requirements). For example, FRA officials reviewed the RFP for the contract that Caltrans eventually awarded to Nippon Sharyo. In addition, FRA officials told us the agency had worked to reconcile differences between the various state laws of equipment grantees. According to FRA, the difficulties in reconciling each state’s requirements were a source of delay in awarding the initial procurements.
The grant agreements do not provide for FRA’s participation in the grantee’s bid evaluation or selection process, nor is this process required by applicable federal law. FRA officials told us agency officials were not involved in these processes and did not provide concurrence with IDOT’s or Caltrans’ bid selections. Reviewing and approving design specifications: Four of the grant agreements explicitly require FRA to approve the design of the bi-level rail car and/or locomotive, while the other grant agreements provide for FRA to review the design specifications as a stakeholder. FRA officials told us agency officials participated in multiple reviews of the manufacturers’ design documents for both the bi-level cars and locomotive equipment—from the initial design concept to a final design plan—to ensure the designs comply with federal safety standards. For example, according to the IDOT officials we met with, FRA officials attend all locomotive design review meetings and the state submits design documentation to FRA’s safety team for approval. Participation in section 305 equipment testing: Some of the grant agreements provide a role for FRA in the testing of the equipment. For example, two of the grant agreements specify that FRA will be given the opportunity to witness and approve production tests. Another grant agreement authorizes FRA to verify test results. The FRA officials we spoke with said agency officials participated in equipment testing to ensure the bi-level cars and locomotive equipment meet federal safety standards and that the testing procedures are appropriate. According to Caltrans and IDOT officials, FRA is invited to all inspections and testing for the bi-level equipment. Because the agreements between FRA and the states are cooperative agreements, FRA has substantial programmatic involvement in the section 305 equipment procurements.
As provided in the grant agreements, substantial programmatic involvement means that FRA will assist and coordinate with grantees and participate in grant project activities after the contract is awarded. For example, FRA provides grantees with administrative, programmatic, and technical assistance as needed. The FRA officials we met with said the agency has provided training on the Buy America requirements and the agency’s financial reporting requirement in response to the needs of the section 305 equipment grantees. At the start of the bi-level and locomotive procurements, the extent of FRA’s involvement varied based on the state’s previous project management experience. For example, a former FRA official involved at the start of the bi-level car project told us the Caltrans staff managing the project at the beginning did not seek FRA’s involvement beyond the areas described in the grant agreements funding the equipment procurements. These staff had previous procurement experience but lacked experience in industry and manufacturing issues, which affected the state’s ability to effectively manage the project. In addition, Caltrans had prior experience purchasing rail cars, and the section 305 specification that the NGEC developed for the bi-level cars was based on the “California Car” that Caltrans had purchased between 1995 and 1997. In contrast, the IDOT officials we met with said that since IDOT did not have previous procurement experience, the state proactively sought FRA’s involvement from the start of the locomotive project and hired an industry expert with locomotive experience to help manage the project. For example, FRA reviewed IDOT’s ordering agreement with Siemens (the locomotive manufacturer), and the IDOT officials we spoke with said FRA has participated, from the start of the project, in monthly meetings with the Midwest Partner states that will be receiving the locomotive equipment.
As the Bi-level Project Encountered Challenges, FRA’s Approach for the Section 305 Equipment Procurement Projects Evolved to Include Additional Project-Level Oversight As the section 305 procurement projects have progressed, the bi-level car project has encountered challenges that jeopardize project completion by the expenditure deadline. While FRA reported issues with the bi-level project’s schedule in a 2013 monitoring review report, the issues facing the bi-level project were not identified as a significant finding—and thus did not require a corrective action plan—until 2014. In 2014, the bi-level project schedule was also reported as a significant issue in an independent review of the project by an FRA consultant. For example, the independent review raised concerns that the level of detail in the bi-level schedule did not meet project management requirements and that design activities were up to 7 months behind schedule. In FRA’s 2014 scheduled monitoring review report, the bi-level car project’s compressed design and delivery schedule was identified as a significant finding because the schedule posed a risk to the project leading up to the expenditure deadline. Grantees are responsible for developing corrective action plans to resolve significant issues, and FRA officials told us that the agency supported the grantee’s efforts to develop corrective actions in response to the bi-level project issues identified. For example, in April 2015, FRA convened a meeting with Caltrans, IDOT, and Nippon Sharyo to work through the bi-level project issues. The meeting resulted in a series of corrective actions, including replacing Caltrans and contractor program managers and establishing a designated bi-level program manager to oversee activities related to the scope, schedule, and budget of the project.
Following that meeting, in May 2015, documentation of the corrective action plan to address bi-level issues revealed that the bi-level project was 14 months behind schedule. In August 2015, the bi-level car schedule encountered an additional setback after the bi-level car shell suffered a structural failure during testing and production stopped for car shell redesign. In a December 2015 monitoring review report, FRA stated that the manufacturer’s project schedule showed a delay of 24 months and that the bi-level project would not be completed by the expenditure deadline. Furthermore, FRA’s monitoring report revealed the agency’s concern with Nippon Sharyo’s quality standards and project management. For example, FRA reported that Nippon Sharyo was increasing risk to the bi-level project by manufacturing bi-level car parts without an accepted car shell design. As of April 2016, the bi-level production schedule is on hold and the final project equipment delivery date is unknown. To date, FRA has not identified significant findings with the locomotive project through its monitoring reviews, and as of April 2016 the project was on schedule, with planned equipment delivery in June 2017. In addition to FRA’s efforts to address significant issues related to the bi-level car project, FRA’s grants management approach for both of the section 305 equipment procurements has evolved to include additional project-level oversight. In practice, FRA is involved in the procurements at two levels—(1) as grants manager overseeing the state grantees, and (2) increasingly, as support for the underlying activity between the states and their selected contractors in which FRA has elected to participate. As discussed below, this approach includes a revised management structure, informal interaction with section 305 equipment procurement stakeholders, and the use of outside contractors.
Revised management structure: FRA initially carried out its grants management framework under the regional oversight structure used to manage its entire grant portfolio. However, FRA officials told us this regional approach—under which FRA’s management for the six grant agreements funding the section 305 equipment procurement projects was carried out by three different FRA regional teams—was not working for the section 305 equipment procurements. According to the FRA officials we spoke with, it became clear that there should be a single manager responsible for overseeing both projects, due in part to the number of grants managers and regional managers involved and to project management challenges. In 2015, FRA transitioned to a project-based oversight structure for the section 305 equipment projects—referred to as the National Vehicle Program—making the Midwest Regional Manager the primary FRA official responsible for managing and coordinating grantees’ efforts for both the bi-level and locomotive equipment projects, instead of managing the six individual grants. In addition, FRA hired an individual with prior passenger-rail industry experience into the role of the National Vehicle Program Manager to participate in routine meetings with grantees and report relevant information to the Midwest Regional Manager. According to one of the grantees we met with, FRA did not initially have sufficient experience to manage the needs of the delivery program, and each of the grantees said turnover in FRA’s grants management staff was a challenge. For example, Caltrans officials worked with three different FRA grants managers and four different FRA regional managers in 5 years, and IDOT officials worked with two regional managers over the same period. According to one grantee, many of the FRA staff involved in the project early on had policy experience but lacked experience in project delivery.
According to FRA officials, the agency has continued to experience turnover in key roles associated with the equipment procurements. Informal interactions with project stakeholders: While FRA continues to apply the monitoring and oversight procedures in its Grants Manual as described above, FRA officials told us frequent and informal interactions—for example, ongoing participation in meetings between project stakeholders such as grantees and equipment manufacturers—help the agency monitor grantees’ performance. Caltrans and IDOT officials told us that FRA officials and consultants currently participate in project-level meetings several times a week. For example, Caltrans officials told us that FRA officials attend meetings with project stakeholders four times a week, including a quality assurance meeting with Nippon Sharyo and a weekly coordination meeting with IDOT. According to FRA officials we spoke with, these informal interactions are the primary way information about the status of the locomotive and bi-level projects is communicated. In addition, according to FRA, the agency and its contractors have held a series of meetings with Nippon Sharyo to address bi-level rail car project challenges—including discussions related to specific project deficiencies and the manufacturer’s responses to the areas of concern identified by FRA. Additional contractor involvement: In May 2014, FRA awarded a new contract for Monitoring and Technical Assistance Contractors (MTAC) to help manage the equipment procurements, and more recently increased contractor support to meet the growing needs of the bi-level equipment project. The overall role and responsibilities of MTAC contractors are detailed in FRA’s Monitoring Procedures, published in August 2014, and include supporting FRA’s project-level oversight with technical expertise and delivering training and technical assistance to grantees.
The MTAC contract enables contractors with subject-matter expertise to be deployed across a range of FRA programs or projects as needed. The FRA officials we met with told us MTAC is not included in the grants management framework described in the Grants Manual because the use of MTAC contractors is dependent on the needs of the grant project, allowing the agency to obtain MTAC support and expertise as needed. For example, FRA has significantly increased MTAC support based on the needs of the section 305 equipment procurement projects and, in particular, the needs of the bi-level project. FRA officials told us they executed a new MTAC contract because the contractor support necessary to oversee the bi-level project was more than the agency anticipated, exceeding the resources provided under the agency’s original contract. MTAC contractors participate in meetings between bi-level and locomotive grantees and vendors and provide project updates to FRA through summary reports of these interactions. In addition, the National Vehicle Program Manager and MTAC contractors meet weekly to discuss the procurements and report relevant project information, including any issues, to FRA’s Midwest Regional Manager, who is responsible for overseeing the National Vehicle Program. FRA officials told us that MTAC’s direct involvement with the grantees receiving funding for the section 305 equipment procurement projects provides FRA with a “boots on the ground perspective” of the project’s status. In addition, MTAC provides Caltrans with technical support and oversees Caltrans’ project management by participating in weekly meetings with the states and manufacturer. FRA’s Grants Management Approach Partially Follows Leading Practices We found that FRA partially follows the grants management leading practices we identified in the areas of performance monitoring, written documentation, training, and communication.
Practices in these four areas are closely related, and improvements or shortfalls in one practice may contribute to improvements or shortfalls in another practice. Establishing, documenting, and following practices—and their supporting characteristics—in these four areas can contribute to a more effective grants management framework. Since 2010, FRA has developed policies and procedures in all four areas; however, it has not fully implemented those policies and procedures. Table 2 shows our overall assessment of FRA’s grants management activities compared to leading practices with examples of selected characteristics. Appendix II provides greater detail on our comparison of FRA’s approach with each supporting characteristic for each leading practice. FRA Does Not Have Performance Measures Directly Linked to Project Goals and Does Not Fully Evaluate Results of Monitoring Activities An effective grants management framework includes establishing a process that ensures project goals are identified, tracked, and fulfilled and deliverables received. FRA partially follows the performance monitoring leading practice area because, while FRA developed a strategic vision for the HSIPR program and outlined a monitoring process in its Grants Manual, it does not have project goals or performance measures linked to the grants funding the section 305 equipment procurement projects, and it does not fully evaluate the results of monitoring activities. While FRA has developed a strategic vision for the HSIPR program—including promoting energy efficiency, environmental quality, and economic competitiveness—the agency has not formalized project goals for the two section 305 equipment procurement projects or performance measures that clearly link to project goals.
FRA officials we met with stated that there is no stand-alone document outlining goals or performance measures for the section 305 equipment procurement projects, and that the officials measure project progress by tracking the scope, schedule, and budget outlined in each grant agreement. However, the agency has not formalized this approach to performance monitoring, for example, in its Grants Manual, or documented specific goals or associated performance measures for the section 305 equipment procurement projects in a way that would demonstrate the agency has a process for comparing actual results against planned performance. Without explicit project goals and associated performance measures, it may be challenging for decision-makers to track and assess a project’s progress, make decisions about future efforts, and hold grantees accountable for outcomes. While FRA outlines its monitoring procedures in the Grants Manual, we found it does not fully evaluate and document results of monitoring activities, including plans for corrective action. As discussed above, FRA outlines a detailed process for routine and scheduled monitoring in its Grants Manual. According to the Federal Standards for Internal Control, management should evaluate the results of monitoring activities and remediate identified internal control deficiencies on a timely basis. In other words, management should determine the appropriate corrective actions in a timely manner based on the identified deficiency. FRA noted in a 2013 monitoring report for the bi-level car project that the schedule showed that not all rail cars would be delivered by the grant agreement’s expenditure deadline for FRA funding. While this was an issue that directly affected the project’s scope and schedule, FRA did not identify it as a significant finding that would require a corrective action plan.
In 2015—2 years after the issue was identified in a monitoring report—FRA officials ultimately issued a corrective action letter to lay out expectations and discuss how to get the project back on schedule. The DOT Inspector General reported in 2015 that corrective action plan deadlines for several other HSIPR grants had passed without any documentation that grantees took the necessary actions or staff extended deadlines to complete the plans. According to FRA officials, they use internal reports—separate from the routine and scheduled monitoring—to inform agency management about any issues related to a project’s scope, schedule, and budget. However, the procedures for and use of these reports are not formally documented in the Grants Manual as a mechanism for evaluating progress. Furthermore, it is unclear how the internal reports relate to the monitoring and oversight approach FRA has outlined in its Grants Manual or how they are used to hold grantees accountable for potential risks to their grants. Leading practices emphasize that processes used to evaluate and monitor efforts should be formalized and documented. Without evaluating and documenting the results of monitoring activities in a timely manner, grant managers may not be able to address risks to a project’s completion, a situation that could negatively affect the grant. Timely evaluation is particularly important for the grants funding the section 305 equipment procurement projects because most of the funding has an expenditure deadline of September 30, 2017, and equipment purchases such as these represent a new type of project for FRA. FRA Developed Internal Documentation for Grants Management Policies and Procedures, but Has Not Developed Written Guidance for Grantees An effective grants management framework includes developing and maintaining written documentation as a means to obtain and retain organizational knowledge and to ensure accountability for achieving agreed-upon results.
FRA partially follows the written documentation leading practice area because while the agency has developed and maintained written policies and procedures to communicate grants management knowledge among its staff, it has not provided documentation outlining grantee expectations or developed guidance specific to the section 305 equipment procurement projects. FRA has developed and maintained internal grants-management policies and procedures to communicate knowledge among agency staff. For example, FRA’s Grants Manual, the main policy document for agency staff, describes monitoring and oversight activities that staff are expected to conduct. The manual is periodically updated to reflect new federal regulations, such as regulations issued under the Office of Management and Budget’s new uniform guidance for federal awards, and changes in process, such as a more detailed issue escalation process. FRA also developed detailed Monitoring Procedures outlining FRA’s expectations of MTAC contractors and provided guidance to MTAC on its oversight responsibilities. For example, the monitoring procedure for oversight reports outlines the purpose of recurring oversight, documents that MTAC contractors must review, and the format and contents of the reports MTAC contractors must submit. Finally, in late 2015, FRA began drafting a Portfolio Management Guide with centralized guidance for regional managers—intended to complement the Grants Manual—recognizing that the Grants Manual was primarily written for grants managers. Although FRA has developed and maintained internal grants management policies and procedures, it has not outlined expectations or developed guidance specific to the grants that fund the section 305 equipment procurement projects for grantees. 
For example, one grantee stated that while the grant agreements include high-level expectations, such as complying with federal regulations and meeting the NGEC technical specifications, expectations specific to the section 305 equipment procurement projects were unclear. According to this grantee, because FRA did not assign a central point of contact for the multi-state projects, each state grantee had to coordinate separately through its own grants manager, and expectations changed with each new state grants manager. All three grantees we spoke with stated that written guidance or procedures to manage the grants funding the section 305 procurement projects would be helpful. In a 2011 monitoring report, one grantee requested that FRA provide a public, written policy on receiving, reviewing, and accepting deliverables because grantees did not know if or when they would receive feedback from FRA and whether the deliverables being submitted were in an acceptable format or of sufficient quality. Those grantee officials told us that their project manager informally uses the Federal Transit Administration’s (FTA) guidance in the absence of guidance from FRA. In a 2014 monitoring report, one grantee noted that circulars specific to the grant program would be helpful. According to FRA officials we met with, the agency informs grantees of expectations at a “kick-off meeting” when the grant is first awarded and through routine monitoring and targeted training and technical assistance related to items such as Buy America provisions or safety matters. However, according to the U.S. Domestic Working Group’s Grant Accountability Project, written policies serve as guidelines to ensure new grant programs include provisions for holding grantees accountable for properly using funds and achieving agreed-upon results. To date, FRA has not developed written guidance on grants or project management procedures for grants funding the section 305 equipment procurement projects.
According to the Federal Standards for Internal Control, management should implement control activities through policies. For example, management can document responsibilities through policies and communicate policies and procedures so that personnel can implement control activities for their assigned responsibilities. Other agencies have developed detailed written guidance for grantees. For example, FTA developed grants-management circulars on general requirements as well as ones specific to each of its programs, which include the documentation that FTA needs to review and requirements associated with each program. In addition, FTA has developed guidance specific to joint procurements of rail equipment. A lack of guidance could result in FRA’s not receiving sufficient and necessary information from its grantees to carry out its grant oversight activities. Though FRA Identifies Training Needs, It Has Not Developed Training for Grantees and Agency Staff on Procedures Governing Grants That Fund the Section 305 Procurement Projects An effective grants management framework includes a mechanism that allows grant recipients and agency staff to establish and maintain a level of subject-matter expertise and competence so that they can fulfill their responsibilities. FRA partially follows the training leading practice area because FRA identifies training needs for grantees and agency staff, but it has not developed training on procedures governing the grants funding the section 305 equipment procurement projects. FRA identifies training needs of grantees and agency staff. FRA officials stated that they identify grantee training needs formally through scheduled monitoring reports, which specifically ask whether the grantee needs any training or technical assistance. For example, we found that grantees asked for training in 8 of the 13 monitoring reports we reviewed, including one grantee that asked for training on the terms and conditions of its grant in two consecutive reports.
FRA officials stated that agency staff has the opportunity to develop Individual Development Plans to identify training needs for their current positions and development goals. FRA has not developed training on procedures governing the grants funding the section 305 equipment procurement projects for either grantees or agency staff. According to the Federal Standards for Internal Control, management should demonstrate a commitment to develop competent individuals. For instance, agencies can enable individuals to develop competencies appropriate for key roles, reinforce standards of conduct, and tailor training based on the needs of the role. FRA provided grantees with a webinar called “Railroads 101,” which covered several broad industry topics, such as train types and basic rail operations. While the webinar provided to grantees gives a basic overview of the industry, it does not include specific information related to the policies and procedures governing HSIPR grant funds. For example, one grantee said that webinar and in-person trainings on the procedures to manage the grants would be helpful. According to FRA officials, all regional and grants managers can receive project management training; however, new employees often shadow current staff and use electronic systems to learn about their assigned grants. In a 2014 monitoring report, one grantee specifically stated that a training program similar to FTA’s would be very beneficial for grantees. While agency staff can learn on the job, inconsistent understanding of grant administration policies and procedures across the agency could result in risks—such as those related to scope, schedule, or budget—not being identified in a timely manner, with potential negative impact on a project’s completion or the management of grant funds.
For example, the bi-level rail car project’s schedule challenges were not formally reported as significant findings requiring a corrective action plan until 2014, despite schedule concerns noted in a 2013 monitoring report. DOT operating administrations, such as FRA, have the option to determine grants training requirements, though we recognize that resources needed for training, such as budgets and staff time, compete with other agency priorities. FRA officials stated that in 2015 the agency began to develop e-training modules specific to roles within the regional team structure, such as regional managers. While some recent training efforts have been targeted to meet staff and grantees’ needs, FRA officials acknowledged it is reasonable to formalize training efforts going forward. FRA Has Established Some Communication Mechanisms, but Lacks a Centralized System to Monitor Grants An effective grants management framework includes establishing an organizational structure, including well-defined reporting lines, to permit the flow of quality information and to assist agency staff and grantees in fulfilling their responsibilities. FRA partially follows the communication leading practice area because, while FRA has developed mechanisms to obtain relevant data based on project information, it does not have a centralized system to monitor grant awards. Since 2012, FRA has established mechanisms to obtain relevant data based on project information requirements. For example, FRA’s most recent Grants Manual outlines a specific process for grantees’ reporting requirements using a table to outline each step of the process for collecting, reviewing, and approving quarterly progress reports, including the individual responsible for each step and the database or form that should be used to complete the step. The previous version used a narrative format that was less detailed. We found FRA does not use a centralized system to monitor grant awards.
FRA uses multiple systems for grants management, including GrantSolutions, Program Management Tracker, and SharePoint. GrantSolutions and Program Management Tracker are organized by grant agreement number and are used to track grant administration materials—such as financial reports—and program management materials—such as final copies of monitoring and quarterly progress reports—respectively. Since these systems organize information by grant agreement number and information is divided among different systems, there is no central location for materials related to projects funded through multiple grants, such as the section 305 equipment procurement projects. According to the U.S. Domestic Working Group’s Grant Accountability Project, a centralized information system can allow grant management staff to track a grant’s status, tell how well a grantee is performing, and keep track of problems. An FRA official stated that the agency considered integrating the two systems to provide a central system that combined grants and project management, but it was ultimately deemed too expensive and technologically challenging. Conclusions It has been over 5 years since PRIIA significantly expanded FRA’s grant-making role. In that time, FRA has developed a new grants management framework—a large undertaking encompassing all projects funded through its grants—that involved concurrently hiring new staff, developing grants oversight policies and procedures, and awarding grants. In addition, the agency’s grants management role has included overseeing the section 305 equipment procurement projects—a fundamentally new type of project for states to lead, involving technically complex new equipment that FRA had limited experience overseeing.
In 2010, we noted that a confluence of factors, including simultaneously carrying out multiple new responsibilities, could pose risks for the use of federal funds for high speed rail projects and that a robust grant-oversight program would be a critical element to making sound federal investments in high speed rail. FRA’s grants management experience with the grants that fund the section 305 equipment procurement projects demonstrates that additional improvements to performance monitoring, written documentation, and training could enable more effective grants management. A lack of specific goals and measures can make it difficult to hold grantees accountable and ensure that grantees are making progress toward project deliverables. Further, evaluating and documenting the results of monitoring activities in a timely manner would help FRA more proactively address risks to a project’s completion as they arise. Providing written guidance to grantees on procedures and agency expectations would help ensure that agency officials obtain the information they need to fulfill their oversight responsibilities. Training can ensure consistent understanding of policies so that project risks are identified in a timely manner; however, we recognize that training requires decision makers to make trade-offs based on funding and staff availability, as these resources must be used to meet multiple agency priorities. While using a centralized electronic-grant system could help agency staff better track grant progress, FRA has explored this option and determined that it would be too costly and technologically challenging. The section 305 equipment procurement projects can be used to identify lessons learned to strengthen FRA’s overall grants management framework and project oversight for future grants.
While no new section 305 equipment procurement projects are currently planned, FRA manages and oversees approximately 200 ongoing grant projects, and the Passenger Rail Reform and Investment Act of 2015, passed as a title in the FAST Act, authorized a new infrastructure and safety grant program to assist grantees in financing the cost of improving passenger and freight rail transportation systems. While FRA officials began efforts to further formalize grants management procedures during the course of our review—such as drafting a portfolio management guide to centralize guidance related to the regional manager role for project oversight—additional steps could further enhance the agency’s approach to grants management. Improving the agency’s processes to be more proactive in overseeing and monitoring grant performance would help minimize project risk, increase grantee accountability, and improve the efficient and effective use of federal funds for FRA’s grants portfolio. Recommendations for Executive Action To strengthen FRA’s grants management practices, we recommend that the Secretary of Transportation direct the FRA Administrator to take the following actions:
1. Enhance the process outlined in the Grants Manual to monitor project performance for future grants to include (1) performance measures directly linked to project goals and (2) fully incorporating timely and actionable information on grantee performance into FRA’s review process to help determine whether current efforts are in line with the overall project goals.
2. Develop and provide written guidance to grantees that includes FRA’s expectations on the type of information grantees should provide, such as guidance specific to deliverables and milestones for each grant project.
3. Analyze training needs and formalize a training plan for grantees and agency staff, which could include training on grant-specific procedures and policies.
Agency Comments We provided a draft of this report to Amtrak and the Department of Transportation (DOT) for review and comment. In written comments, reproduced in appendix III, DOT concurred with GAO’s recommendations. FRA also provided technical comments, which were incorporated as appropriate. Amtrak did not provide comments. We are sending copies of this report to the Secretary of Transportation, the Administrator of the Federal Railroad Administration, and Amtrak. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report addresses the following questions: 1) How has FRA carried out its grants management roles and responsibilities regarding the PRIIA section 305 equipment procurement projects? 2) To what extent has FRA’s approach to grants management for the PRIIA section 305 equipment procurements met leading practices, and could FRA’s grants management practices be improved? To determine how FRA has carried out its grants management roles and responsibilities for the grants funding the section 305 equipment procurement projects, we reviewed FRA policies and other guidance established to outline the agency’s responsibilities, such as the August 2015 Grants Management Manual and the 2013 Program Management Plan.
We reviewed the terms and conditions of the six grant agreements FRA executed with the California Department of Transportation (Caltrans), the Illinois Department of Transportation (IDOT), and the Washington State Department of Transportation (WSDOT) that fund the section 305 equipment procurements to identify project milestones and deliverables, as well as specific activities or points in the administration of the grants where FRA reviews, approves, or concurs with the state grantees on specific project activities. While three of these grants covered broader corridor improvements beyond the equipment projects (Chicago-St. Louis Corridor Improvement, the Chicago-Quad Cities Expansion Program, and the Pacific Northwest Rail Corridor Program), we focused on grant administration and oversight activities related to the tasks associated with the equipment procurements. We did not assess FRA’s grant management activities beyond those applicable to these six grant awards. We reviewed additional documentation related to FRA’s oversight of these six selected grants, including 96 quarterly progress reports provided by FRA for fiscal years 2013 through 2015 and 13 monitoring reports completed by FRA for calendar years 2011 through 2015 to better understand how FRA carries out its grants management roles and responsibilities. We focused our review on FRA’s grants management activities post-award, including monitoring of grant awards, project oversight, and technical assistance activities. We did not examine the agency’s award issuance or grant closeout processes, as the equipment projects are ongoing at the time of this report’s issuance. 
We reviewed additional policy and guidance documentation related to the oversight of the section 305 equipment procurement projects, including the relevant Notices of Funding Availability issued by FRA, FRA’s Monitoring Procedures describing the oversight conducted by the Monitoring and Technical Assistance Contractors (MTAC), and the MTAC task order outlining the specific tasks the monitoring and technical assistance contractors may perform for the equipment projects. We also reviewed reports independent consultants provided to FRA, such as reports on the bi-level car project’s status and routine summaries of project-level activities and meetings between state grantees, equipment vendors, and subject matter experts. To collect additional information about how FRA carried out grants management activities for the locomotive and bi-level car equipment procurements, we interviewed FRA Regional Managers and Grant Managers within the Office of Railroad Policy and Development, subject matter experts within other FRA offices, such as the Office of Research, Development and Technology and the Office of Chief Counsel, and MTAC contractors supporting the agency’s oversight activities. We also interviewed Amtrak officials and officials from the California, Illinois, and Washington State departments of transportation, as well as Next Generation Equipment Committee (NGEC) participants and independent consultants involved with the section 305 equipment procurement projects. To assess the extent to which FRA’s approach to grants management for the equipment procurements met leading practices, we identified relevant and applicable leading practices and supporting characteristics that contribute to those practices, using generally accepted grants management practices from a variety of sources (see table 3).
Using these sources, we identified four leading grants management practices—communication, written documentation, training, and performance monitoring—as well as specific characteristics of grants management that support these practices. To assess FRA’s grants management, we reviewed grant documentation and FRA’s grants management plans, policies, and procedures to determine the extent to which FRA’s practices aligned with the supporting characteristics of our leading practices. Each leading practice is aligned with a few supporting characteristics. For example, supporting characteristics for the written documentation practice include: (1) develop and maintain written policies and procedures that communicate knowledge among agency staff, and (2) develop guidance specific to each grant program, including documentation outlining agency and grantee expectations, among others. For additional information on the leading practices and supporting characteristics, see appendix II. We used that information in aggregate to determine the extent to which the leading practice was followed. For example, we reviewed grant documentation, such as quarterly progress reports submitted by grantees and monitoring reports completed by FRA officials for the section 305 equipment procurements from calendar year 2011 through 2015. We also reviewed FRA’s grants management policies and procedures, such as those outlined in the Grants Management Manual and Monitoring Procedures to determine the extent to which the activities and processes described met the supporting characteristics. We also interviewed officials at FRA and the California, Illinois, and Washington State departments of transportation as well as FRA’s independent contractors. 
Our assessment of the alignment of FRA’s practices with the supporting characteristics served as the basis for our overall assessment as to whether each leading practice was followed or substantially followed; partially followed; or minimally or not followed. For example, if we found supporting evidence that two of the three characteristics of a practice were substantially followed but the third characteristic was not followed, we determined that the leading practice was partially followed. The criteria used to determine the extent to which practices and supporting characteristics were followed are:
followed or substantially followed—plans, policies, or processes have been developed and implemented properly for all or nearly all supporting characteristics;
partially followed—plans, policies, or processes have been developed and implemented properly for some supporting characteristics; and
minimally or not followed—plans, policies, or processes are lacking for all or nearly all supporting characteristics.
To further determine whether FRA’s grants management practices could be improved, we interviewed FRA, Caltrans, IDOT, and WSDOT officials, NGEC participants, and independent contractors to obtain perspectives on lessons learned from the projects to date, as well as examples of what worked well with FRA’s management and oversight of the section 305 equipment procurements. We conducted this performance audit from June 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Grants Management Leading Practices with GAO Assessments We assessed the extent to which the Federal Railroad Administration’s (FRA) approach to grants management for the grants funding the section 305 equipment procurement projects met leading practices and supporting characteristics for grants management that we identified in the areas of performance monitoring, written documentation, training, and communication. We reviewed grant documentation and FRA’s plans, policies, and procedures and analyzed interviews with FRA and grantee officials as well as Monitoring and Technical Assistance Contractors (MTAC). Based on our analysis, we determined the extent to which each characteristic was followed or substantially followed; partially followed; or minimally or not followed. Our assessment of the characteristics served as the basis for our overall assessment on the extent to which each leading practice was followed. We categorized the assessments using the scale below:
Followed or substantially followed the leading practice—plans, policies, or processes have been developed and implemented properly for all or nearly all supporting characteristics.
Partially followed the leading practice—plans, policies, or processes have been developed and implemented properly for some supporting characteristics.
Minimally or did not follow the leading practice—plans, policies, or processes are lacking for all or nearly all supporting characteristics.
Table 4 provides greater detail, including examples, of our comparison of FRA’s grants management approach with the supporting characteristics that are aligned with leading practices.
Appendix III: Comments from the Department of Transportation
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments In addition to the individual named above, the following individuals made important contributions to this report: Melissa Bodeau, Steve Cohen, Swati Deo, Derry Henrick, SaraAnn Moessbauer, Malika Rice, Maria Wallace, and Crystal Wesco.
The Passenger Rail Investment and Improvement Act of 2008 (PRIIA) expanded FRA's role by, among other things, authorizing grant programs for intercity passenger rail. Section 305 of PRIIA established a Next Generation Equipment Committee to design, develop specifications for, and procure standardized rail equipment. FRA awarded approximately $800 million in grant funding for two locomotive and bi-level passenger car procurement projects. GAO was asked to review issues related to FRA's oversight of the grants funding the PRIIA section 305 equipment procurements. This report examines: (1) how FRA has carried out its grants management roles and responsibilities for the PRIIA section 305 equipment procurement projects, and (2) the extent to which FRA's approach has met leading practices and whether FRA's grants management practices could be improved. GAO reviewed grants management policies and practices and identified relevant and applicable leading practices to be used as criteria in assessing FRA's grants management. The Federal Railroad Administration (FRA) initially used a regional oversight and monitoring approach outlined in its Grants Management Manual (Grants Manual) to manage the grants funding the section 305 equipment procurement projects, but in the face of challenges that approach evolved to include additional project-level oversight. When FRA began awarding the grants for the section 305 equipment procurement projects in 2010, its grants management approach was defined by the Grants Manual—including routine and scheduled monitoring—and by the terms of the grant agreements funding the locomotive and bi-level passenger car projects (see figure). As the bi-level car project encountered significant challenges—including major schedule delays—FRA moved to a project-level structure with increased contractor support to better oversee the locomotive and bi-level car projects.
In August 2015, the bi-level car suffered a structural testing failure, and as of April 2016, production of the bi-level car was on hold and the final equipment delivery date was unknown. FRA's grants management approach partially follows GAO-identified leading practices for performance monitoring, communication, training, and written documentation, but FRA's approach could be improved by better alignment with those practices. For example, while FRA stated that project progress is measured by tracking scope, schedule, and budget, it has not documented a process to identify project-specific goals and associated performance measures. Establishing a process that ensures project goals are identified, tracked, and fulfilled is a leading practice of effective grants management. Without explicit project goals and associated performance measures, it may be challenging for decision makers to track and assess a project's progress, make decisions about future efforts, and hold grantees accountable for outcomes. In addition, FRA has not provided documentation outlining its expectations for grantees or developed written guidance specific to the section 305 equipment procurement projects. An effective grants management framework includes developing and maintaining written documentation as a means to obtain and retain organizational knowledge and to ensure accountability. According to FRA officials, the agency informs grantees of expectations through routine monitoring and technical assistance. However, the lack of written guidance, goals, and performance measures could result in FRA's not receiving sufficient and necessary information from its grantees to carry out its grant oversight activities.
Background As the lead federal agency for maritime homeland security within the Department of Homeland Security, the Coast Guard is responsible for a variety of missions, including ensuring security in ports and waterways and along coastlines, conducting search and rescue missions, interdicting drug shipments and illegal aliens, enforcing fisheries laws, and responding to reports of pollution. The Deepwater fleet, which currently consists of 186 aircraft and 88 vessels of various sizes and capabilities, plays a critical role in all of these missions. Some Coast Guard Deepwater vessels were built in the 1960s. Notwithstanding extensive overhauls and other upgrades, a number of the vessels are nearing the end of their estimated service lives. Similarly, while a number of the Deepwater legacy aircraft have received upgrades in engines, operating systems, and sensor equipment since they were originally built, they too have limitations in their operating capabilities. The Integrated Deepwater System acquisition program, which the Coast Guard began developing in 1996, is its major effort to replace or modernize these aircraft and vessels. This Deepwater program is designed to replace some assets—such as deteriorating vessels—with new assets, and to upgrade other assets—such as some types of helicopters—so they can meet new performance requirements. The Deepwater program represents a unique approach to a major acquisition in that the Coast Guard is using a prime contractor—the system integrator—to identify and deliver the assets needed to meet a set of mission requirements the Coast Guard has specified. In 2002, the Coast Guard awarded a contract to Integrated Coast Guard Systems (ICGS), a joint venture of Lockheed Martin and Northrop Grumman, as the system integrator for the Deepwater program. Lockheed Martin and Northrop Grumman, as the two main subcontractors, in turn contract with other subcontractors. 
Rather than using the traditional approach of replacing classes of ships or aircraft through a series of individual acquisitions, the Coast Guard chose to employ a system-of-systems acquisition strategy that would replace its deteriorating Deepwater assets with a single, integrated package of new or modernized assets. This system-of-systems approach is designed to provide an improved, integrated system of aircraft, vessels, and unmanned aerial vehicles to be linked effectively through systems that provide command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR), and supporting logistics. The Deepwater program’s three overarching goals are to maximize operational effectiveness, minimize total ownership cost, and satisfy the customer—the operational commanders, aircraft pilots, cutter crews, maintenance personnel, and others who will use the assets. We have been reviewing the Deepwater program for several years, pointing out successes as well as difficulties and expressing concern over a number of facets of the program. In 2001, we identified several areas of risk for Deepwater. First, the Coast Guard faced potential risk in the overall management and day-to-day administration of the contract. At the time, we reported on the major challenges in developing and implementing plans for establishing effective human capital practices, having key management and oversight processes and procedures in place, and tracking data to measure system integrator performance. In addition, we expressed concerns about the potential lack of competition during the program’s later years and the reliance on a single system integrator for procuring the Deepwater assets. We also reported there was little evidence that the Coast Guard had analyzed whether the approach carried any inherent risks for ensuring the best value to the government and, if so, what to do about them. We reviewed the Deepwater program again in 2004 and found many of the same concerns.
Specifically, we reported that key components needed to manage the program and oversee the system integrator’s performance had not been effectively implemented. Integrated product teams (IPTs), the Coast Guard’s primary tool for overseeing the system integrator, were struggling to collaborate effectively and accomplish their missions because of changing membership, understaffing, insufficient training, and inadequate communication among members. Also, the Coast Guard had not adequately addressed the frequent turnover of personnel in the program or the transition from existing assets to those assets that will be part of the Deepwater program moving forward. Further, the Coast Guard’s assessment of the system integrator’s performance in the first year of the contract lacked rigor, and the factors that formed the basis for the award fee were unsupported by quantifiable measures. This resulted in the system integrator receiving an award fee of $4.0 million out of a maximum of $4.6 million despite documented problems in schedule, performance, cost controls, and contract administration. At the time of our 2004 report, the Coast Guard had begun to develop models to measure the extent to which Deepwater was achieving operational effectiveness and had reduced total ownership cost, but it had not decided which specific models would be used. Further, Coast Guard officials were not able to project a time frame for when the Coast Guard would be able to hold the contractor accountable for progress toward the goals of maximizing operational effectiveness, minimizing total ownership cost, and increasing customer satisfaction. Additionally, the Coast Guard had not measured the extent of competition among suppliers of Deepwater assets or held the system integrator accountable for taking steps to achieve competition.
At the time, the Coast Guard’s lack of progress on these issues had contributed to our concerns about the Coast Guard’s ability to rely on competition as a means to control future programmatic costs. In response to these concerns, we made a number of recommendations to improve Deepwater management and oversight of the system integrator. In 2005, we reported that the Coast Guard had fully addressed three of the recommendations and had actions underway on others. For the past several years, the Coast Guard has been revising its Deepwater plan to incorporate expanded homeland security requirements it received after the terrorist attacks of September 11, 2001. On May 31, 2005, the Coast Guard submitted a revised implementation plan to the House Subcommittee on Homeland Security, Committee on Appropriations, which included both a 20-year and a 25-year plan. The House Appropriations Committee directed the Department of Homeland Security and the Coast Guard to select a single revised implementation plan to accompany the Deepwater fiscal year 2006 budget request. In compliance with the Committee’s direction, the Coast Guard Commandant testified on July 21, 2005 to the 25-year revised Deepwater implementation plan. Further, in February 2006, the Coast Guard submitted an updated Deepwater implementation plan to align with its fiscal year 2007 budget submission. These 2005 and 2006 revised plans are the ones we are using to compare to the Coast Guard’s August 26, 2002, original implementation plan. To reflect added homeland security responsibilities based on the terrorist attacks of September 11, 2001, the August 2005 revision and February 2006 update to the Deepwater implementation plan change the balance of upgraded legacy versus new assets, the delivery schedules, and program costs from the original 2002 plan. For aircraft, the revised plans include upgrading many of the legacy aircraft rather than replacing them with new assets as called for in the original plan. 
For vessels, the revised plans maintain the original plan’s strategy of replacing all of the legacy vessels, but include some changes in the number of small boats being acquired. Overall, the revised plan (1) increases the program length by 5 years, to a total of 25 years; (2) changes the delivery schedules for a number of assets; and (3) increases overall costs to $24 billion, $7 billion more than earlier estimates. The program’s higher costs largely reflect the Coast Guard’s expanded homeland security responsibilities and cover such changes as greater weaponry, improved communications systems, and greater operating capabilities. Coast Guard officials caution, however, that this 25-year program is heavily dependent on receiving the anticipated budget amount each fiscal year. If full funding is not available in any given year—for example, because of competing budget priorities—the shortfall could have cascading effects on overall costs for the Deepwater program. Terrorist Attacks Have Led to Increased Emphasis on Homeland Security and Enhanced Deepwater Asset Capabilities The original Deepwater plan, while published in 2002, was developed before the terrorist attacks of September 11, 2001. It reflected an emphasis on the Coast Guard’s traditional Deepwater missions, such as conducting search and rescue operations at sea, preventing and mitigating oil spills and other threats to the marine environment, inspecting foreign vessels, protecting important fishing grounds, and stemming the flow of illegal drugs and migrants into the United States. After the events of September 11, 2001, the revised plans took into account the increased security threats by incorporating a new mission to provide greater security for ports, waterways, and coastal areas and enhancing the capabilities of the Deepwater assets to better meet the increased threats. 
In particular, the revised plans call for equipping Deepwater helicopters to provide warning and disabling weapons fire at sea and in ports, waterways, and coastal areas. Further, while the original plan called for assets to have Deepwater interoperability—meaning that all Deepwater aircraft and vessels could communicate with one another—the revised plans call for Deepwater assets to also have interoperability with assets from the Departments of Homeland Security and Defense, as well as with the Coast Guard’s Rescue21 (R21) project. According to Coast Guard officials, this increased interoperability involves such things as adding circuits and data transmission capability to allow for more reliable and secure communication. Table 1 provides further information on some of the key differences between Deepwater asset capabilities in the original and revised plans. Revised Plans Propose Replacing Fewer Aircraft and Adjusting the Mix of Vessels to Be Acquired The revised plans change the final mix of Deepwater aircraft more significantly than the mix of vessels. For example, the original plan called for replacing all 41 HH-60 Medium-Range Recovery Helicopters with 34 AB-139 helicopters. Under the revised plans, the Coast Guard will upgrade the HH-60s and not purchase any AB-139 helicopters. Coast Guard officials said they elected to retain the HH-60s because they determined that the AB-139 aircraft was unsuitable to meet new requirements for weaponry and for tactical operations. Retaining and upgrading HH-60 helicopters cost $500 million less than replacing them. Another major change in aircraft involved retaining more HC-130s to meet long-range surveillance, search and rescue, and airlift needs. For vessels, the revised plans retain the original plan’s approach of replacing all cutters and patrol boats. 
The only change to the number of vessels is that the revised plans include nine additional 25-foot short range boats and nine fewer 35-foot long range boats than were included in the original plan. Table 2 compares the number and types of Deepwater assets under the original and revised plans. Delivery Schedules for Deepwater Assets Have Changed Estimated delivery schedules for the Deepwater assets have changed. For some of the aircraft, deliveries have been projected for later years than were estimated in the original plan. For example, the Coast Guard now plans for delivery of its first 3 CN-235 Medium-Range Surveillance Aircraft during calendar year 2008. Under the original plan, the Coast Guard had anticipated delivery of the first 12 in 2006, with a total of 18 delivered by the end of 2008. Final deliveries of the CN-235s under the 2006 revised plan are now scheduled for 2027, as opposed to 2012 under the original plan. According to the Coast Guard, the delivery schedule for the CN-235 Medium-Range Surveillance Aircraft was delayed because the Coast Guard did not receive the anticipated level of funding in fiscal years 2002 and 2003, which required renegotiations. Figure 1 shows the original and revised delivery schedules for Deepwater aircraft. For vessels, the revised plans generally spread out deliveries of each class of vessel over a larger number of years, as shown in figure 2. For example, the original plan called for delivery of 58 of the 140-foot Fast Response Cutters between 2018 and 2022. The revised plans call for delivering the first Fast Response Cutter in 2007 or 2008, with additional cutters being delivered every year from 2009 through 2027—a span of 21 years. The Coast Guard originally planned to convert its legacy 110-foot patrol boats to 123-foot patrol boats and, beginning in 2018, replace the 123-foot patrol boats with 140-foot Fast Response Cutters. 
However, the patrol boat conversion project was halted after the first 8 patrol boats because the 123-foot patrol boats could not meet post-September 11, 2001, mission requirements and were experiencing technical difficulties. Because of this, the Coast Guard needed to advance the delivery of the Fast Response Cutters. Estimated Cost of Revised Deepwater Plans Is $7 Billion Higher, Largely Reflecting Increased Homeland Security Mission Requirements The total estimated cost of the revised Deepwater plans increased by $7 billion over the original plan—from $17 billion to $24 billion. According to the Coast Guard, most of the $7 billion increase is due to enhanced homeland security mission requirements brought about by the events of September 11, 2001. In particular, data provided by the Coast Guard show that most of the $7 billion increase is attributable to costs for enhancing and upgrading the capabilities of the planned Deepwater replacement vessels. More specifically, as shown in table 3, upgrades to the Deepwater vessels account for about $5.5 billion of the increase in the 2005 plan and $5.9 billion in the 2006 update. Beyond the increases related solely to vessels, upgrades to the C4ISR and maritime domain awareness capabilities to improve interoperability between the Coast Guard and other Department of Homeland Security components, as well as with the Department of Defense, account for the second largest category of cost increases—increasing by $1.1 billion in the 2005 revised plan and by $663 million in the 2006 plan. In contrast, because the revised plans include upgrading the HC-130 aircraft and the HH-60 helicopter rather than replacing them as called for in the original plan, and scale back the number of unmanned aerial vehicles to be acquired, costs for Deepwater aircraft decreased from the original plan to the revised plans.
Overall, costs for Deepwater aircraft were reduced by about $600 million in the 2005 plan and by about $400 million in the 2006 plan from the amount included in the original plan. According to the Coast Guard, the primary elements of the enhanced homeland security mission requirements that contributed to the $7 billion increase include the following: Chemical, biological, and radiological detection and defense. For this element, the additional capabilities included in the revised plans vary by asset. The most extensive are for the National Security Cutter, which is to have a sealed section within which crew can operate the ship in a contaminated environment for limited time periods. In the event an area is contaminated, such as from a terrorist attack, the crew can use radar, heat-seeking sensors, and other equipment to determine what is occurring—such as whether engines are operating, vessels are being moved, or people are alive. Other Deepwater vessels and aircraft are to be equipped with exposure suits and storage for those suits. Antiterrorism and force protection. The revised plans call for more powerful weapons for National Security Cutters, Offshore Patrol Cutters, and Fast Response Cutters. Manual gun mounts on cutters will be replaced with selected sensor-integrated, remote-operated, and semi-automated gun systems. This weaponry is to give the Coast Guard enhanced capabilities to protect its own cutters and other high value assets by, for example, providing cutters with the ability to stop terrorists who have taken control of a ship by disabling that ship’s propulsion with precision fire. Airborne use of force and vertical insertion and delivery. The revised plans call for the Deepwater helicopters to be fitted with weapons and equipment that will enable armed teams to land on a vessel, such as in the event a hostile group has taken over the vessel. 
Crew members can use machine guns to provide cover while a team travels by rope from the hovering helicopter to the vessel's deck. Additionally, for certain terrorist and criminal scenarios, the helicopter can use disabling fire to stop an illegally operated boat. In the event of a terrorist attack and the right circumstances, the disabling fire can be changed to deadly fire if necessary to stop terrorists.

Interoperability with the Departments of Defense and Homeland Security, as well as Rescue 21 equipment. All Deepwater vessels and aircraft are to receive C4ISR enhancements that make them interoperable with other DHS entities, DOD assets, and local first responders. These enhancements include added circuits and equipment that provide full voice communication and limited data communications between these entities.

Extended/enhanced flight deck. The flight decks of the National Security Cutter and Offshore Patrol Cutter are to be enlarged so that helicopters from other Department of Homeland Security components and from DOD can land on the cutters.

Deepwater Costs Could Rise if Funding Deviates from Levels Called for in the Plans

In May 2001, we reported that affordability was the biggest risk for the Deepwater program because the Coast Guard's contracting approach depends on a sustained level of funding each fiscal year over the life of the program. For the 2005 revised implementation plan, these funding levels average over $1 billion per year and range from $650 million to over $1.5 billion per year through fiscal year 2026. According to Coast Guard officials, any significant or sustained deviation from the planned funding levels would be costly to the Coast Guard in the short term and would set off ripples affecting the acquisition of Deepwater equipment for years to come. The officials added that significant shortfalls would likely result in increased costs, late delivery of equipment, and degradation of Deepwater asset performance.
Model Used to Determine Revised Asset Mix Is Reliable, and the Coast Guard Hopes to Expand Its Use

In revising the Deepwater asset mix to meet new mission demands, the Coast Guard undertook a series of analyses that used a computer simulation model to project the operational effectiveness of a variety of potential Deepwater force structures, or asset mixes. We found that this model contains reliable information and is useful for guiding decisions on the revised Deepwater asset mix. Further, a Department of Defense review board facilitated accreditation of the model, and another group with expertise in this type of modeling has studied the Coast Guard's approach and concluded that it is reliable. Through use of this model, the Coast Guard projects that the Deepwater asset mix in the $24 billion revised implementation plan will provide greater mission performance than the asset mix in the original plan. Other factors beyond this model, such as decisions of internal working groups and projected funding, also contributed to the adoption of the revised Deepwater asset mix. Because the model has proved useful for guiding Coast Guard decisions on the proper asset mix for enhancing the mission performance of the Deepwater assets, the Coast Guard is considering ways to expand the model to guide decisions on meeting its Coast Guard-wide Government Performance and Results Act (GPRA) performance goals.

Computer-Based Model Used in Analyzing Capacity Gaps Is Credible

After the events of September 11, 2001, the Coast Guard undertook a series of analyses intended to determine what capability and capacity gaps would exist if the asset mix in the original Deepwater plan were applied to the revised Deepwater missions. To conduct this analysis, the Coast Guard projected the performance of a variety of asset mixes using a computer-based operational effectiveness simulation model known as the Deepwater Maritime Operational Effectiveness Simulation (DMOES).
Using three different capacity models, the Coast Guard generated three different versions of the asset mix needed to meet Coast Guard performance targets. The resulting force structures were then modeled in DMOES to project their operational effectiveness. The results of this assessment led the Coast Guard to change the asset mix for its revised Deepwater plan. We found that DMOES, which provided important evidence for Deepwater operational effectiveness analyses, contains reliable information for decision making. Specifically, our review of various statistical aspects of DMOES indicates that the parameters used in the DMOES model—the targets, missions, weather events, and probability of target detection present in the Deepwater environment—appear to be the result of a thorough and rigorous process that enhanced the model's credibility. In performing our review of DMOES, we reviewed computer simulation model criteria developed by an authority in the field of simulation modeling and found that the DMOES model successfully addressed these criteria. For example, the parameters used were derived from historical events (e.g., target detection or weather events), which helped satisfy the criterion that interactions between the modeled system and the outside environment be considered. To ensure use of valid and current data for its major updates of DMOES, the Coast Guard gathered updated historical data and compared these data to data from past events. Further, because the Coast Guard modeled target detection capabilities for the assets at less than their full potential, the assets' target detection capabilities do not appear to be overstated. In addition, independent authorities, in their reviews of DMOES, have assessed the model and have accredited it for force structure planning.
For example, the MITRE Corporation, in an independent analysis of the performance gap analysis process (of which DMOES was a key component), found that the process and the resulting analytic results were “likely the most complete and comprehensive campaign-level study conducted by any uniformed service in recent times.” Further, the Coast Guard submitted DMOES to a verification, validation, and accreditation review monitored and facilitated by the Joint Accreditation Support Activity. The DMOES Accreditation Review Board, consisting of Coast Guard officials and external experts in the field of military force structure determinations and capability-based planning, conducted the actual review and accredited the DMOES model for acquisition support and force structure planning.

Other Factors Also Affected Revised Deepwater Asset Mix

While the capability and capacity gaps identified in the performance gap analysis process were a key input into the decisions leading to the revised Deepwater asset mix, they were not the only factor. The Coast Guard also shaped the Deepwater asset mix based on budget considerations and information developed by an internal working group. In particular, Coast Guard officials stated that affordability was a key factor in shaping the revised Deepwater asset mix. According to the officials, Deepwater was never intended to be an unconstrained acquisition program, and the $24 billion force structure was determined through a process of modeling the performance of anticipated asset mixes, weighed against expected funding levels over the life of the program, to arrive at an optimal balance of performance and affordability. As a result, the revised Deepwater asset mix was developed to maximize the system's capabilities and capacities within this $24 billion budget. The officials added that while the $24 billion budget may not allow for all desired capabilities on each asset, capabilities are being designed for later refit, where applicable.
Further, in April 2004, the Assistant Commandant for Operations Capability commissioned an Aviation Legacy Alternatives Working Group to analyze possible alternatives to the aviation force structure in the original Deepwater plan. This working group provided key data used to enhance the performance gap analysis process. For example, as a result of the working group's analyses, the Coast Guard decided to convert and upgrade two of its four legacy aircraft (HC-130 and HH-60) and replace only the HU-25. The Coast Guard deemed this strategy the most cost-effective solution for meeting Deepwater mission requirements. According to the Coast Guard, other alternatives added significant capacity, but at a greater cost.

Model Indicates That the Revised Asset Mix Will Provide Improved Mission Performance over the Original Plan

The most recent DMOES runs conducted by the Coast Guard, published in October 2005, project that the revised Deepwater asset mix will provide “a significant improvement in traditional Coast Guard mission performance” compared to the original Deepwater asset mix. This marked the first time that the Coast Guard used DMOES to model the operational effectiveness of the revised Deepwater asset mix. According to the Coast Guard, the projected improvement in the overall mission performance of the revised asset mix is due mainly to increased maritime surveillance aircraft and, more specifically, to the converted HC-130, which will be present in greater numbers and with greater capabilities than the comparable long-range surveillance aircraft from the original Deepwater plan. Table 4 provides a brief summary of the results of our analysis of the latest DMOES modeling in terms of how the revised asset mix is projected to improve performance for the Coast Guard's various Deepwater missions. Appendix I provides more details on our analysis of the latest DMOES modeling.
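To make the modeling discussion above more concrete, the following is a purely illustrative sketch, not the Coast Guard's DMOES model, of the kind of Monte Carlo projection an operational-effectiveness simulation performs: draw targets and weather conditions, then estimate the share of targets a candidate asset mix detects. All asset names, hours, and probabilities below are invented for illustration.

```python
import random

# Hypothetical asset mix: name -> (patrol hours available, base probability
# of detecting a given target when on scene). Invented numbers.
ASSET_MIX = {
    "maritime_patrol_aircraft": (1000, 0.70),
    "offshore_patrol_cutter": (2400, 0.45),
}

def simulate_detection_rate(asset_mix, n_targets=10_000, bad_weather_prob=0.25, seed=42):
    """Project the share of targets detected by at least one asset.

    Bad weather halves every asset's detection probability, mirroring how an
    effectiveness model folds environmental conditions into its parameters.
    A fixed seed makes the projection reproducible.
    """
    rng = random.Random(seed)
    total_hours = sum(hours for hours, _ in asset_mix.values())
    detected = 0
    for _ in range(n_targets):
        weather_penalty = 0.5 if rng.random() < bad_weather_prob else 1.0
        for hours, p_detect in asset_mix.values():
            # Weight each asset's chance of encountering the target by its
            # share of total patrol hours (a crude stand-in for coverage).
            coverage = hours / total_hours
            if rng.random() < coverage * p_detect * weather_penalty:
                detected += 1
                break
    return detected / n_targets

rate = simulate_detection_rate(ASSET_MIX)
print(f"Projected detection rate: {rate:.1%}")
```

Rerunning the simulation with alternative asset mixes and comparing the projected rates is, in miniature, how a force-structure study weighs candidate mixes against performance targets.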
Coast Guard Is Exploring Options for Applying Further Modeling for Projecting Coast Guard-wide Performance Capabilities

Though DMOES was an accredited, rigorous simulation model effective in supporting Deepwater force structure planning, it does not capture the impact of non-Deepwater asset contributions and, therefore, does not provide a means for the Coast Guard to estimate the extent to which its entire fleet of aircraft and vessels will allow it to meet Coast Guard-wide GPRA performance targets. The Coast Guard is aware of this limitation and is exploring options for expanding DMOES to encompass all Coast Guard assets—both Deepwater and non-Deepwater—in an effort to provide for a true analysis of Coast Guard-wide mission performance capabilities. While this has not yet occurred, Coast Guard officials told us they were reasonably confident that the cumulative effect of merging the revised Deepwater assets with its non-Deepwater assets would allow the Coast Guard to meet GPRA targets for those missions involving Deepwater aircraft and vessels. In the interim, the Coast Guard has taken steps to measure the impact of Deepwater assets on Deepwater-related metrics. Since 2002, the Coast Guard has annually reviewed—and plans to continue reviewing—the most recent complete year's worth of data and has estimated the Deepwater-only contribution toward meeting performance goals for seven particular performance metrics. These performance metrics and results for the most recent year available are shown in table 5. Disaggregating performance data to reflect Deepwater-only contributions provides an estimate of the extent to which the Deepwater fleet is helping the Coast Guard meet these key targets on an annual basis. For example, as a result of these efforts, the Coast Guard determined that Deepwater assets saved 92.6 percent of lives at risk after Coast Guard notification in fiscal year 2004, slightly below the Deepwater asset target value of 93 percent.
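The disaggregation step described above reduces to simple arithmetic once outcomes are split into Deepwater-worked cases and all other cases. The sketch below is hedged: only the 93 percent target and the reported 92.6 percent result come from the report; the underlying case counts are hypothetical, chosen merely to reproduce the reported figure.

```python
def deepwater_only_rate(saved, at_risk):
    """Share of lives saved in cases worked by Deepwater assets."""
    return saved / at_risk

# Hypothetical FY2004-style counts for Deepwater-worked cases only.
rate = deepwater_only_rate(saved=926, at_risk=1000)
target = 0.93  # Deepwater asset target from the report

print(f"Deepwater-only rate: {rate:.1%} vs. target {target:.0%}")
print("target met" if rate >= target else "slightly below target")
```

The same calculation, repeated per metric with Deepwater-attributed counts, yields the table 5-style annual comparison against each Deepwater asset target.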
Progress Continues in Making Recommended Improvements

Our past concerns about the Deepwater program have been in three main areas—ensuring better program management and contractor oversight, ensuring greater accountability on the part of the system integrator, and creating sufficient competition to help act as a control on costs—and we made a total of 11 recommendations to address these concerns. During our 2005 review, we determined that the Coast Guard had addressed and fully implemented 2 of these 11 recommendations. The Coast Guard disagreed with and declined to implement a separate recommendation that pertained to updating its cost baseline to determine whether the Deepwater acquisition approach is costing more than a conventional acquisition approach. While we stand behind our original recommendation, we decided not to pursue it further because the Coast Guard determined that the cost to implement this recommendation was excessive. Thus, at the time we began our current review, 8 of the 11 recommendations were not yet fully implemented. On the basis of information we gathered for this review, we consider 3 of these 8 recommendations to be fully implemented. The Coast Guard is in the process of taking actions to implement 3 more recommendations, but full implementation is dependent on seeing results or completion of actions that are not yet in final form. The 2 remaining recommendations, both relating to overall program management and oversight, remain problematic. One relates to improving the effectiveness of integrated product teams, the other to providing field personnel with guidance and training on transitioning to new Deepwater assets. In each case, the Coast Guard has taken actions, but our review of program reports and our discussions with program and field personnel indicate the problems still remain. In all cases, however, the steps needed to fully implement these recommendations seem relatively clear.
Table 6 provides an overview of the 11 recommendations. The sections below discuss the recommendations made in each of the three areas of concern, describing the initial issue that led to the recommendation, the steps taken to date to address it, and our rationale for considering the recommendation as being fully implemented or not. Where we make a determination that a recommendation has not yet been implemented, we indicate what actions are needed.

Coast Guard's Efforts to Improve Oversight and Program Management Show Mixed Results

We continue to see mixed results in the Coast Guard's efforts to improve oversight and management of the Deepwater program. The Coast Guard has put in place a human capital plan to help ensure adequate staffing of the Deepwater program and has taken actions to improve the effectiveness of integrated product teams. However, subcontractor collaboration and the provision of guidance to field personnel on transitioning to new Deepwater assets, particularly as it pertains to maintenance and logistics responsibilities, continue to need additional attention.

Put in Place a Human Capital Plan to Ensure Adequate Staffing of the Deepwater Program

Original issue: As early as 2001, we noted that difficult human capital challenges would need to be addressed, including the need to recruit and train sufficient staff to manage and oversee the Deepwater contract. Reviewing this matter again in 2004, we found that the Coast Guard had not funded the number of staff requested by the Deepwater program and had not adhered to the processes outlined in its human capital plan for addressing turnover of Deepwater officials, particularly Coast Guard personnel. These staffing shortfalls contributed to problems in making timely decisions and keeping pace with the workload.

Steps taken: The Coast Guard took several steps to address this issue.
Its initial steps involved hiring contractors to assist with program support functions, shifting some positions from being staffed by military personnel to civilian personnel to mitigate turnover risk, and identifying the hard-to-fill positions and developing recruitment plans specifically for them. Subsequent to these changes, the Deepwater program's executive officer (1) approved a revised human capital plan in February 2005 emphasizing workforce planning and (2) is developing ways to leverage institutional knowledge as staff rotate out of the Deepwater program. The Coast Guard plans to review the human capital plan annually to ensure continual alignment between human capital management and actual program performance. The Coast Guard has also placed added emphasis on staffing when formulating the program's budget request—for example, in adding contracting officers and specialists. Finally, the Coast Guard has worked closely with the Department of Homeland Security and the Defense Acquisition University to provide training for Deepwater personnel.

Recommendation status: The steps the Coast Guard has taken appear sufficient to address matters related to adequately staffing the Deepwater program and mitigating turnover, and therefore we consider this recommendation to be fully implemented.

Strengthening Integrated Product Teams

Original issue: Effective management of the Deepwater program depends heavily on strong collaboration among the Coast Guard, the system integrator, and the subcontractors. Integrated product teams (IPTs), the Coast Guard's primary tool for managing the Deepwater program, overseeing contractor activities, and ensuring collaboration, have experienced difficulty from the outset.
IPTs, which are generally chaired by a subcontractor representative and consist of members representing the subcontractors and the Coast Guard, are responsible for overall program planning and management, asset integration, and overseeing delivery of specific Deepwater assets. In 2004, we reported that these teams were struggling to carry out their missions because of four major issues: lack of timely charters to provide the authority needed for decision making, inadequate communication among team members, high turnover, and insufficient training.

Steps taken: In 2005, we found that all IPTs had charters and their members had received entry-level training. Decision making, however, continued to be largely compartmented. Since then, the Coast Guard has established domain management teams to serve as oversight and conflict resolution entities for the IPTs. According to Coast Guard officials, these teams are also to enhance collaboration on issues that cut across several IPTs. Monthly assessments show IPTs have continued to improve their effectiveness across all performance measures.

Recommendation status: While the Coast Guard has taken some actions, we do not believe the actions are sufficient to consider the recommendation to be fully implemented because there are indications that collaboration among subcontractors remains inconsistent. Last year we pointed out that ICGS's two major subcontractors, Lockheed Martin and Northrop Grumman, were operating under their own management systems and that this approach could lessen the likelihood that a system-of-systems outcome would be successfully achieved. During our current review, Coast Guard performance monitors and the program's executive officer reported that collaboration among the subcontractors continues to be problematic and that ICGS wields little influence to compel decisions among them.
For example, when dealing with proposed design changes to assets under construction, ICGS submits the changes as two separate proposals from the two first-tier subcontractors rather than coordinating the separate proposals into one coherent plan. According to Coast Guard performance monitors, this approach complicates the Coast Guard's review of the needed design change because the two proposals often carry overlapping work items, thereby forcing the Coast Guard to act as the system integrator in these situations.

Providing Field Personnel with Guidance and Training on Transitioning to New Deepwater Assets

Original issue: In 2004, we found the Coast Guard had not effectively communicated decisions on (1) how new Deepwater and existing assets are to be integrated during the transition and (2) whether Coast Guard or contractor personnel (or a combination of the two) will be responsible for maintenance of the Deepwater assets. For example, Coast Guard field personnel, including senior-level operators and naval engineering support command officials, said they had not received information about how they would be able to continue accomplishing their missions using existing assets while also being trained on the new assets.

Steps taken: The Coast Guard has taken some steps to improve the level of communication between the Deepwater program and field operators and maintenance personnel. A November 2004 analysis of the Deepwater program's communication process, conducted in coordination with the National Graduate School, found that the communication and feedback process was inadequate. Since then, the Coast Guard has placed more emphasis on outreach to field personnel, including surveys, face-to-face meetings, and presentations.
More recently, officials from the Atlantic and Pacific Area Commands, Maintenance and Logistics Commands, and the Aircraft Repair and Supply Center agreed that Deepwater program officials have significantly improved the frequency and types of information flowing from the program office to the field. In addition, field personnel are members of several IPTs and working groups, and ICGS has placed liaisons at several field locations.

Recommendation status: While the Coast Guard has taken some actions, there are indications that the actions are not yet sufficient to consider the recommendation to be fully implemented. In particular, our review of relevant documents and our discussions with key personnel make clear that field operators and maintenance personnel are still concerned that their views are not adequately acknowledged and addressed, and that they have little information about maintenance and logistics plans for the new Deepwater assets. For example, though the first National Security Cutter is to be delivered in August 2007, field and maintenance officials have yet to receive information on plans for crew training, necessary shore facility modifications, or how maintenance and logistics responsibilities will be divided between the Coast Guard and ICGS. According to Coast Guard officials, many of these decisions need to be made and communicated very soon in order to allow for proper planning and preparation in advance of the cutter's delivery.

More Time Needed to Determine Adequacy of Steps Taken to Improve System Integrator Accountability

Unlike actions on the previous recommendations, Coast Guard actions to provide better input from Coast Guard performance monitors and to hold the system integrator more accountable for performance appear to be largely sufficient. We cannot determine whether the Coast Guard has implemented several of our recommendations in this area, however, until more Deepwater assets are delivered and the results of these actions can be assessed.
Providing Better Input from Coast Guard Performance Monitors

Original issue: In 2004, we reported that the Coast Guard's award fee evaluation of the first year of ICGS's performance was based on unsupported calculations and relied heavily on subjective judgments. Rating procedures used by Coast Guard performance monitors were inconsistent, as were procedures for calculating scores, leading to questions about whether the award fee decision was well supported.

Actions taken: The Coast Guard has provided additional guidance and training to performance monitors, better allowing them to link their comments with specific examples within their respective areas of responsibility. The Coast Guard has also improved the consistency of the format that performance monitors use to provide input about the system integrator's performance and revised assessment criteria to more clearly differentiate between objective measures (that is, those developed using automated tools and compared against defined standards) and subjective evaluations. Weights have been assigned to each set of evaluation factors, and the Coast Guard continues to adjust these factors to achieve an appropriate balance between automated results and eyewitness observations.

Recommendation status: The Coast Guard's efforts to provide better guidance and training, improve the consistency of the format for performance monitors' input, and clarify performance assessment criteria appear sufficient for addressing the issue, and therefore we consider this recommendation to be fully implemented.

Holding the System Integrator Accountable for Improving Effectiveness of the Integrated Product Teams

Original issue: In 2004, we found that the system integrator, whose subcontractors chaired the IPT working groups, was not being held accountable for IPT effectiveness in its performance assessments.
Actions taken: The Coast Guard changed award fee measures to place additional emphasis on the system integrator's responsibility for making the IPTs effective. Award fee criteria now incorporate the administration, management commitment, collaboration, training, and empowerment of these teams.

Recommendation status: With IPTs' performance now included in the criteria for measuring the system integrator's performance, we consider this recommendation to be fully implemented.

Establishing a Time Frame for Putting Steps in Place to Measure Contractor's Progress toward Improving Operational Effectiveness

Original issue: In 2001, the Coast Guard set a goal of developing measures, within 1 year after contract award, to conduct annual assessments of the system integrator's progress toward achieving the three overarching goals of the Deepwater program, including increased operational effectiveness. In 2004, we found that the time frame for the first review of the contractor's performance against the Deepwater goals had slipped. The former Deepwater chief contracting officer told us that he anticipated that the metrics would be in place in the fourth year of the contract, the same year the Coast Guard would decide whether or not to extend the contract.

Steps taken: The Coast Guard has since developed modeling capabilities—namely, the DMOES model discussed earlier—to simulate the effect of the new assets' capabilities on the Coast Guard's ability to meet its missions. Coast Guard officials told us that they are now beginning to track the operational effectiveness of the Deepwater program using both the DMOES model and actual mission performance data. Further, at the Coast Guard's request, the Center for Naval Analyses developed a tool to measure the “presence” of Deepwater assets—that is, the number of square miles of ocean in which Deepwater aircraft and vessels can detect, identify, and prosecute targets.
In addition, Coast Guard officials have begun using mission performance data from 2004, the most recent year of complete information, to measure the contribution provided by Deepwater systems or assets in seven mission areas: search and rescue, cocaine seizure rate, illegal or undocumented migrant interdiction, foreign fishing vessel interdiction, protection of living marine resources, national defense/military readiness, and international ice patrol. Coast Guard officials acknowledge that this is difficult, though, because the data on mission results and accomplishments do not differentiate between Deepwater assets and non-Deepwater assets. Coast Guard officials said doing so should become easier as more Deepwater assets come on line and as analytical tools are refined.

Recommendation status: Although the models have been developed and are being refined to measure operational effectiveness, there are too few Deepwater assets currently in operation to effectively measure the system integrator's actual performance in improving operational effectiveness. As a result, we do not consider this recommendation to be fully implemented. We recognize, though, that as more Deepwater assets and systems come on line, the amount of data will increase and the analytical tools will become more refined, so that the Coast Guard should be in a better position to (1) discern the Deepwater program's contribution to operational effectiveness and (2) fully implement this recommendation.

Establishing Criteria to Determine When to Adjust the Project Baseline and Document the Reasons for Change

Original issue: Establishing a solid baseline against which to measure progress in lowering total ownership cost (TOC) is critical to holding the system integrator accountable.
However, during our 2004 review, we found that the Coast Guard's Deepwater TOC baseline had been significantly changed from what had been originally envisioned and that further changes could be made as a result of variables such as fuel costs or vessels' operating tempo. At the time, Coast Guard officials explained that proposed changes to the baseline would be approved by the program executive officer on a case-by-case basis, though the Coast Guard had not developed criteria for potential upward or downward adjustments to the baseline.

Steps taken: In response to our concerns, the Coast Guard began using criteria from its Major Systems Acquisition Manual as the basis for adjusting the TOC baseline. These criteria allow the baseline to be adjusted based on significant changes in mission requirements, schedule changes, or project funding, or for specific congressional actions. Coast Guard officials told us that they have also added criteria for making changes to the baseline, such as insufficient program funding, inflationary pressure that exceeds the plan's assumptions, and natural disasters or periods of national emergency that require a deviation from the baseline's cost, schedule, or performance parameters. Coast Guard officials said that approval of revisions to the program's overall baseline must come through approved decision memorandums from the Agency Acquisition Executive, who is the Vice Commandant of the Coast Guard. The Deepwater Program Executive Officer still has authority to approve baseline revisions at the asset and domain level. Depending on their severity, baseline changes now are also subject to review and approval by the Department of Homeland Security (DHS), the Coast Guard's parent agency. The Coast Guard is required to submit Deepwater program baseline information to DHS on a quarterly basis, and the project is subject to an annual review by the DHS Investment Review Board.
According to DHS officials, a baseline breach of 8 percent or more would require that the Coast Guard provide information on the causal factors and propose corrective actions to rectify the breach. The officials added that, if the baseline breach is considered significant, the Office of Management and Budget is to be notified that the program will have to undergo a rebaselining and that its funding profile will need to be altered. Further, as a result of its latest review of the Deepwater program, the DHS Investment Review Board has asked that, in addition to overall program baseline information, the Coast Guard also provide baseline information for each of the Deepwater assets. This will provide DHS with more insight into the program's cost, schedule, and performance.

Recommendation status: The Coast Guard's steps, combined with DHS's oversight requirements, should be sufficient to resolve this issue. At present, however, DHS's policy directive is only in draft form. We will consider this recommendation to be fully implemented when the management directive is finalized.

Effects of Steps Taken to Control Future Costs through Competition Will Take Time to Assess

The Coast Guard has taken a number of actions to address the remaining recommendation in this area, which relates to holding the system integrator accountable for ensuring competition among subcontractors. However, until the effects of these actions are more apparent, we are not able to consider the recommendation as being implemented.

Developing a Plan for Holding the System Integrator Accountable for Ensuring Adequate Competition among Suppliers

Original issue: Competition is a key component for controlling costs in the Deepwater program and a guiding principle for DHS's major acquisitions.
In 2004, we found that beyond the initial 5-year contract period, the Coast Guard had no way to ensure competition was occurring because it did not have mechanisms in place to measure the extent of competition or to hold the system integrator accountable for steps taken to achieve competition. Shortly before, the system integrator had adopted Lockheed Martin’s “open business model” as a corporate policy to help ensure competition and keep costs under control. However, the open business model is not a formal policy involving specific decision points to ensure that competition will be considered. Further, the first-tier subcontractors, Lockheed Martin and Northrop Grumman, have largely continued to follow their own procurement procedures and guidance for determining whether competition will occur and the suppliers who will be invited to compete for Deepwater assets. Steps taken: To address our recommendation about ensuring out-year competition among second-tier suppliers, the Coast Guard contracted with Acquisition Solutions, Inc. (ASI), to assess the amount of second-tier competition conducted by ICGS during 2004. ASI issued a report in May 2005 that, among other things, found that the open business model had not been fully embraced by Northrop Grumman despite its being an ICGS corporate policy. The report made nine recommendations aimed at improving competition throughout the Deepwater program. According to Deepwater officials, ICGS developed a plan to adopt all nine recommendations by March 1, 2006, and is providing training on use of the open business model to Northrop Grumman personnel working on the Deepwater program. 
Further, Coast Guard officials reported that competition will be assessed as part of the subjective criteria in the award fee assessment for the fifth year of the contract, and that, during the award term decision process later this year, the Coast Guard will specifically examine the system integrator’s ability to control costs by assessing the degree to which competition is fostered at the major subcontractor level. Recommendation status: While steps already under way appear to be sufficient to resolve our concerns, we cannot consider this recommendation as being fully implemented until the Coast Guard has addressed the ASI recommendations and the results of the next award term assessment are known. Concluding Observations The Coast Guard has done a commendable job of adapting the Deepwater program to post-September 11 realities. Our analysis shows that Coast Guard officials used sound analytical methods to assess the revised needs for aircraft and vessels. Coast Guard officials have also made strong efforts to address concerns about program management and contract performance and have largely implemented, or are in the process of implementing, steps that would help mitigate these concerns. We agree that the Coast Guard would be well served to continue developing ways to use its computer modeling to establish clear relationships between its mix of assets—aircraft and vessels—and its Deepwater and agency-level performance goals. We have pointed out in past reports that the Coast Guard lacks clear measures of how its resources are linked to achieving performance goals, so these steps should help resolve this concern. We realize that this ongoing effort will likely take some time to complete. While the Coast Guard has made good progress in addressing our recommendations, there are aspects of the Deepwater program that will require continued attention. 
First, the Deepwater program continues to face a degree of underlying risk, in part because of the unique approach involving a system-of-systems approach with the contractor acting as overall integrator, and in part because it is so heavily tied to precise year-to-year funding requirements over the next two decades. Further, a project of this magnitude will likely continue to experience other concerns and challenges beyond those that have emerged so far. It will be important for Coast Guard managers to continue careful monitoring of contractor performance and to continue addressing program management concerns as they arise. Agency Comments We requested comments on a draft of this report from the Department of Homeland Security and the U.S. Coast Guard. The U.S. Coast Guard provided technical comments, which have been incorporated into the report as appropriate. We are providing copies of this report to the Secretary of the Department of Homeland Security, the Commandant of the U.S. Coast Guard, and interested congressional committees. The report will also be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (415) 904-2200 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. 
Appendix I: Objectives, Scope, and Methodology This report, which focuses on the Coast Guard’s Deepwater management challenges, provides details on three issues: (1) a comparison of the revised Deepwater implementation plan issued in August 2005 with the original (August 2002) plan in terms of cost, time frames, and the balance of legacy and replacement assets; (2) an assessment of the degree to which the operational effectiveness model and other analytical methods used by the Coast Guard to develop the revised Deepwater asset mix are sound and appropriate for such a purpose; and (3) an assessment of the progress made in implementing our prior recommendations regarding Deepwater program management. To compare the revised Deepwater implementation plans issued in August 2005 and February 2006 with the original (August 2002) Deepwater implementation plan in terms of cost, time frames, and the balance of legacy and replacement assets, we analyzed the original and revised Deepwater implementation plans and related guidance. We also reviewed and analyzed relevant Coast Guard documentation on changes in missions, costs, asset mix, asset capabilities, and asset delivery schedules. We supplemented the documentation reviews and analyses with discussions with officials from the Deepwater Program Executive Office. Finally, we discussed the risks associated with the Deepwater program’s reliance on a sustained level of funding through 2027 and the implications of these risks. To assess the degree to which the operational effectiveness models and other analytical methods used to develop the revised Deepwater asset mix are sound and appropriate for such a purpose, we reviewed the capacity and operational effectiveness models used in determining the current Deepwater asset mix to ensure that the approach was sound and that appropriate assumptions were made in the models’ use. 
This review involved assessing Coast Guard documentation on how its models were developed and executed, determining the views of knowledgeable independent parties on the Coast Guard’s operational effectiveness model, and interviewing cognizant Coast Guard officials. These interviews also included discussions of how these models, and other factors, were used in developing the current Deepwater asset mix, as well as whether the Coast Guard has developed an approach for determining the extent to which the Deepwater asset mix will allow it to meet its performance targets. In assessing the Coast Guard’s modeling and other analytical methods used for developing the revised Deepwater asset mix, we paid particular attention to the most recent performance gap analysis (PGA) study (PGA IV), which compared the projected performance of the revised Deepwater asset mix to that of the original Deepwater asset mix—so that we could gain a better understanding of how these results were used in developing the revised Deepwater asset mix. Specifically, we reviewed the report’s methodology and requested additional clarifying information to help determine if the analytic work supported the report’s conclusions. As part of our assessment, we developed an analysis that departs from what the Coast Guard describes in its report in two important ways. First, and most important, the Coast Guard assigned a linear scale ranging from 1 to 5 to five statistical categories describing the distribution of the analysis data. This range assigned numerical values to the degree to which the revised asset mix was projected to outperform (or underperform) the original asset mix, with 1 representing projected performance two or more standard deviations below that of the original asset mix, up to 5, representing projected performance two or more standard deviations above that of the original asset mix. 
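The spacing between adjacent categories on such a scale can be examined with standard normal tail probabilities. The sketch below is illustrative only: it assumes normally distributed performance results and is not a reconstruction of the exact weighting scheme used in the analysis.

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variable, via the complementary
    error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Probability of a result at least 1 vs. at least 2 standard deviations
# above the mean.
p1 = upper_tail(1.0)
p2 = upper_tail(2.0)

# A linear scale treats the top category (>= 2 standard deviations above)
# as only one step "better" than the next (1 to 2 standard deviations
# above); the tail probabilities show the categories are far from equally
# spaced in likelihood.
difficulty_ratio = p1 / p2
```

This is the intuition behind weighting the categories by z-score-based quantities rather than by equally spaced integers.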
It is our opinion that this type of linear scale is not appropriate for capturing the variations in projected performance. Accordingly, we used a weighting scheme for these categories (known as z-scores) that better reflects the relationship among these categories. The z-scores take into account the statistical property that being two standard deviations away from the mean value is almost five times more difficult than being one standard deviation away from the mean. Second, we compared our calculated performance measure weights to a standard in order to assess whether our weighting scheme would affect the study’s conclusions. Since the methodology identified three mission significance categories and four regional mission priority categories, we compared our recalculated weights based upon the z-score with the weights we would expect to see if all mission performance measures across all mission priorities for the four modeled regions had exceeded one standard deviation above the mean in improvement. Despite the different methodologies used, our results generally aligned with what the Coast Guard reported in PGA IV. To determine the status of the Coast Guard’s implementation of our prior recommendations for improving program management, strengthening contractor accountability, and controlling costs, we reviewed and analyzed briefings and relevant documentation provided by the Deepwater Program Executive Office on actions taken to address our concerns. We reviewed and analyzed documentation on the Coast Guard’s assessment of the contractor’s system integration and management performance in the first period of the fourth year of the contract, including written comments by the performance monitors. We also reviewed and analyzed information on Deepwater integrated product teams, including membership lists and briefings provided by the Coast Guard on measures of effectiveness for the teams. 
We analyzed the Coast Guard’s plans to increase communications to field operators, and documentation from field operators and maintenance personnel regarding these communications. Further, we analyzed the February 2005 Deepwater revised Human Capital Plan to identify changes that have been made and discussed Deepwater Program Office staffing plans with Coast Guard officials. To supplement our analyses of the relevant documentation, we held several meetings with the Deepwater Program Executive Officer, the Deputy Program Executive Officer, and a number of Deepwater staff, including contracting officials and representatives from the system integrator. We also held discussions with Coast Guard Deepwater performance monitors to discuss their written comments to the latest award fee assessment. We also held discussions with officials from the Pacific Area Command and Pacific Area Maintenance and Logistics Command in Alameda, California; the Atlantic Area Command and Atlantic Area Maintenance and Logistics Command in Norfolk, Virginia; and the Aircraft Repair and Supply Center in Elizabeth City, North Carolina. Further, we reviewed acquisition guidance and spoke with officials from the Department of Homeland Security regarding their oversight of the Deepwater acquisition program baseline. We performed our review from August 2005 to March 2006 in accordance with generally accepted government auditing standards. Appendix II: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Steven Calvo, Christopher Conrad, Adam Couvillion, Christine Davis, Art James, Julie Leetch, Michele Mackin, Stan Stenersen, and Linda Kay Willard made key contributions to this report. GAO Related Products Coast Guard’s Acquisition Management: Deepwater Project’s Justification and Affordability Need to Be Addressed More Thoroughly, GAO/RCED-99-6 (Washington, D.C.: Oct. 26, 1998). 
Coast Guard: Budget Challenges for 2001 and Beyond, GAO/T-RCED-00-103 (Washington, D.C.: March 15, 2000). Coast Guard: Progress Being Made on Deepwater Project, but Risks Remain, GAO-01-564 (Washington, D.C.: May 2, 2001). Coast Guard: Actions Needed to Mitigate Deepwater Project Risks, GAO-01-659T (Washington, D.C.: May 3, 2001). Coast Guard: Strategy Needed for Setting and Monitoring Levels of Effort for All Missions, GAO-03-155 (Washington, D.C.: Nov. 12, 2002). Coast Guard: Comprehensive Blueprint Needed to Balance and Monitor Resource Use and Measure Performance for All Missions, GAO-03-544T (Washington, D.C.: March 12, 2003). Coast Guard: Challenges during the Transition to the Department of Homeland Security, GAO-03-594T (Washington, D.C.: April 1, 2003). Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight, GAO-04-380 (Washington, D.C.: March 9, 2004). Coast Guard: Replacement of HH-65 Helicopter Engine, GAO-04-595 (Washington, D.C.: March 24, 2004). Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond, GAO-04-636T (Washington, D.C.: April 7, 2004). Coast Guard: Deepwater Program Acquisition Schedule Update Needed, GAO-04-695 (Washington, D.C.: June 14, 2004). Coast Guard: Observations and Agency Priorities in Fiscal Year 2006 Budget Request, GAO-05-364T (Washington, D.C.: March 17, 2005). Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges, GAO-05-307T (Washington, D.C.: April 20, 2005). Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges, GAO-05-651T (Washington, D.C.: June 21, 2005). Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain, GAO-05-757 (Washington, D.C.: July 22, 2005).
The Deepwater program was designed to produce aircraft and vessels that would function in the Coast Guard's traditional at-sea roles. After the terrorist attacks of September 11, 2001, however, the Coast Guard began taking on additional homeland security missions, and so it revised the Deepwater implementation plan to provide assets that could better meet these new responsibilities. While many acknowledge that the Coast Guard's aging assets need replacement or renovation, concerns exist about the approach the Coast Guard adopted in launching the Deepwater program. The subsequent changes in the program's asset mix and delivery schedules only increased these concerns. This report (1) compares the revised Deepwater implementation plans with the original plan in terms of the assets to be replaced or modified, and the time frames and costs for doing so; (2) assesses the degree to which the operational effectiveness model and other analytical methods used by the Coast Guard to develop the revised Deepwater asset mix are sound and appropriate for such a purpose; and (3) assesses the progress made in implementing GAO's prior recommendations regarding program management. GAO is not making any new recommendations in this report. The revised Deepwater implementation plans change the balance between new and legacy assets, alter the delivery schedule for some assets, lengthen the overall acquisition schedule by 5 years, and increase the projected program cost from $17 billion to $24 billion. The higher cost generally relates to upgrading assets to reflect added homeland security mission requirements. Upgrades to vessels account for the single largest area of increase, with upgrades to command, control, communications, and other capabilities being second highest. In contrast, because the revised plans upgrade rather than replace most legacy aircraft and reduce the number of unmanned aircraft, the cost for Deepwater aircraft drops. 
The revised plans, like the original plan, are heavily dependent on receiving full funding each year. Coast Guard officials state that a shortfall in funding in any year could substantially increase total costs. The Coast Guard's analytical methods were appropriate for determining if the revised asset mix would provide greater mission performance and whether the mix is appropriate for meeting Deepwater missions. GAO and other independent experts found the Coast Guard's methods were reliable for assessing the effects of changing the asset mix and a Department of Defense review board facilitated accreditation of the Coast Guard's approach. Because the model has proved useful for guiding Coast Guard decisions on the proper asset mix for achieving Deepwater performance goals, the Coast Guard is considering ways to expand the model to guide decisions on meeting its Coast Guard-wide performance goals. Actions by the Coast Guard and the system integrator have fully implemented three of the eight GAO recommendations that were not fully addressed during GAO's review in 2005, and three more recommendations appear to be nearly implemented. The remaining two have unresolved concerns, but the Coast Guard is taking steps to resolve them. A program of this size, however, will likely experience other challenges beyond those that have emerged so far, making continued monitoring by the Coast Guard important.
Background The Strategy lays out three high-level goals to prepare for and respond to an influenza pandemic: (1) stop, slow, or otherwise limit the spread of a pandemic to the United States; (2) limit the domestic spread of a pandemic and mitigate disease, suffering, and death; and (3) sustain infrastructure and mitigate impact on the economy and the functioning of society. These goals are underpinned by three pillars that are intended to guide the federal government’s approach to a pandemic threat: (1) preparedness and communication, (2) surveillance and detection, and (3) response and containment. Each pillar describes domestic and international efforts, animal and human health efforts, and efforts that would need to be undertaken at all levels of government and in communities to prepare for and respond to a pandemic. The Plan is intended to support the broad framework and goals articulated in the Strategy by outlining specific steps that federal departments and agencies should take to achieve these goals. It also describes expectations regarding preparedness and response efforts of state and local governments and tribal entities and the private sector. The Plan’s chapters cover categories of actions that are intended to address major considerations raised by a pandemic, including protecting human and animal health; transportation and borders; and international, security, and institutional considerations. The Plan is not intended to describe the operational details of how federal departments and agencies would accomplish their objectives to support the Strategy. 
Rather, these operational details are supposed to be included in the departments’ and agencies’ pandemic implementation plans along with additional considerations raised during a pandemic involving (1) protection of employees, (2) maintenance of essential functions and services, and (3) the manner in which departments and agencies would communicate messages about pandemic planning and respond to their stakeholders. All-Hazards Emergency Management Policies Provide the Overarching Context for the Strategy and Plan The Homeland Security Act of 2002 required the newly established DHS to develop a comprehensive National Incident Management System (NIMS) and a comprehensive NRP. NIMS and the NRP are intended to provide an integrated all-hazards approach to emergency incident management. As such, they are expected to form the basis of the federal response to a pandemic. NIMS defines “how” to manage an emergency incident. It defines roles and responsibilities of federal, state, and local responders for emergency incidents regardless of the cause, size, or complexity of the situation. Its intent is to establish a core set of concepts, principles, terminology, and organizational processes to enable effective, efficient, and collaborative emergency incident management at all levels. The NRP, on the other hand, defines “what” needs to be done to manage an emergency incident. It is designed to integrate federal government domestic prevention, protection, response, and recovery plans into a single operational plan for all hazards and all emergency response disciplines. Using the framework provided by NIMS, the NRP is intended to provide the structure and mechanisms for national-level policy and operational direction for domestic incident management where federal support is necessary. States may need federal assistance in the event of a pandemic to maintain essential services. 
Upon receiving such requests, the President may issue emergency or major disaster declarations pursuant to the Robert T. Stafford Disaster Relief and Emergency Assistance Act of 1974 (the Stafford Act). The Stafford Act primarily establishes the programs and processes for the federal government to provide major disaster and emergency assistance to state and local governments and tribal nations, individuals, and qualified private nonprofit organizations. Federal assistance may include technical assistance, the provision of goods and services, and financial assistance, including direct payments, grants, and loans. FEMA is responsible for carrying out the functions and authorities of the Stafford Act. The Secretary of Health and Human Services also has authority, under the Public Health Service Act, to declare a public health emergency and to take actions necessary to respond to that emergency consistent with his/her authorities. These actions may include making grants, entering into contracts, and conducting and supporting investigations into the cause, treatment, or prevention of the disease or disorder that caused the emergency. The Secretary’s declaration may also initiate the authorization of emergency use of unapproved products or approved products for unapproved uses as well as waiving of certain HHS regulatory requirements. The NRP, as revised in May 2006, applies to all incidents requiring a coordinated federal response. The most severe of these incidents, termed Incidents of National Significance, must be personally declared and managed by the Secretary of Homeland Security. According to the Plan, the Secretary of Homeland Security may declare a pandemic an Incident of National Significance, perhaps as early as when an outbreak occurs in foreign countries but before the disease reaches the United States. 
In addition to the base response plan, the NRP has 31 annexes consisting of 15 Emergency Support Function (ESF) annexes, 9 support annexes, and 7 incident annexes. The ESFs are the primary means through which the federal government provides support to state, local, and tribal governments, and the ESF structure provides a mechanism for interagency coordination during all phases of an incident—some departments and agencies may provide resources during the early stages, while others would be more prominent in supporting recovery efforts. The ESFs group capabilities and resources into the functions that are most likely needed during actual or potential incidents where coordinated federal response is required. Of the 15 ESF annexes, ESF-8, the public health and medical services ESF, would be the primary ESF used for the public health and medical care aspects of a pandemic involving humans. Although HHS is the lead agency for ESF-8, the ESFs are carried out through a “unified command” approach and several other federal agencies, including the Departments of Agriculture, Defense, Energy, Homeland Security (and the U.S. Coast Guard), Justice, and Labor, are specifically supporting agencies. ESF-11 pertains to agriculture and natural resources, and its purpose includes control and eradication of an outbreak of a highly contagious or economically devastating animal/zoonotic disease including avian influenza. The purpose of ESF-11 is to ensure, in coordination with ESF-8, that animal/veterinary/wildlife issues in natural disasters are supported. The Departments of Agriculture and the Interior share responsibilities as primary agencies for this ESF. FEMA has or shares lead responsibility for several of the ESFs, including those that would be applicable during a pandemic. 
For example, FEMA is the lead agency for ESF-5 (emergency management), ESF-6 (mass care, housing, and human services), and ESF-14 (long-term community recovery and mitigation) and is the primary agency for ESF-15 (external affairs). Additionally, FEMA is responsible for carrying out the functions and authorities of the Stafford Act. The incident annexes describe the policies, situations, concept of operations, and responsibilities pertinent to the type of incident in question. Included among the seven incident annexes within the NRP is the Catastrophic Incident Annex. The Catastrophic Incident Annex could be applicable to a pandemic influenza as it applies to any incident that results in extraordinary levels of mass casualties, damage, or disruption severely affecting the population, infrastructure, environment, economy, national morale, and/or government functions. The NRP also addresses two key leadership positions in the event of a Stafford Act emergency or major disaster. One official, the FCO, who can be appointed by the Secretary of Homeland Security on behalf of the President, manages and coordinates federal resource support activities related to Stafford Act disasters and emergencies. The other official, the PFO, is designated by the Secretary of Homeland Security to facilitate federal support to established incident command structures and to coordinate overall federal incident management and assistance activities across the spectrum of prevention, preparedness, response, and recovery. The PFO is to provide a primary point of contact and situational awareness for the Secretary of Homeland Security. While the PFO is supposed to work closely with the FCO during an incident, the PFO has no operational authority over the FCO. The Executive Branch Has Taken Other Steps to Prepare for a Pandemic The executive branch has also developed tools and guidance to aid in preparing for and responding to a pandemic influenza. 
Among these are the following: A Web site, www.pandemicflu.gov, to provide one-stop access to U.S. government avian and pandemic influenza information. This site is managed by HHS. Planning checklists for state and local governments, businesses, schools, community organizations, health care providers, and individuals and families. As of July 2007, there were 16 checklists included on the Web site. Interim planning guidance for state, local, tribal, and territorial communities on nonpharmaceutical interventions (i.e., other than vaccines and drug treatment) to mitigate an influenza pandemic. This guidance, called the Interim Pre-pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States, includes a Pandemic Severity Index to characterize the severity of a pandemic, provides planning recommendations for specific interventions for a given level of pandemic severity, and suggests when those interventions should be started and how long they should be used. In March 2006, FEMA issued guidance for federal agencies to revise their Continuity of Operations (COOP) Plans to address pandemic threats. COOP plans are intended to ensure that essential government services are available in emergencies. We testified in May 2006 on the need for agencies to adequately prepare their telework capabilities for use during a COOP event. In September 2006, DHS issued guidance to assist owners and operators of critical infrastructure and key resources to prepare for a localized outbreak, as well as a broader influenza pandemic. In addition to these tools and guidance, other actions included HHS grant awards totaling $350 million to state and local governments for pandemic planning and more than $1 billion to accelerate development and production of new technologies for influenza vaccines within the United States. 
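The Pandemic Severity Index mentioned above characterizes a pandemic in categories based on its case fatality ratio, much as hurricanes are assigned categories by wind speed. The sketch below is illustrative: the cutoffs shown are the commonly cited ones from the CDC interim guidance, and the function name is ours; confirm thresholds against the guidance itself before relying on them.

```python
def pandemic_severity_category(case_fatality_ratio):
    """Map a case fatality ratio (fraction of cases that are fatal) to a
    Pandemic Severity Index category from 1 (least severe) to 5 (most
    severe), using commonly cited CDC interim-guidance cutoffs."""
    if case_fatality_ratio < 0.001:
        return 1  # comparable to seasonal influenza
    elif case_fatality_ratio < 0.005:
        return 2
    elif case_fatality_ratio < 0.01:
        return 3
    elif case_fatality_ratio < 0.02:
        return 4
    return 5  # comparable to the 1918 pandemic
```

Under the guidance, the recommended nonpharmaceutical interventions, and how long they should remain in place, scale with the category this index assigns.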
Federal Government Leadership Roles and Responsibilities Need Clarification and Testing While the Strategy and Plan describe the broad roles and responsibilities for preparing for and responding to a pandemic influenza, they do little to clarify existing emergency response roles and responsibilities. Instead, the documents restate the shared roles and responsibilities of the Secretaries of Health and Human Services and Homeland Security already prescribed by the NRP and related annexes and plans. These and other leadership roles and responsibilities continue to evolve, such as with the establishment of a national PFO and regional PFOs and FCOs and potential changes from ongoing efforts to revise the NRP. Congress has also passed legislation to address prior problems that emerged regarding federal leadership roles and responsibilities for emergency management that have ramifications for pandemic influenza. Although pandemic influenza scenarios have been used to exercise specific response elements, such as the distribution of stockpiled medications at specific locations or jurisdictions, no national exercises have tested the new federal leadership structure for pandemic influenza. The only national multisector pandemic exercise to date was a tabletop simulation conducted by members of the cabinet in December 2005, which was prior to the release of the Plan and the establishment of the PFO and FCO positions for a pandemic. The Strategy and Plan Do Not Clarify Leadership Roles and Responsibilities The Strategy and Plan do not clarify the specific leadership roles and responsibilities for a pandemic. Instead, they restate the existing leadership roles and responsibilities, particularly for the Secretaries of Homeland Security and Health and Human Services, prescribed in the NRP—an all-hazards plan for emergencies ranging from hurricanes to wildfires to terrorist attacks. 
However, the leadership roles and responsibilities prescribed under the NRP may need to operate somewhat differently because of the characteristics of a pandemic that distinguish it from other emergency incidents. For example, because a pandemic influenza is likely to occur in successive waves, planning has to consider how to sustain response mechanisms for several months to over a year—issues that are not clearly addressed in the Plan. In addition, the distributed nature of a pandemic, as well as the sheer burden of disease across the nation, means that the support states, localities, and tribal entities can expect from the federal government would be limited in comparison to the aid it mobilizes for geographically and temporally bounded disasters like earthquakes and hurricanes. Consequently, legal authorities, roles and responsibilities, and lines of authority at all levels of government must be clearly defined, effectively communicated, and well understood to facilitate rapid and effective decision making. This is also important for public and private sector organizations and international partners so everyone can better understand what is expected of them before and during a pandemic. The Strategy and Plan describe the Secretary of Health and Human Services as being responsible for leading the medical response in a pandemic, while the Secretary of Homeland Security is responsible for overall domestic incident management and federal coordination. However, since a pandemic extends well beyond health and medical boundaries, to include sustaining critical infrastructure, private sector activities, the movement of goods and services across the nation and the globe, and economic and security considerations, it is not clear when, in a pandemic, the Secretary of Health and Human Services would be in the lead and when the Secretary of Homeland Security would lead. 
Specifically, the Plan states that the Secretary of Health and Human Services, consistent with his/her role under the NRP as the coordinator for ESF-8, would be responsible for the overall coordination of the public health and medical emergency response during a pandemic, including coordinating all federal medical support to communities; providing guidance on infection control and treatment strategies to state, local, and tribal entities and the public; maintaining, prioritizing, and distributing countermeasures in the Strategic National Stockpile; conducting ongoing epidemiologic assessment and modeling of the outbreak; and researching the influenza virus, novel countermeasures, and rapid diagnostics. The Plan calls for the Secretary to be the principal federal spokesperson for public health issues, coordinating closely with DHS on public messaging pertaining to the pandemic. Also similar to the NRP, the Plan states that the Secretary of Homeland Security, as the principal federal official for domestic incident management, would be responsible for coordinating federal operations and resources; establishing reporting requirements; and conducting ongoing communications with federal, state, local, and tribal governments, the private sector, and nongovernmental organizations. It also states that in the context of response to a pandemic, the Secretary of Homeland Security would coordinate overall nonmedical support and response actions, sustain critical infrastructure, and ensure necessary support to the Secretary of Health and Human Services’ coordination of public health and medical emergency response efforts. 
Additionally, the Plan states that the Secretary of Homeland Security would be responsible for coordinating the overall response to the pandemic; implementing policies that facilitate compliance with recommended social distancing measures; providing for a common operating picture for all departments and agencies of the federal government; and ensuring the integrity of the nation’s infrastructure, domestic security, and entry and exit screening for influenza at the borders. Other DHS responsibilities include operating and maintaining the National Biosurveillance Integration System, which is intended to provide an all-source biosurveillance common operating picture to improve early warning capabilities and facilitate national response activities through better situational awareness. This responsibility, however, appears to be both a public health issue and an overall incident management issue, raising similar issues about the interrelationship of DHS and HHS roles and responsibilities. In addition, a pandemic could threaten our critical infrastructure, such as the capability to deliver electricity or food, by removing essential personnel from the workplace for weeks or months. Whether this would be considered a medical response with the Secretary of Health and Human Services in the lead, or would fall under the Secretary of Homeland Security’s leadership as part of his/her responsibility for ensuring that critical infrastructure is protected, is unclear. According to HHS officials we interviewed, resolving this ambiguity will depend on several factors, including how the outbreak occurs and the severity of the pandemic. Officials from other agencies also need greater clarity about these roles and responsibilities. For example, USDA is not planning for DHS to assume the lead coordinating role if an outbreak of avian flu among poultry occurs that is sufficient in scope to warrant a presidential declaration of an emergency or major disaster.
The federal response may be slowed as agencies resolve their roles and responsibilities following the onset of a significant outbreak. In addition, although DHS and HHS officials emphasize that they are working together on a frequent basis, these roles and responsibilities have not been thoroughly tested and exercised. Additional Key Leadership Roles and Responsibilities Are Evolving and Untested The executive branch has several efforts, some completed and others under way, to strengthen and clarify leadership roles and responsibilities for preparing for and responding to a pandemic influenza. However, many of these efforts are new, untested through exercises, or both. For example, on December 11, 2006, the Secretary of Homeland Security predesignated the Vice Commandant of the U.S. Coast Guard as the national PFO for pandemic influenza, and also established five pandemic regions, each with a regional PFO. Also, FCOs were predesignated for each of the regions. In addition to the five regional FCOs, a FEMA official with significant FCO experience has been selected to serve as the senior advisor to the national PFO. DOD has selected Defense Coordinating Officers and HHS has selected senior health officials to work together within this national pandemic influenza preparedness and response structure. DHS is taking steps to further clarify federal leadership roles and responsibilities. Specifically, it is developing a Federal Concept Plan for Pandemic Influenza, which is intended to identify specific federal response roles and responsibilities for each stage of an outbreak.
According to DHS, the Concept Plan, which is based on the Implementation Plan and other related documents, would also identify “seams and gaps that must be addressed to ensure integration of all federal departments and agencies prior to, during, and after a pandemic outbreak in the U.S.” According to DHS officials, they sent a draft to federal agencies in May for comment and have not yet determined when the Concept Plan will be issued. U.S. Coast Guard and FEMA officials we met with recognized that planning for and responding to a pandemic would require different operational leadership roles and responsibilities than for most other emergencies. For example, a FEMA official said that given the number of people who would be involved in responding to a pandemic, collaboration between HHS, DHS, and FEMA would need to be greater than for any other past emergencies. Officials are starting to build relationships among the federal actors for a pandemic. For example, some of the federal officials with leadership roles for an influenza pandemic met during the week of March 19, 2007, to continue to identify issues and begin developing solutions. One of the participants, however, told us that although additional coordination meetings are needed, it may be challenging since there is no dedicated funding for the staff working on pandemic issues to participate in these and other related meetings. The national PFO for pandemic influenza said that a draft charter has also been developed to establish a Pandemic Influenza PFO Working Group to help identify and address many policy and operational issues before a pandemic. According to a FEMA official, some of these issues include staff availability, protective measures for staff, and how to ensure that the assistance to be provided under the Stafford Act is implemented and coordinated in a unified and consistent manner across the country during a pandemic. 
As of June 7, 2007, the draft charter was undergoing some revisions and was expected to be sent to the Secretary of Homeland Security for review and approval around the end of June. Additionally, there are plans to identify related exercises, within and outside of the federal government, to create a consolidated schedule of exercises for the national PFO for pandemic influenza and the regional PFOs and FCOs to participate in by leveraging existing exercise plans. DHS officials said that they expect FEMA would retain responsibility for maintaining this consolidated schedule. It is unclear whether the newly established national and regional positions for a pandemic will further clarify leadership roles. For example, in 2006, DHS made revisions to the NRP and released a Supplement to the Catastrophic Incident Annex—both designed to further clarify federal roles and responsibilities and relationships among federal, state, and local governments and responders. However, we reported in February 2007 that these revisions had not been tested and there was little information available on the extent to which these and other actions DHS was taking to improve readiness were operational. Additionally, DHS is currently coordinating a comprehensive review of the NRP and NIMS to assess their effectiveness, identify improvements, and recommend modifications. One of the issues expected to be addressed during this review is clarifying the roles and responsibilities of key structures, positions, and levels of government, including the role of the PFO and that position’s current lack of operational authority during an emergency. The review is expected to be completed, and a revised NRP and NIMS issued, by the summer of 2007.
Recent Congressional Actions Addressed Leadership Roles and Responsibilities In 2006, Congress passed two acts addressing leadership roles and responsibilities for emergency management—the Pandemic and All-Hazards Preparedness Act and the Post-Katrina Emergency Management Reform Act of 2006—which were enacted into law on December 19, 2006, and October 4, 2006, respectively. Pandemic and All-Hazards Preparedness Act and Its Implementation The Pandemic and All-Hazards Preparedness Act codifies the federal leadership roles and responsibilities for public health and medical emergency preparedness and response that are now in the NRP by designating the Secretary of Health and Human Services as the lead federal official for public health and medical preparedness and response, consistent with the NRP. The act also requires the Secretary to establish an interagency agreement, in collaboration with DOD, DHS, DOT, the Department of Veterans Affairs, and other relevant federal agencies, prescribing that consistent with the NRP, HHS would assume operational control of emergency public health and medical response assets in the event of a public health emergency. Further, the act requires that the Secretary develop a coordinated National Health Security Strategy and accompanying implementation plan for public health emergency preparedness and response. This health security strategy and accompanying implementation plan are to be completed by 2009 and updated every 4 years. The act also prescribes several new preparedness responsibilities for HHS. For example, the Secretary must develop and disseminate criteria for an effective state plan for responding to a pandemic influenza. Additionally, the Secretary is required to develop and require the application of measurable evidence-based benchmarks and objective standards that measure the levels of preparedness in such areas as hospitals and state and local public health security.
The act seeks to further strengthen HHS’s public health leadership role by transferring the National Disaster Medical System from DHS back to HHS, thus placing these public health resources within HHS. It also creates the Office of the Assistant Secretary for Preparedness and Response (replacing the Office of the Assistant Secretary for Public Health Emergency Preparedness) and consolidates other preparedness and response functions within HHS in the new Assistant Secretary’s office. HHS has set up an implementation team involving over 200 HHS staff to implement the provisions of this act. According to an HHS official, an interim implementation plan is expected to be made available for public comment sometime during the summer of 2007. Post-Katrina Reform Act and Its Implementation In response to the findings and recommendations from several reports, the Post-Katrina Emergency Management Reform Act (referred to as the Post-Katrina Reform Act in this report) designated the FEMA Administrator as the principal domestic emergency management advisor to the President, the HSC, and the Secretary of Homeland Security. Therefore, the FEMA Administrator also has a leadership role in preparing for and responding to an influenza pandemic, including key areas such as planning and exercising. For example, under the Post-Katrina Reform Act, the FEMA Administrator is responsible for carrying out a national exercise program to test and evaluate preparedness for a national response to natural and man-made disasters. The act made FEMA a distinct entity within DHS for leading and supporting the nation in a risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation. As part of the reorganization, DHS transferred several offices and divisions of its National Preparedness Directorate to FEMA, including the Offices of Grants and Training and National Capital Region Coordination.
FEMA’s National Preparedness Directorate contains functions related to preparedness doctrine, policy, and contingency planning and includes DHS’s exercise coordination and evaluation program and emergency management training. Other transfers included the Chemical Stockpile Emergency Preparedness Division, Radiological Emergency Preparedness Program, and the United States Fire Administration. The reorganization took effect on March 31, 2007, and it will likely take some time before it is fully implemented and key leadership positions within FEMA are filled. Rigorous and Robust Exercises Are Important for Testing Federal Leadership for a Pandemic Disaster planning, including for a pandemic influenza, needs to be tested and refined with a rigorous and robust exercise program to expose weaknesses in plans and allow planners to refine them. Exercises—particularly for the type and magnitude of emergency incidents such as a severe influenza pandemic for which there is little actual experience—are essential for developing skills and identifying what works well and what needs further improvement. Our prior work examining the preparation for and response to Hurricane Katrina highlighted the importance of realistic exercises to test and refine assumptions, capabilities, and operational procedures and to build upon strengths. In response to the experiences during Hurricane Katrina, the Post-Katrina Reform Act called for a national exercise program to evaluate preparedness for a national response to natural and man-made disasters. While pandemic influenza scenarios have been used to exercise specific response elements and locations, such as for distributing stockpiled medications, there has been no national exercise to test a multisector, multijurisdictional response or any exercises to test the working and operational relationships of the national PFO and the five regional PFOs and FCOs for pandemic influenza.
According to a CRS report, the only national multisector pandemic exercise to date was a tabletop simulation involving members of the federal cabinet in December 2005. This tabletop exercise was prior to the release of the Plan in May 2006, the establishment of a national PFO and regional PFO and FCO positions for a pandemic, and enactment of the Pandemic and All-Hazards Preparedness Act in December 2006 and the Post-Katrina Reform Act in October 2006. The National Strategy and Its Implementation Plan Do Not Address All the Characteristics of an Effective Strategy, Thus Limiting Their Usefulness as Planning Tools The Strategy and Plan represent important efforts to guide the nation’s preparedness and response activities, setting forth actions to be taken by federal agencies and expectations for a wide range of actors, including states and communities, the private sector, global partners, and individuals. However, the Strategy and Plan do not address all of the characteristics of an effective national strategy as we identified in our prior work. While national strategies necessarily vary in content, the six characteristics we identified apply to all such planning documents and can help ensure that they are effective management tools. Gaps and deficiencies in these documents are particularly troubling in that a pandemic represents a complex challenge that will require the full understanding and collaboration of a multitude of entities and individuals. The extent to which these documents, which are intended to provide an overall framework for ensuring preparedness for and response to an influenza pandemic, fail to adequately address key areas could have a critical impact on whether the public and key stakeholders have a clear understanding of, and can effectively execute, their roles and responsibilities. As shown in table 3, the Strategy and its Plan address one of the six characteristics of an effective national strategy.
However, they only partially address four and do not address one of the characteristics at all. As a result, the Strategy and Plan fall short as an effective national strategy in important areas. The Strategy and Plan Partially Address Purpose, Scope, and Methodology A national strategy should address its purpose, scope, and methodology, including the process by which it was developed, stakeholder involvement, and how it compares and contrasts with other national strategies. Addressing this characteristic helps make a strategy more useful to organizations responsible for implementing the strategy, as well as those responsible for oversight. We found that the Strategy and Plan partially address this characteristic by describing their purpose and scope. However, neither document adequately described its methodology for involving key stakeholders, its relationship to other national strategies, or a process for updating the Plan. In describing its purpose, the Strategy states that it was developed to provide strategic direction for the departments and agencies of the U.S. government and guide the U.S. preparedness and response activities to mitigate the impact of a pandemic. In support of the Strategy, the Plan states that its purpose is to translate the Strategy into tangible action and direct federal departments and agencies to take specific, coordinated steps to achieve the goals of the Strategy and outline expectations for state, local, and tribal entities; businesses; schools and universities; communities; nongovernmental organizations; and international partners. As a part of its scope, the Plan identifies six major functions: (1) protecting human health, (2) protecting animal health, (3) international considerations, (4) transportation and borders, (5) security considerations, and (6) institutional considerations.
The Plan proposes that departments and agencies undertake a series of actions in support of these functional areas with operational details on how departments would accomplish these objectives to be provided by separate departmental plans. Additionally, the Strategy and Plan describe the principles and planning assumptions that guided their development. The Strategy’s guiding principles include recognition of the private sector’s integral role and leveraging global partnerships. The Plan’s principles are more expansive, listing 12 planning assumptions that it identifies as facilitating its planning efforts. For example, 1 of the assumptions is that illness rates would be highest among school-aged children (about 40 percent). Another element under this characteristic is the involvement of key stakeholders in the development of the strategy. Neither the Strategy nor the Plan described the involvement of key stakeholders, such as state, local, and tribal entities, in the development of the Strategy or Plan, even though they would be on the front lines in a pandemic and the Plan identifies actions they should complete. The Plan contains 17 actions calling for state, local, and tribal governments to lead national and subnational efforts, and identifies another 64 actions where their involvement is needed. Officials told us that federal stakeholders had opportunities to review and comment on the Plan but that state, local, and tribal entities were not directly involved, although the drafters of the Plan were generally aware of their concerns. Stakeholder involvement during the planning process is important to ensure that the federal government’s and nonfederal entities’ responsibilities and resource requirements are clearly understood and agreed upon.
Therefore, the Strategy and Plan may not fully reflect a national perspective on this critical national issue since nonfederal stakeholders were not involved in the process to develop the actions where their leadership, support, or both would be needed. Further, these nonfederal stakeholders need to understand their critical roles in order to be prepared to work effectively under difficult and challenging circumstances. Both documents address the scope of their coverage and include several important elements in their discussions, but do not address how they compare and contrast with other national strategies. The Strategy recognizes that preparing for a pandemic is more than a purely federal responsibility, and that the nation must have a system of plans at all levels of government and in all sectors outside of government that can be integrated to address the pandemic threat. It also extends its scope to include the development of an international effort as a central component of overall capacity. The Strategy lays out the major functions, mission areas, and activities considered under the extent of its coverage. For example, the Strategy’s scope is defined as extending well beyond health and medical boundaries, to include sustaining critical infrastructure, private sector activities, the movement of goods and services across the nation and the globe, and economic and security considerations. Although the Strategy states that it will be consistent with the National Security Strategy and the National Strategy for Homeland Security, it does not specify how they are related. The Plan mentions the NRP and states that it will guide the federal pandemic response. Because a pandemic would affect all facets of our society, including the nation’s security, it is important to recognize and reflect an understanding of how these national strategies relate to one another.
The Plan does not describe a mechanism for updating it to reflect policy decisions, such as clarifications in leadership roles and responsibilities, lessons learned from exercising and testing, or other changes. Although the Plan was developed with the intent of being initial guidance and being updated and expanded over time, officials in several agencies told us that specific processes or time frames for updating and revising it have not been established. In addition to incorporating lessons learned, such updates are important in ensuring that the Plan accurately reflects entities’ capabilities and a clear understanding of roles and responsibilities. An update would also provide an opportunity for input from nonfederal entities that have not been able to directly contribute to the Strategy and Plan. Strategy and Plan Address Problem Definition and Risk Assessment National strategies need to reflect a clear description and understanding of the problems to be addressed, their causes, and operating environment. In addition, the strategy should include a risk assessment, including an analysis of the threats to and vulnerabilities of critical assets and operations. We found that the Strategy and Plan address this characteristic by describing the potential problems associated with a pandemic as well as potential threats and vulnerabilities. In defining the problem, both documents provide information on what a pandemic is and how influenza viruses are transmitted, and explain that a threat stems from an unprecedented outbreak of avian influenza in Asia and Europe, caused by the H5N1 strain of the influenza A virus. The President, in releasing the Strategy, stated that it presented an approach to address the threat of pandemic influenza, whether it results from the strain currently in birds in Asia or another influenza virus. Additionally, the problem definition includes a historical perspective of other pandemics in the United States.
The Plan used the severity of the 1918 influenza pandemic as the basis for its risk assessment. A CBO study was used to describe the possible economic consequences of such a severe pandemic on the U.S. economy today. While the Plan did not discuss the likelihood of a severe pandemic or analyze the possibility of whether the H5N1 strain would be the specific virus strain to cause a pandemic, it stated that history suggests that a pandemic would occur some time in the future. As a result, it recognizes the importance of preparing for an outbreak. The Strategy and Plan included discussions of the constraints and challenges involved in a pandemic. For example, the Plan included challenges such as severe shortfalls in surge capacity in the nation’s health care facilities, limited vaccine production capabilities, the lack of real-time surveillance among most of the systems, and the inability to quantify the value of many infection control strategies. In acknowledging the challenges involved in pandemic preparedness, the Plan also describes a series of circumstances to enable preparedness, such as viewing pandemic preparedness as a national security issue, connectivity between communities, and communicating risk and responsibility. In this regard, the Plan recognizes that one of the nation’s greatest vulnerabilities is the lack of connectivity between communities responsible for pandemic preparedness. The Plan specifically cites vulnerabilities in coordination of efforts between the animal and human health communities, as well as between the public health and medical communities. In the case of public health and medical communities, the public health community has responsibility for communitywide health promotion and disease prevention and mitigation efforts, and the medical community is largely focused on actions at the individual level. 
The Strategy and Plan Partially Address Goals, Objectives, Activities, and Performance Measures A national strategy should describe its goals and the steps needed to achieve those results, as well as the priorities, milestones, and outcome-related performance measures to gauge results. Identifying goals, objectives, and outcome-related performance measures aids implementing parties in achieving results and enables more effective oversight and accountability. We found that the Strategy and Plan partially address this characteristic by identifying the overarching goals and objectives for pandemic planning. However, the documents did not describe relationships or priorities among the action items, and some of the action items lacked a responsible entity for ensuring their completion. The Plan also did not describe a process for monitoring and reporting on the action items. Further, many of the performance measures associated with action items were neither clearly linked with results nor assigned clear priorities. The Strategy and Plan identify a hierarchy of major goals, pillars, functional areas, and specific activities (i.e., action items), as shown in figure 1. The Plan includes and expands upon the Strategy’s framework by including 324 action items. The Plan uses the Strategy’s three major goals that are underpinned by three pillars as its framework and expands on this organizing structure by presenting chapters on six functional areas with various objectives, action items, and performance measures. For example, pillar 2, surveillance and detection, under the transportation and borders functional area, includes an objective to develop and exercise mechanisms to provide active and passive surveillance during an outbreak, both within and outside our borders.
Under this objective is an action item for HHS, in coordination with other specific federal agencies, to develop policy recommendations for transportation and borders entry and exit protocols, screening, or both and to review the need to develop domestic response protocols and screening within 6 months. The item’s performance measure is policy recommendations for response protocols, screening, or both. While some action items depend on other action items, these linkages are not always apparent in the Plan. For example, one action item, concerning the development of a joint strategy for deploying federal health care and public health assets and personnel, is under the preparedness and communication pillar. However, another action item concerning the development of strategic principles for deployment of federal medical assets is under the response and containment pillar within the same chapter. While these two action items are clearly related, the Plan does not make a connection between the two or discuss their relationship. An HHS official who helped draft the Plan acknowledged that while an effort was made to ensure linkages among action items, there may be gaps in the linkages among interdependent action items within and across the Plan’s chapters on the six functional areas (i.e., the chapters that contain action items). Some action items, particularly those that are to be completed by state, local, and tribal governments or the private sector, do not identify an entity responsible for carrying out the action. Although the Plan specifies actions to be carried out by states, local jurisdictions, and other entities, including the private sector, it gives no indication of how these actions will be monitored and how their completion will be ensured.
For example, one such action item states that “all health care facilities should develop and test infectious disease surge capacity plans that address challenges including: increased demand for services, staff shortages, infectious disease isolation protocols, supply shortages, and security.” Similarly, another action item states that “all Federal, State, local, tribal, and private sector medical facilities should ensure that protocols for transporting influenza specimens to appropriate reference laboratories are in place within 3 months.” Yet the Plan does not make clear who will be responsible for making sure that these actions are completed. While most of the action items have deadlines for completion, ranging from 3 months to 3 years, the Plan does not identify a process to monitor and report on the progress of the action items, nor does it include a schedule for reporting progress. Agency officials told us that they had identified individuals to act as overall coordinators to monitor the action items for which their agencies have lead responsibility and provide periodic progress reports to the HSC. However, we could not identify a similar mechanism to monitor the progress of the action items that fall to state and local governments or the private sector. The first public reporting on the status of the action items occurred in December 2006 when the HSC reported on the status of the action items that were to have been completed by November 3, 2006—6 months after the release of the Plan. Of the 119 action items that were to be completed by that time, we found that the HSC omitted the status of 16 action items. Two of the action items that were omitted from the report were to (1) establish an interagency transportation and border preparedness working group and (2) engage in contingency planning and related exercises to ensure preparedness to maintain essential operations and conduct missions.
Additionally, we found that several of the action items that were reported by the HSC as being completed were still in progress. For example, DHS, in coordination with the Department of State (State), HHS, the Department of the Treasury (Treasury), and the travel and trade industry, was to tailor existing automated screening programs and extended border programs to increase scrutiny of travelers and cargo based on potential risk factors within 6 months. The measure of performance was to implement enhanced risk-based screening protocols. Although this action item was reported as complete, the HSC reported that DHS was still developing risk-based screening protocols, a major component of this action. A DHS official, responsible for coordinating the completion of DHS-led action items, acknowledged that all action items are a work in progress and that they would continue to be improved, including those items that were listed as completed in the report. The HSC’s report included a statement that a determination of “complete” does not necessarily mean that work has ended; in many cases work is ongoing. Instead, the complete determination means that the measure of performance associated with an action item was met. It appears that this determination has not been consistently or accurately applied for all items. Our recent report on U.S. agencies’ international efforts to forestall a pandemic influenza also reported that eight of the Plan’s international-related action items included in the HSC’s report either did not directly address the associated performance measure or did not indicate that the completion deadline had been met. Most of the Plan’s performance measures are focused on activities such as disseminating guidance, but the measures are not always clearly linked with intended results. 
This lack of clear linkages makes it difficult to ascertain whether progress has in fact been made toward achieving the national goals and objectives described in the Strategy and Plan. Most of the Plan’s performance measures consist of actions to be completed, such as guidance developed and disseminated. Without a clear linkage to anticipated results, these measures of activities do not give an indication of whether the purpose of the activity is achieved. Further, 18 of the action items have no measure of performance associated with them. In addition, the Plan does not establish priorities among its 324 action items, which becomes especially important as agencies and other parties strive to effectively manage scarce resources and ensure that the most important steps are accomplished. The Strategy and Plan Do Not Address Resources, Investments, and Risk Management A national strategy needs to describe what the strategy will cost; identify where resources will be targeted to achieve the maximum results; and describe how the strategy balances benefits, risks, and costs. Guidance on costs and resources needed using a risk management approach helps implementing parties allocate resources according to priorities, track costs and performance, and shift resources, as appropriate. We found that neither the Strategy nor the Plan contains these elements. While neither document addresses the overall cost to implement the Plan, the Plan refers to the administration’s budget request of $7.1 billion and a congressional appropriation of $3.8 billion to support the objectives of the Strategy. In November 2005, the administration requested $7.1 billion in emergency supplemental funding over 3 years to support the implementation of the Strategy. In December 2005, Congress appropriated $3.8 billion to support budget requirements to help address pandemic influenza issues. 
The Plan states that much of this funding would be directed toward domestic preparedness and the establishment of countermeasure stockpile and production capacity, with $400 million directed to bilateral and multilateral international efforts. However, the 3-year, $7.1 billion budget proposal does not coincide with the period of the Plan. Additionally, while the Plan does not allocate funds to specific action items, our analysis of budget documents indicates that the funds were allocated primarily toward those action items related to vaccines and antivirals. Developing and sustaining the capabilities stipulated in the Plan would require the effective use of federal, state, and local funds. Given that funding needs may not be readily addressed through existing mechanisms and could stress existing government and private resources, it is critical for the Plan to lay out funding requirements. For example, the Plan states that one of the primary objectives of domestic vaccine production capacity would be for domestic manufacturers to produce enough vaccine for the entire U.S. population within 6 months. However, it states that production capacity would depend on the availability of future appropriations. Despite the fact that the production of enough vaccine for the population would be critical if a pandemic were to occur, the Plan does not provide even a rough estimate of how much the vaccine could cost for consideration in future appropriations. Moreover, despite the numerous action items and specific implementing directives and guidance directed toward federal agencies, states, organizations, and businesses, neither document addresses what it would cost to complete the actions that are stipulated. 
Rather, the Plan states that local communities would have to address the medical and nonmedical effects of the pandemic with available resources, and also that pandemic influenza response activities may exceed the budgetary resources of responding federal and state government agencies. The overall uncertainty of funding to complete action items stipulated in the Plan has been problematic. For example, there were more than 50 actions in the Plan that were to be completed before the end of 2006 for which DOD was either a lead or support agency. We reported that because DOD had not yet requested funding, it was unclear whether DOD could address the tasks assigned to it in the Plan and pursue its own preparedness efforts for its workforce departmentwide within current resources. The Strategy and Plan Partially Address Organizational Roles, Responsibilities, and Coordination A national strategy should address which organizations would implement the strategy, their roles and responsibilities, and mechanisms for coordinating their efforts. It helps answer the fundamental question of who is in charge, not only during times of crisis but also during all phases of emergency management, and identifies which organizations will provide the overall framework for accountability and oversight. This characteristic entails identifying the specific federal departments, agencies, and offices involved and, where appropriate, the state, local, private, and international sectors. We found that the Strategy and Plan partially address this characteristic by containing broad information on roles and responsibilities. But, as we noted earlier, while the Plan describes coordination mechanisms for responding to a pandemic, it does not clarify how responsible officials would share leadership responsibilities. 
In addition, it does not describe mechanisms for coordinating preparations and completing the action items, nor does it describe an overall accountability and oversight framework. The Strategy identifies lead agencies for preparedness and response. Specifically, HHS is the lead agency for medical response; USDA for veterinary response; State for international activities; and DHS for overall domestic incident management, sustainment of critical infrastructure and key resources, and federal coordination. The Plan also briefly describes the preparedness and response roles and responsibilities of DOD, the Department of Labor, DOT, and Treasury. The Plan states that these and all federal cabinet agencies are responsible for their respective sectors and for developing pandemic response plans. In addition, the Strategy and Plan broadly describe the expected roles and responsibilities of state, local, and tribal governments; international partners; the private and nonprofit sectors; and individuals and families. For example, in the functional area of transportation and borders, the Plan states that it expects state and local communities to involve transportation and health professionals to identify transportation options, consequences, and implications in the event of a pandemic. The Plan states that the primary mechanism for coordinating the federal government’s response to a pandemic is the NRP. In this regard, the Plan acknowledges that sustaining mechanisms for several months to over a year will present unique challenges; thus, day-to-day monitoring of the response to a pandemic influenza would occur through the national operations center, with an interagency body composed of senior decision makers from across the government and chaired by the White House. Additionally, the Plan states that policy issues that cannot be resolved at the department level would be addressed through the HSC-National Security Council policy coordination process. 
As stipulated in the Plan, the specifics of this policy coordination mechanism were included in the May 2006 revisions to the NRP. The Plan also generally identifies lead and support roles for the action items federal agencies are responsible for completing, but it is not explicit in defining these roles or processes for coordination and collaboration. While it identifies which federal agencies have lead and support roles for completing 305 action items, the Plan does not define the roles of the lead and support agencies. Rather, it leaves it to the agencies to interpret and negotiate their roles. According to DOT officials we met with, this lack of clarity, coupled with staff turnover, left them unclear about their roles and responsibilities in completing action items. Thus, they had to seek clarification from DHS and HHS officials to assist them in defining what it meant to be the lead agency for an action item. Additionally, the Plan does not describe specific processes for coordination and collaboration between federal and nonfederal organizations and sectors for completing the action items. Related to this issue, we recently reported that some of DOD’s combatant commands, tasked with providing support in the event of a pandemic, had received limited detailed guidance from the lead agencies about what support they may be asked to provide during a pandemic. This has hindered these commands’ ability to plan to provide support to lead federal agencies domestically and abroad during a pandemic. The Plan also does not describe the role played by organizations that are to provide the overall framework for accountability and oversight, such as the HSC. According to agency officials, the HSC is monitoring executive branch agencies’ efforts to complete the action items. However, there is no specific documentation describing this process or institutionalizing it. This is important since some of the action items are not expected to be completed during this administration. 
Also, a similar oversight process does not appear to exist for those action items for which nonfederal entities have lead responsibility. The Strategy and Plan Partially Address Integration and Implementation A national strategy should make clear how it relates to the goals, objectives, and activities of other strategies and to subordinate levels of government and their plans to implement the strategy. A strategy might also discuss, as appropriate, various strategies and plans produced by state, local, private, and international sectors. A clear relationship between the strategy and other critical implementing documents helps agencies and other entities understand their roles and responsibilities, fosters effective implementation, and promotes accountability. We found that the Strategy and Plan partially address this characteristic. Although the documents mention other related national strategies and plans, they do not provide sufficient detail describing the relationships among these strategies and plans, nor do they describe how subordinate levels of government and the independent plans proposed by the Plan would be integrated to implement the Strategy. Since September 11, 2001, various national strategies, presidential directives, and national initiatives have been developed to better prepare the nation to respond to incidents of national significance, such as a pandemic influenza. As noted in figure 2, these include the National Security Strategy and the NRP. However, although the Strategy states that it is consistent with the National Security Strategy and the National Strategy for Homeland Security, it does not state how it is consistent or describe its relationship with these two strategies. In addition, the Plan does not specifically address how the Strategy or other related pandemic plans should be integrated with the goals, objectives, and activities of the national initiatives already in place. 
Whereas the Plan states that it supports Homeland Security Presidential Directive 8, which required the development of a domestic all-hazards preparedness goal—the National Preparedness Goal (Goal)—it does not describe how it supports the directive or its relationship to the Goal. The current interim Goal is particularly important for determining what capabilities are needed for a catastrophic disaster. It defines 36 major capabilities that first responders should possess to prevent, protect from, respond to, and recover from a wide range of incidents, as well as the most critical tasks associated with these capabilities. An inability to effectively perform these critical tasks would, by definition, have a detrimental effect on protection, prevention, response, and recovery capabilities. The interim Goal also includes 15 planning scenarios, including one for pandemic influenza that outlines universal and critical tasks to be undertaken in planning for an influenza pandemic and target capabilities, such as search and rescue and economic and community recovery. Yet, the Strategy and Plan do not integrate this already-developed planning scenario and related tasks and capabilities. One federal agency official who assisted in drafting the Plan told us that the Goal and its pandemic influenza scenario had been considered but omitted because the Goal’s pandemic influenza scenario is geared to a less severe pandemic—such as those that occurred in 1957 and 1968—while the Plan is based on the more severe 1918-level mortality and morbidity rates. Further, the Strategy and Plan do not provide sufficient detail about how the Strategy, the action items, and the proposed set of independent plans are to be integrated with other national strategies and frameworks. Without clearly providing this linkage, the Plan may limit a common understanding of the overarching framework, thereby hindering the nation’s ability to effectively prepare for, respond to, and recover from a pandemic. 
For example, the Plan contains 39 action items that are response related (i.e., specific actions are to be taken within a prescribed number of hours or days after an outbreak). However, these action items are interspersed among the 324 action items, and the Plan does not describe the linkages of these response-related action items with the NRP or other response-related plans. Further, DHS officials have recognized the need for a common understanding across federal agencies and better integration of agencies’ plans to prepare for and respond to a pandemic. DHS officials are developing a Federal Concept Plan for Pandemic Influenza to enhance interagency preparedness, response, and recovery efforts. The Plan also requires the federal departments and agencies to develop their own pandemic plans that describe the operational details related to the respective action items and cover the following areas: (1) protection of their employees; (2) maintenance of their essential functions and services; (3) how they would support both the federal response to a pandemic and those of states, localities, and tribal entities; and (4) the manner in which they would communicate messages about pandemic planning and response to their stakeholders. Further, it is unclear whether all the departments will share some or all of the information in their plans with nonfederal entities. While some agencies, such as HHS, DOD, and the Department of Veterans Affairs, have publicly released their pandemic plans, at least one agency, DHS, has indicated that it does not intend to publicly release its plan. Since DHS is a lead agency for planning for and responding to a pandemic, this gap may make it more challenging to fully advance joint and integrated planning across all levels of government and the private sector. 
The Plan recognizes and discusses the need for integrating planning across all levels of government and the private sector to ensure that the plans and response actions are complementary, compatible, and coordinated. In this regard, the Plan provides initial planning guidance for state, local, and tribal entities; businesses; schools and universities; and nongovernmental organizations for a pandemic. It also includes various action items that, when completed, would produce additional planning guidance and materials for these entities. However, the Plan is unclear about how the existing guidance relates to broad federal and specific departmental and agency plans, how the additional guidance would be integrated, and how any gaps or conflicts that exist would be identified and addressed. Conclusions Although it is likely that an influenza pandemic will occur in the future, there is a high level of uncertainty about when a pandemic might occur and how severe it would be. The administration has taken an active approach to this potential disaster by establishing an information clearinghouse for pandemic information; developing numerous planning guidelines for governments, businesses, nongovernmental organizations, and individuals; issuing the Strategy and Plan; completing many action items contained in the Plan; and continuing efforts to complete the remaining action items. A pandemic poses some unique challenges. Other disasters, such as hurricanes, earthquakes, or terrorist attacks, generally occur within a short period, and the immediate effects are experienced in specific locations. By contrast, a pandemic would likely occur in multiple waves, each lasting weeks or months and affecting communities across the nation. Initial actions may help limit the spread of an influenza virus, reflecting the importance of a swift and effective response. 
Therefore, the effective exercise of shared leadership roles and responsibilities could have substantial consequences, both in the short and long term. However, these roles and responsibilities continue to evolve, leaving uncertainty about how the federal government would lead preparations for and response to a pandemic. Since the release of the Plan in May 2006, no national pandemic exercises of federal leadership roles and responsibilities have been conducted. Without rigorous testing, training, and exercising, the administration lacks information to determine whether current and evolving leadership roles and responsibilities are clearly defined and understood or whether more changes are needed to ensure clarity. The Strategy and Plan are important because they broadly describe the federal government’s approach and planned actions to prepare for and respond to a pandemic, as well as expectations for states and communities, the private sector, and global partners. Although they contain a number of important characteristics, the documents lack several key elements. As a result, their usefulness as a management tool for ensuring accountability and achieving results is limited. For example, because the Strategy and Plan do not address the resources and investments needed to implement the actions called for, it is unclear what resources are needed to build capacity and whether they would be available. Further, because stakeholders that are expected to be the primary responders to a pandemic were not included in the development of the Strategy and Plan, these documents may not fully reflect a national perspective on this critical national issue, and stakeholders and the public may not have a full understanding of their critical roles. In addition, the linkages among pandemic planning efforts and with all-hazards plans and initiatives need to be clear so that the numerous parties involved can operate in an integrated manner. 
Finally, because many of the performance measures do not provide information about the impacts of proposed actions, it will be difficult to assess the extent to which we are better prepared or to identify areas needing additional attention. Opportunities exist to improve the usefulness of the Plan because it is viewed as an evolving document and is intended to be updated on a regular basis to reflect ongoing policy decisions, as well as improvements in domestic preparedness. Currently, however, time frames or mechanisms for updating the Plan are undefined. While the HSC publicly reported on the status of approximately 100 action items that were to have been completed by November 2006, the Plan lacks a prescribed process for monitoring and reporting on the progress of the action items or what has been accomplished as a result. Therefore, it is unclear when the next report will be issued or how much information will be released. In addition, some of the information reported was incorrect. This lack of transparency makes it difficult to inform a national dialogue on the progress made to date or what further steps are needed. It also inhibits congressional oversight of strategies, funding priorities, and critical efforts to enhance the nation’s level of preparedness. DHS officials believe that their efforts to develop a Federal Concept Plan for Pandemic Influenza may help to more fully address some of the characteristics that we found the Strategy and Plan lack. According to those officials, the proposed Concept Plan may help, for example, better integrate the organizational roles, responsibilities, and coordination of interagency partners. They recognized, however, that the Concept Plan would not fully address all of the gaps we have identified. For example, they told us that the Concept Plan may not address actual or estimated costs or investments of the resources that will be required. 
Overall, they agreed that more needs to be done, especially in view of the long time requirements and challenging issues presented by a potential pandemic influenza. Recommendations for Executive Action To enhance preparedness efforts for a possible pandemic, we are making the following two recommendations: We recommend that the Secretaries of Homeland Security and Health and Human Services work together to develop and conduct rigorous testing, training, and exercises for pandemic influenza to ensure that federal leadership roles are clearly defined and understood and that leaders are able to effectively execute shared responsibilities to address emerging challenges. Once the leadership roles have been clarified through testing, training, and exercising, the Secretaries of Homeland Security and Health and Human Services should ensure that these roles are clearly understood by state, local, and tribal governments; the private and nonprofit sectors; and the international community. We also recommend that the Homeland Security Council establish a specific process and time frame for updating the Implementation Plan for the National Strategy for Pandemic Influenza. The process for updating the Plan should involve key nonfederal stakeholders and incorporate lessons learned from exercises and other sources. The Plan should also be improved by including the following information in the next update: the cost, sources, and types of resources and investments needed to complete the action items and where they should be targeted; a process and schedule for monitoring and publicly reporting on progress made on completing the actions; clearer linkages with other strategies and plans; and clearer descriptions of relationships or priorities among action items and greater use of outcome-focused performance measures. Agency Comments and Our Evaluation We provided a draft of this report to DHS, HHS, and the HSC for review and comment. 
DHS provided written comments, which are reprinted in appendix II. In commenting on the draft report, DHS concurred with the first recommendation and stated that DHS is taking action on many of the shortfalls identified in the report. For example, DHS stated that it is working closely with HHS and other interagency partners to develop and implement a series of coordinated interagency pandemic exercises that will include all levels of government as well as the international community and the private and nonprofit sectors. Additionally, DHS stated that its Incident Management Planning Team intends to use our list of desirable characteristics of an effective national strategy as one of the review metrics for all future plans. DHS also provided us with technical comments, which we incorporated in the report as appropriate. HHS informed us that it had no comments and concurred with the draft report. The HSC did not comment on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. We will then send copies of this report to the appropriate congressional committees and to the Assistant to the President for Homeland Security; the Secretaries of HHS, DHS, USDA, DOD, State, and DOT; and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6543 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
Appendix I: Scope and Methodology Our reporting objectives were to review the extent to which (1) federal leadership roles and responsibilities for preparing for and responding to a pandemic are clearly defined and (2) the National Strategy for Pandemic Influenza (Strategy) and the Implementation Plan for the National Strategy for Pandemic Influenza (Plan) address the characteristics of an effective national strategy. To determine to what extent federal leadership roles and responsibilities for preparing for and responding to a pandemic are clearly defined, we drew upon our extensive body of work on the federal government’s response to hurricanes Katrina and Rita as well as our prior work on pandemic influenza. We also studied the findings in reports issued by Congress, the Department of Homeland Security’s Office of the Inspector General, the Homeland Security Council (HSC), and the Congressional Research Service. Additionally, we reviewed the Strategy and Plan and a variety of federal emergency documents, including the National Response Plan’s base plan and supporting annexes and the implementation plans developed by the Departments of Homeland Security and Health and Human Services. HSC officials declined to meet with us, stating that we should rely upon information provided by agency officials. We interviewed officials in the Departments of Agriculture, Defense, Health and Human Services, Homeland Security, Transportation, and State, as well as officials in the Federal Emergency Management Agency and the U.S. Coast Guard. Some of these officials were involved in the development of the Plan. 
To review the extent to which the Strategy and Plan address the characteristics of an effective national strategy, we analyzed the Strategy and Plan; reviewed key relevant sections of major statutes, regulations, directives, national strategies, and plans discussed in the Plan; and interviewed officials in agencies that the Strategy and Plan identified as lead agencies in preparing for and responding to a pandemic. We assessed the extent to which the Strategy and Plan jointly addressed the six desirable characteristics of an effective national strategy, and the related elements under each characteristic, using the characteristics developed in previous GAO work. Table 4 provides the desirable characteristics and examples of their elements. National strategies with these characteristics offer policymakers and implementing agencies a management tool that can help ensure accountability and more effective results. We have used this methodology to assess and report on the administration’s strategies relating to terrorism, the rebuilding of Iraq, and financial literacy. To assess whether the documents addressed these desirable characteristics, two analysts independently assessed both documents against each of the elements of a characteristic. If the analysts did not agree, a third party reviewed and discussed the assessments and made the final determination on the rating for that element. Each characteristic was given a rating of either “addresses,” “partially addresses,” or “does not address.” According to our methodology, a strategy “addresses” a characteristic when it explicitly cites all, or nearly all, elements of the characteristic and has sufficient specificity and detail. A strategy “partially addresses” a characteristic when it explicitly cites one or a few of the elements of a characteristic and has sufficient specificity and detail. 
It should be noted that the “partially addresses” category includes a range that varies from explicitly citing most of the elements to citing as few as one of the elements of a characteristic. A strategy “does not address” a characteristic when it does not explicitly cite or discuss any elements of a characteristic, any references are either too vague or general to be useful, or both. We reviewed relevant sections of major statutes, regulations, directives, and plans discussed in the Plan to better understand if and how they were related. Specifically, our review included Homeland Security Presidential Directive 5 on the Management of Domestic Incidents; the National Response Plan; and the Robert T. Stafford Disaster Relief and Emergency Assistance Act of 1974 (as amended) as well as other national strategies. We conducted our review from May 2006 through June 2007 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Department of Homeland Security Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the contact named above, Susan Ragland, Assistant Director; Allen Lomax; David Dornisch; Donna Miller; Catherine Myrick; and members of GAO’s Pandemic Working Group made key contributions to this report. Related GAO Products Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007. Emergency Management Assistance Compact: Enhancing EMAC’s Collaborative and Administrative Capacity Should Improve National Disaster Response. GAO-07-854. Washington, D.C.: June 29, 2007. Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007. 
Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007. Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007. The Federal Workforce: Additional Steps Needed to Take Advantage of Federal Executive Boards’ Ability to Contribute to Emergency Operations. GAO-07-515. Washington, D.C.: May 4, 2007. Financial Market Preparedness: Significant Progress Has Been Made, but Pandemic Planning and Other Challenges Remain. GAO-07-399. Washington, D.C.: March 29, 2007. Public Health and Hospital Emergency Preparedness Programs: Evolution of Performance Measurement Systems to Measure Progress. GAO-07-485R. Washington, D.C.: March 23, 2007. Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007. Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006. Hurricane Katrina: Better Plans and Exercises Needed to Guide the Military’s Response to Catastrophic Natural Disasters. GAO-06-643. Washington, D.C.: May 15, 2006. Continuity of Operations: Agencies Could Improve Planning for Telework during Disruptions. GAO-06-740T. Washington, D.C.: May 11, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. Statement by Comptroller General David M. Walker on GAO’s Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006. 
Influenza Pandemic: Applying Lessons Learned from the 2004-05 Influenza Vaccine Shortage. GAO-06-221T. Washington, D.C.: November 4, 2005. Influenza Vaccine: Shortages in 2004-05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005. Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005. Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005. Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004. Emerging Infectious Diseases: Review of State and Federal Disease Surveillance Efforts. GAO-04-877. Washington, D.C.: September 30, 2004. Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004. Emerging Infectious Diseases: Asian SARS Outbreak Challenged International and National Responses. GAO-04-564. Washington, D.C.: April 28, 2004. Public Health Preparedness: Response Capacity Improving but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004. HHS Bioterrorism Preparedness Programs: States Reported Progress but Fell Short of Program Goals for 2002. GAO-04-360R. Washington, D.C.: February 10, 2004. Hospital Preparedness: Most Urban Hospitals Have Emergency Plans but Lack Certain Capacities for Bioterrorism Response. GAO-03-924. Washington, D.C.: August 6, 2003. Severe Acute Respiratory Syndrome: Established Infectious Disease Control Measures Helped Contain Spread, But a Large-Scale Resurgence May Pose Challenges. GAO-03-1058T. Washington, D.C.: July 30, 2003. SARS Outbreak: Improvements to Public Health Capacity Are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003. Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain.
GAO-03-654T. Washington, D.C.: April 9, 2003. Flu Vaccine: Steps Are Needed to Better Prepare for Possible Future Shortages. GAO-01-786T. Washington, D.C.: May 30, 2001. Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001. Influenza Pandemic: Plan Needed for Federal and State Response. GAO-01-4. Washington, D.C.: October 27, 2000. Global Health: Framework for Infectious Disease Surveillance. GAO/NSIAD-00-205R. Washington, D.C.: July 20, 2000.
An influenza pandemic is a real and significant potential threat facing the United States and the world. Pandemics occur when a novel virus emerges that can easily be transmitted among humans who have little immunity. In 2005, the Homeland Security Council (HSC) issued a National Strategy for Pandemic Influenza and, in 2006, an Implementation Plan. Congress and others are concerned about the federal government's preparedness to lead a response to an influenza pandemic. This report assesses how clearly federal leadership roles and responsibilities are defined and the extent to which the Strategy and Plan address six characteristics of an effective national strategy. To do this, GAO analyzed key emergency and pandemic-specific plans, interviewed agency officials, and compared the Strategy and Plan with the six characteristics GAO identified. The executive branch has taken an active approach to help address this potential threat, including establishing an online information clearinghouse, developing planning guidance and checklists, awarding grants to accelerate development and production of new technologies for influenza vaccines within the United States, and assisting state and local government pandemic planning efforts. However, federal government leadership roles and responsibilities for preparing for and responding to a pandemic continue to evolve, and will require further clarification and testing before the relationships of the many leadership positions are well understood. The Strategy and Plan do not specify how the leadership roles and responsibilities will work in addressing the unique characteristics of an influenza pandemic, which could occur simultaneously in multiple locations and over a long period. A pandemic could extend well beyond health and medical boundaries, affecting critical infrastructure, the movement of goods and services across the nation and the globe, and economic and security considerations. 
Although the Department of Health and Human Services' (HHS) Secretary is to lead the public health and medical response and the Department of Homeland Security's (DHS) Secretary is to lead overall nonmedical support and response actions, the Plan does not clearly address these simultaneous responsibilities or how these roles are to work together, particularly over an extended period and at multiple locations across the country. In addition, the Secretary of DHS has designated a national Principal Federal Official (PFO) to facilitate pandemic coordination as well as five regional PFOs and five regional Federal Coordinating Officers. Most of these leadership roles and responsibilities have not been tested under pandemic scenarios, leaving it unclear how they will work. Because initial actions may help limit the spread of an influenza virus, the effective exercise of shared leadership roles and responsibilities could have substantial consequences. However, only one national multisector pandemic-related exercise has been held and that was prior to the issuance of the Plan. While the Strategy and Plan are an important first step in guiding national preparedness, they do not fully address all six characteristics of an effective national strategy. Specifically, they fully address only one of the six characteristics, by reflecting a clear description and understanding of problems to be addressed, and do not address one characteristic because the documents do not describe the financial resources needed to implement actions. 
Although the other characteristics are partially addressed, important gaps exist that could hinder the ability of key stakeholders to effectively execute their responsibilities: state and local jurisdictions that will play crucial roles in preparing for and responding to a pandemic were not directly involved in developing the Plan; relationships and priorities among actions were not clearly described; performance measures focused on activities that are not always linked to results; insufficient information is provided about how the documents are integrated with other key related plans; and no process is provided for monitoring and reporting on progress.
Background Employment training projects that target economically disadvantaged adults can receive funding from a wide variety of sources. A large number of job training projects are federally funded; states fund some projects, as well. Other job training projects are funded privately. Major sources of federal employment training funds include the Job Training Partnership Act (JTPA), the Job Opportunities and Basic Skills Training (JOBS) program, and the Food Stamp Employment and Training program. Job training assistance may also draw resources from higher education, such as Pell grants or vocational education funding under the Perkins Act. Even when a job training project receives most of its direct funding from one federal or state agency, its clients may receive support services from other sources. For example, a project participant may have training paid for by JTPA but child care services paid for with JOBS funds. Evaluations of employment training efforts have focused either on a single funding stream or, less frequently, on individual training sites. Both types of study are complicated by a large number of intervening factors. Because differences in client populations and local economic conditions partially determine the impact of job training, no uniform standards establish what should be expected from any job training program or project. As a result, research efforts have largely focused on determining whether job training is effective in increasing employment and wages above the level participants could be expected to achieve without training. Some of these studies looked at the effect of large-scale federal initiatives operating across many sites nationwide. A few researchers looked at smaller-scale efforts, either at one particular site or at several sites. In addition, other studies examined the effectiveness of providing or subsidizing certain support services for a specific clientele who may not be in job training. 
Although all these studies provide insight into job training initiatives, little systematic research has been done on the reasons training projects succeed or fail, especially at the individual project level. Speculation about project success, either at one site or across projects, has generally been at a theoretical or conceptual level and has been limited to one or a few factors rather than a comprehensive approach. Nonetheless, a few case studies of selected training projects have pointed to several factors that may influence the quality of training or the success in job placement at specific training centers. For example, in 1991 the Department of Labor studied 15 randomly selected JTPA sites and examined factors that influenced the quality of training. The researchers concluded that quality training would generally include (1) basic skills training, preferably integrated closely with occupational training; (2) individual case management by project employees; (3) training for participants in what is expected in the working world; (4) high-quality classroom instruction; and (5) assurance that the jobs for which the participants are being trained are available in the local labor market. Similarly, a study of successful JTPA sites by SRI International, which also used case studies, concluded that links to the local labor market are important in facilitating job placement. In our report on JTPA training for dislocated workers, we identified links to the local labor market, an individualized approach to services, and personal support and follow-up as common themes across eight exemplary projects. Studies of vocational education programs have found such overlapping themes as school climate, administration, and leadership to be important to success. While we relied partially on these and other studies to guide our initial case study protocol, our study differs from most previous efforts in several respects.
First, we focused specifically on services to economically disadvantaged adults; we excluded services to dislocated workers and youth. Second, while previous studies focused on a single funding stream, we expanded our focus to include any successful project regardless of funding source; of the six projects we selected, one received no JTPA funding, one received nearly all its funding from JTPA, and the others supplemented JTPA funding with funds from other sources. Third, because we assumed that good leadership and management would be essential to any project’s success, we focused on tangible components, or features, of the program or service delivery, rather than on organizational structure or dynamics. Finally, instead of narrowing our approach to a single project phase, such as training or placement, or a single service delivery method, such as the case management method, we employed a comprehensive approach to allow us to identify commonalities across the successful projects we examined. Figure 1 shows the locations of the projects we visited. The six job training projects all focus on enabling their economically disadvantaged participants to obtain employment with benefits that would allow them to become self-sufficient; however, the projects vary considerably in the participants they serve and in the specific services they provide to meet those participants’ needs. The Arapahoe County Employment and Training Division (Arapahoe) administers JTPA in Colorado’s Arapahoe and Douglas Counties; it also administers the JOBS program in Arapahoe County. Located in Aurora, Colorado, a suburb of Denver, Arapahoe’s job training programs and services are intended to increase employment and earnings for economically disadvantaged adults within these counties and reduce welfare dependency. During 1994, Arapahoe served 541 disadvantaged adults, with a job placement rate of about 69 percent for those completing occupational skills training. 
The project uses a case management approach, with assessment and follow-up performed in-house and basic skills and job-specific training provided by area contractors. (See app. II for a detailed description of this project.) Reno’s CET, one of more than 30 centers in the nationwide CET network, is a community-based, nonprofit organization providing job training to disadvantaged adults, primarily Hispanic migrant farmworkers. Participants pay tuition for their training and may receive federal, state, or local financial aid. The Reno CET provides on-site training in three specific training areas: building maintenance, automated office skills, and shipping and receiving. It also provides remedial education and English language instruction. In 1994, the Reno location served 94 participants and achieved a 92-percent job placement rate for project completers. (See app. III for a detailed description of this project.) Encore!, located in Port Charlotte, Florida, prepares single parents; displaced homemakers; and single, pregnant women for high-wage occupations in order to help them become self-sufficient. This project is largely funded by a federal grant under the Carl D. Perkins Vocational and Applied Technology Education Act of 1990 and is strongly linked to the Charlotte Vocational Technical Center (Vo-Tech). Encore!’s primary components are a 6-week prevocational workshop and a year-round support system for participants during their vocational training. The workshop is intended to prepare participants for skills training. About 99 percent of all Encore! participants complete their vocational training at Vo-Tech. In the 1993-94 school year, 194 Encore! participants were enrolled at Vo-Tech. For this same year, the Vo-Tech campuswide placement rate was 95 percent. (See app. IV for a detailed description of this project.) Focus: HOPE, a civil and human rights organization in Detroit, was founded in 1968 to resolve the effects of discrimination. 
Its machinist training program, started in 1981, is intended to break down discrimination in machinist trades and high-tech manufacturing industries and to provide disadvantaged adults with marketable skills. Focus: HOPE has three on-site training levels—FAST TRACK, the Machinist Training Institute (MTI), and the Center for Advanced Technologies (CAT). It serves inner-city adults and relies on federal and state grants as well as on private contributions. For the 1993-94 year, there were 185 participants in MTI, and 75 percent completed the program. Of these, 99 percent were placed. (See app. V for a detailed description of this project.) Support and Training Results in Valuable Employment (STRIVE) is a primarily privately funded employment training and placement project for inner-city adults in New York City who have experienced difficulty securing and maintaining employment. STRIVE’s founders believe gainful employment is the most critical element to individuals and families living in disenfranchised neighborhoods of New York City who hope to achieve self-sufficiency. STRIVE Central—one of 10 community-based organizations in New York’s STRIVE Employment Group—is located in East Harlem and prepares participants for the work place through a strict, demanding 3-week attitudinal training workshop. STRIVE Central provides no occupational training; however, STRIVE provides a long-term commitment of at least 2 years to help graduates maintain and upgrade their employment. During 1994, STRIVE Central trained 415 adults and placed 77 percent of these project graduates. (See app. VI for a detailed description of this project.) The Private Industry Council (TPIC) is a private, nonprofit organization providing employment training services to low-income residents in the city of Portland, Oregon, and the counties of Washington and Multnomah. The federal government provides 85 percent of TPIC’s funding through JTPA. 
TPIC’s mission is to promote individual self-sufficiency and a skilled workforce by eliminating barriers to productive employment, and the project delivers most services for disadvantaged adults from three neighborhood centers. During the 1994 program year, TPIC served a total of 682 disadvantaged adults. Of those completing occupational skills training, about 77 percent were placed. (See app. VII for a detailed description of this project.) Key Features of Job Training Strategy Shared by Successful Projects Although the common strategy may be implemented differently, each project incorporates four key features into its strategy: (1) ensuring that participants are committed to training and getting a job; (2) removing barriers, such as lack of child care, that might limit participants’ ability to get and keep a job; (3) improving participants’ employability skills, such as getting to a job regularly and on time, working well with others while there, and dressing and behaving appropriately; and (4) linking occupational skills training with the local labor market. Projects Ensure Client Commitment to Training and Getting a Job Each of the projects tries to secure participant commitment before enrollment and continues to encourage that commitment throughout training. Staff at several projects believe the voluntary nature of their projects is an important factor in fostering strong client commitment. Just walking through the door, however, does not mean that a participant is committed to the program. Further measures to encourage, develop, and require this commitment are essential. All of the projects we visited use some of these measures, such as (1) making sure participants know what to expect, so they are making an informed choice when they enter; (2) creating opportunities for participants to screen themselves out if they are not fully committed; and (3) requiring participants to actively demonstrate the seriousness of their commitment. 
The initial step the projects take to ensure client commitment is to reveal the project’s expectations to potential participants before enrollment so that they can make an informed choice about entering the program. Through orientation sessions, assessment workshops, and one-on-one interviews with project staff, participants receive detailed information about project expectations. Project officials say they do this to minimize any misunderstandings that could lead to participant attrition. Officials at both STRIVE and Arapahoe told us they do not want to spend scarce dollars on individuals who are not committed to completing their programs and moving toward full-time employment; they believe it is important to target their efforts to those most willing to take full advantage of the project’s help. For example, at STRIVE’s preprogram orientation session, staff members give potential participants a realistic preview of the project. STRIVE staff explain their strict requirements for staying in the project—attending every day, on time; displaying an attitude open to change and able to take criticism; and completing all homework assignments. At the end of the session, STRIVE staff tell potential participants to take the weekend to think about whether they are serious about obtaining employment, and if so, to return on Monday to begin training. STRIVE staff told us that typically 10 percent of those who attend the orientation do not return on Monday. Several of the other projects we visited also create opportunities for participants to screen themselves out of the project if they are not fully committed to it. Both CET and Focus: HOPE allow potential participants to try out their training at no charge to ensure the project is suitable for them. Focus: HOPE reserves the right to reject potential participants on the basis of their attitude, but it does not routinely do this. 
Instead, staff will provisionally accept the participant into one of the training programs but put that participant on notice that his or her attitude will be monitored. All six projects require participants to actively demonstrate the seriousness of their commitment to both training and employment. For example, all projects require participants to sign an agreement of commitment outlining the participants’ responsibilities while in training, and all projects monitor attendance throughout participants’ enrollment. In addition, some project officials believe that requiring participants to contribute to training is important to encouraging commitment. For example, STRIVE project staff told us that their policy of providing participants with one daily subway token is designed to emphasize the partnership between STRIVE and the client by demonstrating STRIVE’s support to get the client to training, but also requiring a contribution from him or her for the trip home. Similarly, Focus: HOPE requires participants—even those receiving cash subsidies—to pay a small weekly fee for their training, typically $10 a week. A Focus: HOPE administrator explained that project officials believe students are more committed when they are “paying customers,” and this small payment discourages potential participants who are not seriously committed to training. Projects Tailor Their Approach to Remove Barriers to Training and Employment A number of employment training studies emphasize removing employment barriers as a key to successful outcomes. As indicated by their client assessments, the projects we visited define a barrier as anything that precludes an individual from participating in and completing training, as well as anything that could potentially inhibit his or her ability to obtain and maintain a job. For example, if a client lacks appropriate basic skills, then providing basic skills training can allow him or her to build those skills and enter occupational training. 
Similarly, if a client does not have adequate transportation, he or she will not be able to get to the training. Because all of the projects we visited have attendance requirements, a lack of adequate child care would likely affect the ability of a client who is a parent to successfully complete training. Moreover, a client who is living in a domestic abuse situation may find it difficult to focus on learning a new skill or searching for a job. All six projects we visited use a comprehensive assessment process to identify the particular barriers each client faces. This assessment can take many forms, including orientation sessions, workshops, one-on-one interviews, interactions with project staff, or a combination of these. For example, at TPIC’s assessment workshop, participants complete a five-page barrier/needs checklist on a wide variety of issues, including food, housing, clothing, transportation, financial matters, health, and social/support issues. At the end of this workshop, participants must develop a personal statement and a self-sufficiency plan that they and the case manager use as a road map to address barriers throughout training. Encore! and Arapahoe have similar processes for identifying and addressing barriers participants face. Rather than relying on a formal workshop or orientation process, CET identifies participants’ needs through one-on-one interviews with project staff when a client enters the project. Throughout the training period, instructors, the job developer, and other project staff work to provide support services and address clients’ ongoing needs. All of the projects arrange for clients to get the services they need to address barriers, but—because of the wide range of individual participant needs—none of them provides all possible services on-site. For example, although all six projects recognize the importance of basic skills training, they arrange for this training in different ways.
Arapahoe contracts out for basic skills training; CET, Encore!, and Focus: HOPE provide this service on-site; and TPIC and STRIVE refer clients to community resources. Only Focus: HOPE provides on-site child care; however, the other five projects help clients obtain financial assistance to pay for child care or refer them to other resources. Because some of the projects we visited attract many clients who have similar needs, these projects provide certain services on-site to better tailor their services to that specific population. For example, because it serves Hispanic migrant farmworkers with limited English proficiency, CET provides an on-site English-as-a-second-language program. Likewise, because a major barrier for many of Encore!’s clients is low self-esteem resulting from mental abuse, physical abuse, or both, Encore! designed its 6-week workshop to build self-esteem and address the barriers these women face so that they are then ready to enter occupational training. In addition to services provided during training, most of the projects followed up with clients after they completed training to ensure that barriers did not reappear or that new ones did not arise that would affect clients’ ability to maintain employment. STRIVE and CET follow up on a regular basis after job placement to monitor participants’ progress and determine whether additional assistance is needed to ensure job retention. For example, STRIVE has a commitment to contact its participants on a quarterly basis for 2 years following program completion. During these contacts, STRIVE personnel assess progress and suggest ways that participants can continue to progress in their job. For 6 months, CET’s job developer makes monthly calls to employers who have hired CET graduates to troubleshoot any problems that may have arisen and to monitor progress. The job developer also follows up with graduates for 2 years after program completion. 
Projects Improve Employability Skills Essential for Employment Research confirms the necessity for employability skills, especially for individuals without work experience. For example, the Secretary of Labor’s Commission on Achieving Necessary Skills’ 1991 report, What Work Requires of Schools, which included discussions and meetings with employers, unions, employees, and supervisors, verified that skills such as taking responsibility, self-management, and working well with others are required to enter employment. Because so many of these projects’ participants have not had successful work experiences, they often do not have the basic knowledge others might take for granted about how to function in the workplace. They need to learn what behaviors are important and how to demonstrate them successfully. These behaviors include getting to work regularly and on time; dressing appropriately; working well with others; accepting constructive feedback; resolving conflicts appropriately; and, in general, being a reliable, responsible employee. Each project we visited coaches participants in employability skills through on-site workshops or one-on-one sessions. For example, CET provides a human development program that addresses such issues as life skills, communication strategies, and developing good work habits. Similarly, Arapahoe helps each client develop employment readiness competencies, such as interpersonal relations, a work ethic, demonstrating a positive attitude and behavior, and appropriate dress, either through a workshop or one-on-one with client case managers. TPIC starts working on employability skills right away when clients attend the required assessment workshop. This workshop covers employer expectations, self-defeating behaviors, giving and receiving feedback on one’s work, communication and listening skills, decision-making, work attitudes, time management, handling conflict on the job, and dealing with difficult people. 
Some of the projects we visited also develop employability skills within the context of the occupational skills training, with specific rules about punctuality, attendance, and, in some cases, appropriate clothing consistent with the occupation for which clients are training. STRIVE concentrates almost exclusively on employability skills and, in particular, attitudinal training. This project has a very low tolerance for behaviors such as being even a few minutes late for class, not completing homework assignments, not dressing appropriately for the business world, and not exhibiting an appropriate attitude. We observed staff dismissing clients from the program for a violation of any of these elements, telling them they may enroll in another offering of the program when they are ready to change their behavior. Project staff work hard to rid clients of their “victim mentality”—that is, believing that things are beyond their control—and instill in them a responsibility for themselves, as well as make them understand the consequences of their actions in the work place. For example, we observed one client who exhibited inappropriate behavior in class by consistently rolling her eyes and tuning out the instructor. The instructor called her attention to this behavior, but the client denied it. When this client argued with the instructor about her behavior, he removed her from class to counsel her, but she persisted in arguing with him. Within minutes, she was dismissed from the project. Another example of getting clients to think about consequences at STRIVE is through dress-down day. STRIVE has a dress-down day to simulate such situations in the work place and to get a sense of what its clients consider appropriate dressing down. On one such occasion, a client came to class wearing a T-shirt with a marijuana leaf pattern on the front of it. 
The project instructor called the class’ attention to this client’s manner of dress to explain the importance of the image one creates with dress and the message sent to an employer with an inappropriate outfit. During the lunch break, the client bought a more appropriate T-shirt. Projects Link Occupational Skills Training to the Local Labor Market Five of the six projects we visited provide occupational training, using information from the local labor market to guide their selection of training options for participants. These projects focus on occupations that the local labor market will support. Project staff strive to ensure that the training they provide will lead to self-sufficiency—jobs with good earnings potential as well as benefits. In addition, all but one of the six projects use their links to local employers to assist clients with job placement. While their approaches to occupational training and job placement differ, the common thread among the projects is their ability to interpret the needs of local employers and provide them with workers who fit their requirements. All five of the projects that provide occupational training are selective in the training options they offer clients, focusing on occupational areas that are in demand locally. For example, CET and Focus: HOPE have chosen to limit their training to one or a few very specific occupational areas project staff know the local labor market can support. Focus: HOPE takes advantage of the strong automotive manufacturing base in the Detroit area by offering training in a single occupation serving the automotive industry—machining. With this single occupational choice, Focus: HOPE concentrates primarily on meeting the needs of the automotive industry and the local firms that supply automotive parts. Participants are instructed by skilled craftspeople—many senior instructors at Focus: HOPE are retirees who are passing on the knowledge they acquired during their careers. 
The machines used in training are carefully chosen to represent those that are available in local machine shops—both state-of-the-art and older, less technically sophisticated equipment. Job developers sometimes visit potential work sites, paying close attention to the equipment in use. This information is then used to ensure a good match between program participant and employer. CET offers three occupational training areas—automated office skills, building maintenance, and shipping and receiving—on the basis of the needs of the local labor market. CET previously offered training in electronics but eliminated this training because the local electronics industry did not absorb the continual supply of CET graduates. Because Reno has a considerable number of apartment buildings and hotels, CET replaced the electronics program with a building maintenance program. CET uses local industry connections to keep its curricula current and to help ensure that its clients meet employers’ needs. For example, one CET instructor told us he takes his classes on field trips to area businesses to help keep his knowledge current and to give program participants a firsthand look at the business world. While offering a wide range of training options, Vo-Tech, which trains Encore! participants, is linked to the local labor market in part by its craft advisory committees. These committees involve 160 businesses in determining course offerings and curricula. Vo-Tech recently discontinued its bank teller program shortly after a series of local bank mergers decreased demand for this skill. It began offering an electronics program when that industry started to expand in the Port Charlotte area. Vo-Tech also annually surveys local employers on its graduates’ skills and abilities, using the feedback to make changes to its programs. 
When feedback from local employers in one occupation indicated that Vo-Tech graduates were unable to pass state licensing exams, the school terminated the instructors and hired new ones. All of the projects we visited assist clients in their job search. Five of the six projects had job developers or placement personnel who work to understand the needs of local employers and provide them with workers who fit their requirements. For example, at Focus: HOPE the job developers may visit local employers to discuss their skills needs, since virtually all graduates of Focus: HOPE are hired into machinist jobs locally. The placement staff working with Encore! graduates noted that there are more positions to fill than there are Vo-Tech graduates to fill them. They believe that, because of their close ties with the community and the relevance of their training program, they have established a reputation for producing well-trained graduates. This reputation leads employers to trust their referrals. Agency Comments The Department of Labor commented that our report substantiates findings from its studies of exemplary practices in job training programs serving disadvantaged adults and dislocated workers. Labor also said that this information would be useful to practitioners in the employment training community as the community continues to improve its programs. Labor had three suggestions for improving the usefulness of the report to the employment training community. The first suggestion was to identify a contact person at each of our case study projects. We have included this information in the appendixes. Second, Labor suggested we list all of the projects that were nominated but not included in our case studies. We agree this would be potentially helpful to other projects and plan to provide such a list to Labor for it to disseminate as appropriate. 
Last, Labor noted that the leveraging of community resources, along with the use of community supportive services to enhance the overall program investment, is also an important feature of projects in general and should be highlighted as such. While we agree that some of the projects we visited used community resources extensively and that this practice enhanced their ability to serve disadvantaged adults in their programs, not all the projects used this approach. For this reason, we did not include it as a part of the common strategy. Labor’s comments are printed in appendix VIII. We are sending copies of this report to the Secretary of Labor; the Director, Office of Management and Budget; relevant congressional committees; and other interested parties. If you or your staff have any questions concerning this report, please call me at (202) 512-7014 or Sigurd R. Nilsen at (202) 512-7003. GAO contacts and staff acknowledgments are listed in appendix IX. Scope and Methodology We designed our study to identify factors and strategies associated with successful employment training projects for disadvantaged adults. To do so, we reviewed the current literature and visited training projects nominated as exemplary, conducted extensive interviews, and reviewed training processes. We applied a standardized process to identify common strategies across projects. We did our work between March 1995 and March 1996 in accordance with generally accepted government auditing standards. Project Selection Strategy To identify projects to review, we studied the literature and recent employment training award nominations for projects deemed successful. We also requested nominations of exemplary employment training projects from each of the 50 states’ and the District of Columbia’s workforce development councils. 
In seeking nominations, we defined exemplary projects as those with outstanding results measured by performance indicators such as participant completion rates, job placement and retention rates, and placement wages. Because no nationwide standard exists with which to judge a project’s success, we did not establish a baseline standard for placement rate, completion rate, or other measure to qualify as an acceptable nomination. Instead, we asked the nominator to provide a rationale for the specific nomination—in other words, why the project was considered successful. The nomination process identified about 120 successful projects, including 82 submissions from 32 states and the District of Columbia, and about 38 projects identified in the literature or as recipients of national training awards. Finalists were chosen for further consideration on the basis of how closely they satisfied key selection criteria. These criteria included focusing on serving disadvantaged adults, having project service and outcome data available, and having strong justification supporting the nomination. We contacted project finalists to collect additional information on client demographics, funding sources and amounts, services provided, and outcomes obtained. We selected the projects judgmentally to provide a mixture of (1) geographic locations, (2) urban and rural locations, (3) project sizes, (4) targeted populations, and (5) funding sources. Data Collection and Analysis We did our fieldwork using a systematic standardized case study methodology. To collect the data, teams of at least three people spent 2 to 5 days at each nominated site. During these project visits, we interviewed participants, project officials, training providers, and local employers. Additionally, we toured facilities, observed project operations, and reviewed a sample of participant records. To guide our interviews and observations, we employed a detailed topic outline. 
This outline was derived from concepts contained in the literature and included ways these concepts might be operationalized in the field. To ascertain relevant concepts to be investigated in the field, we reviewed numerous publications examining successful job training practices. We focused our review on the employment training literature that explored the reasons particular projects or organizations were viewed as successful, rather than concentrating on empirical research that measured changes in earnings or employment. Using the theories and observations that emerged from this literature, we developed a list of concepts relating to project operations and structure that included easy access to services, tailoring of services to client needs, and strong linkages to the labor market. Applying these concepts to practices, we developed a list of the ways in which they might be operationalized in the field. When we were examining, for example, the concept of easy access to services, we reviewed the projects’ outreach and recruiting strategies, and we looked for clear points of entry into the project, pathways between programs within the projects, and a streamlined intake process. For tailoring of project services, we focused on the types of services the project provided, how the services were delivered, and how the various services were integrated into the rest of the project. As part of the structured methodology, we conducted extensive team debriefings daily during data collection to record and discuss the observations of the day and to perform quality control of our data collection effort. At regular intervals during the data collection phase, the entire work group met to perform a cross-case analysis of the obtained data. During this analysis, concepts were assigned alphanumeric values on the basis of a team rating of that element’s presence or absence at a given project. We also used this method to evaluate the criticality of that element to site operation. 
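The cross-case rating approach described above can be illustrated with a minimal sketch. All concept names, site labels, and ratings below are hypothetical placeholders, not GAO data; the point is only the mechanics of flagging each element's presence and criticality per site and keeping the elements rated present and critical everywhere.

```python
# Hypothetical sketch of the cross-case rating method described in the text.
# "P" = present, "A" = absent; the boolean marks whether the team judged the
# element critical to that site's operation. Values are illustrative only.
ratings = {
    "easy access to services": {"SiteA": ("P", True), "SiteB": ("P", True)},
    "tailored services":       {"SiteA": ("P", True), "SiteB": ("P", True)},
    "on-site child care":      {"SiteA": ("P", False), "SiteB": ("A", False)},
}

def common_critical_elements(ratings):
    """Return the concepts rated present and critical at every site."""
    return [
        concept
        for concept, sites in ratings.items()
        if all(present == "P" and critical for present, critical in sites.values())
    ]

print(common_critical_elements(ratings))
# ['easy access to services', 'tailored services']
```

In this sketch, only elements that every site rates both present and critical survive, mirroring how the report's findings represent elements considered essential at all six sites.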
Through this cross-case analysis, concepts occasionally emerged that warranted further field testing. Items in our interview guide were augmented with the newly surfaced concepts and the presence of these constructs was tested at the remaining projects. For example, the issue of client readiness/commitment was one of those new concepts that emerged early in our data collection. At subsequent projects, when we focused on participant commitment, we examined the structure of their orientation and other intake and assessment processes as well as the nature of the periodic interactions between participant and project staff. At the end of data collection and scoring, we reviewed the ratings across the six projects and agreed on the key features essential for project success. Findings presented in this report represent those elements considered essential for the projects’ success at all six project sites. Some limitations exist in this type of case study methodology. Case studies can provide insights into how a practice works in a specific context, but findings from a case study cannot necessarily be extended to training programs generally. Furthermore, because participation in each of the projects we visited was voluntary, we did not observe the strategies employed under a system in which participation would be mandatory. The numerical data we present—for example, job placement rates—were collected directly from the projects, and we made no attempt to verify their accuracy except where data were available from existing federal databases. In addition, we did not gather evidence to confirm or refute the validity of the nomination. Arapahoe County Employment and Training Division, Aurora, Colorado The Arapahoe County Employment and Training Division (Arapahoe) administers the Job Training Partnership Act (JTPA) in Arapahoe and Douglas Counties in Colorado. 
Arapahoe has been involved with employment training for about 20 years since the Comprehensive Employment and Training Act transferred federal funds and decision-making authority to the local level. Job training programs and services sponsored by the Arapahoe/Douglas Private Industry Council, which includes Arapahoe, are intended to increase employment, increase earnings, and reduce welfare dependency within these counties. Arapahoe uses various resources to develop its participants’ potential to achieve self-sufficiency. These include (1) employment and training resources, such as the Aurora Job Service and the Colorado Vocational Rehabilitation Services; (2) educational resources, such as Arapahoe Community College and Aurora Public Schools; and (3) community resources, such as the Aurora Mental Health Center, Aurora Food Stamp Office, and Aurora Housing Authority. Under a contract with the Arapahoe County Department of Social Services, Arapahoe administers the Job Opportunities and Basic Skills Training (JOBS) program for that county and the Food Stamp Employment and Training Program. Further, Arapahoe leverages federal funds, using grants from local contributors to enhance its resources. For example, state and local governments must match federal JOBS funds—the federal government provides 50 percent of the funding, and state and local governments provide 30 and 20 percent, respectively. Participant Characteristics During the 1994 program year, Arapahoe served 541 adults. About 80 percent of these participants were dually enrolled in JTPA and JOBS. A project official explained that about 90 percent of the JOBS clients are eligible for JTPA and are, consequently, enrolled in both programs. JTPA participants must meet income eligibility guidelines established by federal regulations as well as residency and age requirements. 
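The 50/30/20 federal/state/local matching arrangement described above is simple percentage arithmetic; the sketch below illustrates it in Python using a hypothetical budget figure that does not come from the report.

```python
# Illustrative only: splits a hypothetical JOBS program budget by the
# 50/30/20 federal/state/local shares described in the report.
def jobs_funding_shares(total_budget):
    """Return federal, state, and local contributions in whole dollars."""
    percent = {"federal": 50, "state": 30, "local": 20}
    return {source: total_budget * p // 100 for source, p in percent.items()}

# Example: a hypothetical $1,000,000 program budget.
contributions = jobs_funding_shares(1_000_000)
print(contributions)
# {'federal': 500000, 'state': 300000, 'local': 200000}
```

Integer arithmetic is used so the three shares sum exactly to the original budget for budgets divisible by 100.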
The criteria for JOBS referrals give priority to people who have been on Aid to Families With Dependent Children for 3 of the last 5 years; those under 24 years old without a high school or general equivalency diploma or a work history; and people whose youngest child is at least 16 years old. About half of the 541 clients were new and the other half were carried over from the previous year. Approximately 78 percent of Arapahoe participants in 1994 were receiving public assistance, and the majority were women (85 percent). Fifty-two percent of participants were white, 32 percent were African American, 11 percent were Hispanic, 2 percent were Native American, and 3 percent were Asian American. A project official estimated that more than half of Arapahoe’s clients need basic skills remediation in order to benefit from occupational skills training. Project Structure Arapahoe primarily functions as a training broker using a case management model. Assessment (18 hours) is done on-site and workshops (35 hours) include a job search skills workshop and a motivational workshop. All prospective participants attend an orientation session to learn about services available; Arapahoe staff emphasize that participation in planned activities is required once a person chooses to enter the project and is accepted. At an intermission, attendees are free to leave if they feel the program is not right for them or if they are unwilling to make a commitment to training and employment. Case managers work with each participant to determine which training is best and to identify and remove barriers to self-sufficiency. Support services are tailored to individual needs and may include allowances for transportation, child care, and clothing. Case managers may also refer clients to other community organizations for support services. 
As a result of preliminary assessments, such as a training readiness survey and interviews, Arapahoe assigns a case manager to each participant and enrolls participants in a 3-day assessment workshop. This workshop includes such testing as the Career Assessment Inventory and the Holland Self-Directed Search. After the client completes the assessment workshop and an Individual Service Strategy/Employment Plan, case managers refer participants for basic skills remediation or begin working with them on a training plan. Arapahoe contracts with area schools to provide basic and occupational skills training. For example, Arapahoe’s two contractors for basic skills training operate on a cost-reimbursable basis and also report on student attendance and course progress. Clients study basic skills at their own pace but are required to attend class for 20 hours each week. Arapahoe also provides clients with vouchers for occupational skills training in areas where there is the strongest likelihood of employment and with contractors who have demonstrated performance in training and job placement. The vouchers pay for training expenses—beyond basic skills training—not to exceed $2,500 over a 24-month period. Case managers are required to keep in contact with clients at a minimum of twice monthly so that assessment is ongoing and clients have access to referrals for counseling and support services, including tutoring. Career counseling is a vital part of Arapahoe’s training model because clients enter the program from diverse backgrounds and receive training in differing fields of their choice at different area training facilities. If participants are unsure about a career, case managers provide them with some job shadowing experiences. Case managers encourage clients to obtain some form of credential, such as an associate’s degree or a technical certificate. 
Arapahoe staff also maintain links with local employers to ensure the type of training provided will help clients achieve self-sufficiency. Project Outcomes Arapahoe measures its performance by enrollment statistics, job placement rates, follow-up employment rates, and follow-up earnings. For program year 1994, Arapahoe’s placement benchmark was about 48 percent, and 57 percent of all adults who left the program (either JTPA-eligible or dually enrolled in JTPA and JOBS) found employment. About 69 percent of all participants who completed occupational training were placed. These job placement rates are calculated on the basis of the number of clients who obtain unsubsidized employment of 20 or more hours a week when they leave the program. For all adults who left the program in 1994, the average placement hourly wage was $7.09. For more information on the Arapahoe County Employment and Training Division, contact Elroy Kelzenberg, Deputy Director, 11059 East Bethany Drive, Suite 201, Aurora, Colorado 80014, or call (303) 752-5820. Center for Employment Training, Reno, Nevada The Reno, Nevada, Center for Employment Training (CET), established in 1987, is a community-based, nonprofit organization providing job training to disadvantaged adults. The Reno CET is one of over 30 centers nationwide, with the corporate headquarters in San Jose, California. Its mission is based on the philosophy of self-determination, and it seeks to promote the development and education of low-income people by providing them with marketable skills training and supportive services that contribute to economic self-sufficiency. The corporate office provides accounting and administrative support and sets broad policy for the corporation as a whole. 
Because the training offered in a particular skill expands and contracts with the job market for that skill, CET maintains the flexibility to readily increase training slots for skills in high demand or to phase out or decrease training activity for skills whose demand is less than expected. Each center is locally managed and chooses the skills training that it will offer. The Reno CET focuses on three specific training areas that are in demand in the local labor market: automated office skills, building maintenance (carpentry, electrical, and plumbing), and shipping and receiving. Local CETs are funded through tuition charges to participants. During the admissions process, CET staff evaluate applicants to determine whether they are eligible for subsidized training under one of CET’s federal, state, or local funding sources. Participants may receive financial assistance from sources such as Pell grants, JTPA state funds, the JTPA Farm Worker Program (Title IV), and grants from the city of Reno. Participant Characteristics During program year 1994, the Reno CET trained 94 participants. A project official said that most of CET’s participants are minority, functionally illiterate, welfare recipients. The majority of CET clients in Reno are Hispanic (80 percent), have reading and math skills below the eighth grade level (80 percent), and have limited English proficiency (82 percent). Participants range in age from 21 to 55 years. Roughly half are male. The majority (60 percent) of participants have, at some time, been migrant farmworkers. Project Structure In addition to providing on-site skills training, the Reno CET also provides remedial education, English language instruction, and citizenship classes. Its curriculum includes job search techniques and employability and life skills. 
All participants are ensured help in finding employment, but they must commit to coming to training each day, on time, and demonstrate that they can relate well to their instructors and fellow students. A staff training team meets regularly to discuss participants’ progress in developing job skills. CET staff administer the Employability Competency System test to all prospective participants to assess reading and math skills. Tests are intended to identify participants’ strengths and weaknesses rather than to disqualify participants. CET staff also review applications to assess an applicant’s reading comprehension and spelling. They work with participants to develop an individualized instruction and service plan that clarifies participants’ vocational goals and remediation needs as well as required supportive services. In addition, staff help participants gain access to local community-based organizations for social services that help overcome potential barriers to training and employment. CET teaches basic and vocational skills simultaneously. For example, participants in the building maintenance program learn math in the context of rulers and measurement. Training, which simulates the work environment with industry standards, is organized into different levels of competency. Participants must pass a test for each level before progressing to the next. Because the competency levels are generally independent and self-paced, participants may begin training at almost any time. Depending on an individual’s skill choice, needs, and abilities, training can generally be completed in about 6 months. Good work habits—such as punctuality, attendance, reliability, and job responsibility—are emphasized throughout training. Participants are not referred to a job unless they have the proper habits and attitudes to ensure success in their work setting. 
CET’s job developer gives participants employment assistance and advises them on curriculum choices, drawing on knowledge of what prospective employers expect from CET graduates. The job developer also teaches job search techniques and instructs participants on how to set goals, complete job applications, develop resumes, list references, and interview for employment. In addition, the job developer periodically follows up on participants for a period of 1 month to 2 years after program completion. CET offers lifetime placement assistance unless the individual has consistently quit jobs or had an unacceptable attendance record. Project Outcomes The ultimate CET goal for each participant is permanent, unsubsidized job placement with good benefits. The Reno CET goal is to place 90 percent of graduates in full-time, career-level employment. For program year 1994, the placement rate was 92 percent for those who finished training. Graduates who obtain any full-time job are considered successful placements even when the job does not require the skill in which the graduates were trained. For more information on the Center for Employment Training in Reno, contact Marcel Schaerer, Division Director, 520 Evans Avenue, Reno, Nevada 89512, or call (702) 348-8668. Encore!, Port Charlotte, Florida Encore! prepares single parents, displaced homemakers, and single pregnant women for high-wage occupations in order to help them become self-sufficient. This project, started in 1986, serves many people who would otherwise be dependent on welfare or employed in low-wage jobs. The Charlotte Vocational Technical Center (Vo-Tech) administers Encore! Vo-Tech’s mission is to offer quality vocational education to Charlotte County residents and to help students obtain gainful employment. Together, Encore! and Vo-Tech seek to motivate participants to reach their highest potential by removing barriers and preparing participants for the competitive world of work. A federal grant under the Carl D. 
Perkins Vocational and Applied Technology Education Act of 1990 provides Encore! funds for child care, transportation, tuition, books, and uniforms for qualified students training for high-wage, nontraditional occupations, such as women studying auto technology. Community organizations provide scholarships to support students for training not covered by this federal grant, and participants may apply for other financial assistance, such as Pell grants. Vo-Tech provides Encore! facilities (a portable building in which the program is housed), utilities, and supplies. While the Perkins grant covers salary and staff development costs for the project coordinator, Vo-Tech provides the project a part-time work-study student aide as well as the expertise of Vo-Tech faculty and staff. The local community also supports Encore! The Charlotte County Medical Society Alliance has “adopted” Encore! and raises money for the project through such functions as dinners and golf tournaments. Community members also donate clothing suitable for school, job interviews, or the work place, which is distributed to participants at no charge through Carol’s Closet, located within the Encore! project. The Charlotte County Habitat for Humanity program pays particular attention to the housing needs of Encore! participants. Additionally, the Charlotte County Board of Women Realtors’ nonprofit DREAM HOUSE program is designed to help Encore! participants achieve home ownership by helping them renovate and purchase older homes. Participant Characteristics Encore! participants are generally economically disadvantaged, lack marketable job skills, have low self-esteem, and have few employability skills. Because Encore! serves single parents, displaced homemakers, and single pregnant women, most participants are female. In the 1993-94 school year, 93 percent of the 194 Encore! participants enrolled at Vo-Tech were female. The majority of participants (84 percent) were white. 
Most participants—93 percent—had either a high school or general equivalency diploma. Eighty-six percent had children under the age of 18. Project Structure Encore!’s primary components are a 6-week prevocational workshop (48 hours) and a year-round support system for participants during their vocational training. The workshop, which is held twice a year, includes assessment, career exploration, self-esteem building, goal setting, and budgeting; it is intended to prepare participants for skills training so that they can make the commitment needed to succeed in training and employment. The Encore! project coordinator works with participants to identify and address any barriers that may impede their skills training and job placement. Encore! participants receive vocational assessment and counseling from both the project coordinator and Vo-Tech staff. On the basis of this assessment, participants develop an Individualized Career Plan and may work to improve their basic skills through Vo-Tech’s self-paced remedial program or begin one of the certificate programs Vo-Tech offers. Most Encore! participants enter skills training at Vo-Tech and maintain regular contact with the project coordinator. Vo-Tech offers a wide range of programs, including business (general office, clerical, secretarial, accounting, and data); construction (air/heat/refrigeration, drafting, electrical, and carpentry); health (dental assisting, patient care assisting, and practical nursing); and service (auto technology, child care, cosmetology, culinary arts, electronics, nail technology, and ornamental horticulture). Each program has a craft advisory board linking the needs of the local labor market to the program curriculum. For participants enrolled at Vo-Tech, the Encore! project coordinator monitors progress through a system of employability skills points. Participants lose points for absenteeism, tardiness, and other negative behavior. 
When a participant’s points near a designated threshold level, the project coordinator provides supportive counseling to the participant. Vo-Tech also requires each student to attend employability skills workshops that address job search skills, resume writing, interview strategies, and getting along on the job. Other workshops, which students may attend voluntarily, address time management, stress management, maintaining a professional image, group dynamics, and the changing world of work. The major priority of Encore! and Vo-Tech is to help all participants obtain gainful employment. Vo-Tech emphasizes employability skills, such as job-seeking and job-keeping strategies, to foster this goal. Encore! participants also participate in videotaped mock interviews and obtain help in preparing a professional resume. Encore! encourages participants to register with Job Service of Florida, which has stationed a job specialist at Vo-Tech. Vo-Tech’s instructional program, which is competency-based, has a strong reputation with area employers; consequently, this reputation also helps Encore! participants obtain employment. Vo-Tech conducts job placement follow-up with graduates and nongraduates in accordance with strict guidelines from the Florida Department of Education. The survey is conducted through a statewide computer search, mail, and telephone inquiry. Data are assembled by program area, bound together, and made available to faculty for analysis. Through Vo-Tech, the Encore! project coordinator also contacts participants at 1- and 2-year intervals. The project coordinator said that while most participants are generally still employed when contacted, they may have moved on to another job. Project Outcomes About 99 percent of all Encore! participants complete their vocational training at Vo-Tech. While Encore! does not track the job placement performance of its participants separately, for the 1993-94 school year, the Vo-Tech campuswide placement rate was 95 percent. 
Vo-Tech defines successful placements as obtaining a job, entering military service, or continuing schooling. For more information on Encore!, contact Carol Watters, Program Coordinator, 18300 Toledo Blade Boulevard, Port Charlotte, Florida 33948, or call (941) 629-6819. Focus: HOPE, Detroit, Michigan Focus: HOPE, founded in 1968, is a metropolitan Detroit civil and human rights organization established to resolve the effects of discrimination and build an integrated society. It serves the community through several programs, including its machinist training programs, an on-site Center for Children, Food for Seniors, and a Food Prescription Program (a commodity supplemental food program operating through the U.S. Department of Agriculture). Focus: HOPE also provides employment opportunities at its incorporated, for-profit companies, which have been developed as a part of the Focus: HOPE network. The Focus: HOPE complex is spread across 30 acres and 12 separate buildings. In addition to a paid staff of about 750, the network has a roster of about 46,000 volunteers; about a fourth of these volunteers provide services during any given week. The organization relies on individual donations and contributions from corporations, foundations, and trust funds. It also receives grants from the Departments of Labor, Defense, and Commerce, as well as surplus machinery used in training from the federal and state governments. In 1994, the primary funding source for the Machinist Training Institute (MTI) was state economic development/job training funds. Participants may receive needs-based grants to cover tuition from a variety of sources, including Pell grants and JTPA, the city of Detroit, and machinist trade associations. Since opening in 1981, Focus: HOPE’s MTI has prepared participants for careers in manufacturing. 
Its training effort is intended to break down discrimination in machinist trades and high-tech manufacturing industries, and to provide disadvantaged individuals with marketable skills. MTI, which qualifies as an institution of higher education, simulates the work place; its curriculum integrates academics and hands-on experience. In addition to MTI, Focus: HOPE has two other levels of training: FAST TRACK and the Center for Advanced Technologies (CAT). FAST TRACK prepares participants for MTI, and MTI graduates may move on to the CAT program. CAT, a fairly new program, will have its first graduates in May 1996. These three levels of programming could, in theory, support a participant from an eighth-grade skills level to a master’s degree in manufacturing engineering. Participant Characteristics Focus: HOPE’s training programs serve inner-city adults who want to participate and have the basic skills required to succeed in machinist training. During the 1993-94 program year, approximately 63 percent of the participants in FAST TRACK were male and 92 percent were African American; their ages ranged from 17 to 23. Participants in MTI were also primarily African American males, but were generally older (26 or 27 years old). Project officials noted that many MTI participants have a history of low-skill, low-wage jobs, often in the fast food industry; others are young adults just entering the labor market with no work history. Because CAT participants have attended MTI, their characteristics are similar to those of MTI participants. Project Structure Focus: HOPE’s training programs emphasize development of manufacturing-related skills. Depending on skill level, an applicant may be placed in one of Focus: HOPE’s three progressive training levels: FAST TRACK, MTI, or CAT. These different levels allow participants to experience machining, become familiar with the expectations of the program, and decide whether they are willing to make a commitment to training. 
The different levels of training also permit Focus: HOPE staff to assess participants’ potential for success in more advanced on-site training. At the completion of each level, Focus: HOPE’s placement personnel actively help participants through the job search process. For example, MTI job development staff visit machine shops to discover job openings, discuss employer skills needs, and obtain feedback on graduate performance. Prospective FAST TRACK and MTI participants are assessed using the Test of Adult Basic Education and the Bennett Mechanical Comprehension Test. Applicants must also pass a physical examination, including a drug screen. The admission process also includes interviews with financial aid personnel and appropriate program managers. These interviews serve to assess applicants’ motivation and likelihood of sustaining a full-time learning experience. Barriers to successful training are also addressed. If the applicant is accepted for training, supportive services, including academic, personal, and financial aid counseling, are available. Additionally, staff refer participants to other Focus: HOPE services or other community resources as needed. FAST TRACK, which begins a new class every 2 weeks, was initiated in 1989 because Focus: HOPE had difficulty recruiting participants with adequate basic skills for machinist training. FAST TRACK provides instruction in math, reading, and computer literacy and addresses the general readiness of high school graduates for meaningful employment and postsecondary education. FAST TRACK participants must have basic skills at the eighth-grade level; over an intensive 7-week course, they may improve basic skills to a 9th- or 10th-grade level. FAST TRACK was designed not only to boost participants’ academic skills but also to improve employability skills. Participants are rated in four categories—attendance, cooperation, interpersonal skills, and work performance. 
While FAST TRACK graduates are assured entry into the first level of MTI, project officials told us that graduates are often able to obtain employment simply because of improved basic and employability skills. On average, two-thirds of those who enter FAST TRACK complete the curriculum. MTI participants must have at least a 9th-grade reading level and a 10th-grade math level. Participants spend about half their time in the classroom and the other half on the shop floor. MTI is divided into three tiers. First, a 5-week (176 hours) “vestibule” program provides instruction in communication and technical skills. An additional 26-week basic machining program allows participants to work from blueprints to produce a finished product. Finally, a 26-week advanced machining program provides selected participants with further instruction. These participants also learn by working for pay on actual production contracts. Focus: HOPE’s latest training effort, CAT, aims to produce engineers who can operate more effectively in an agile manufacturing environment and integrates hands-on training with academic studies in a production setting. CAT is a national demonstration project, and its curriculum was developed in conjunction with educational and industry partners. Currently, CAT’s participants are selected from MTI’s advanced machining graduates. In CAT, one of the partner universities can confer an associate’s degree after 3 years, a bachelor’s degree after 4-1/2 years, and a master’s degree after 6 years. Project Outcomes Focus: HOPE defines successful participants as those who obtain and hold steady employment that includes benefits. For the 1993-94 year, of 185 participants in MTI, 139 (75 percent) completed the program. Of these graduates, 137 (99 percent) were placed in employment at an average hourly wage of $9.50. For more information on Focus: HOPE, contact Kenneth Kudek, Assistant Director, 1355 Oakman Boulevard, Detroit, Michigan 48238, or call (313) 494-4170. 
STRIVE Central, New York City STRIVE—the acronym for Support and Training Results in Valuable Employment—provides participants with tools to navigate the current job market. This employment training and placement project, started in 1985, is for inner-city adults in New York City who have experienced difficulty securing and maintaining employment. STRIVE staff, many of whom have lived the client experience and are project graduates themselves, work to prepare, train, place, and support participants in obtaining unsubsidized entry-level jobs. STRIVE Central is one of 10 community-based organizations in New York’s STRIVE Employment Group; the STRIVE model has also been replicated in Pittsburgh, Chicago, and Boston. The STRIVE network is primarily privately funded, predominantly through a grant from the Clark Foundation that requires a two-for-one dollar match from other sources, such as local employers. Services are free to both employers and participants, and STRIVE officials noted that 90 percent of STRIVE’s resources are allocated to direct services. STRIVE Central, the initial STRIVE site, is located in the basement of an inner-city housing project in East Harlem and is readily accessible to members of that community; STRIVE Central has also opened a satellite location in West Harlem. STRIVE was founded in response to chronically high unemployment rates in East Harlem, the Greater Harlem community, and other disenfranchised neighborhoods of New York City. Social problems including homelessness, substance abuse, crime, and teen pregnancies affect these communities. STRIVE’s founders believed that gainful employment is the most critical element for individuals and families seeking self-sufficiency and empowerment. STRIVE’s mission is to demonstrate the impact attitudinal training and postplacement support have on the long-term employment of inner-city adults. 
Participant Characteristics STRIVE serves inner-city adults, aged 18 to 40, who are unemployed and want to work. The project targets services to people whose difficulty obtaining employment stems primarily from poor attitudes and inappropriate behaviors. While STRIVE has no income eligibility requirements, it often serves the most needy—those on public assistance, single parents, former substance abusers, ex-offenders, victims of abuse, and high school dropouts. STRIVE encourages participants to shed the victim mentality, become self-sufficient, and acquire a solid work ethic. In 1994, STRIVE Central trained 415 individuals. During 1994, STRIVE served similar numbers of women (208) and men (207); however, project officials stated this was an aberration because STRIVE has historically served more women than men. Most participants were African American (71 percent), and 16 percent were Hispanic. Thirty-four percent of participants received public assistance and 33 percent were single parents. Most of the 1994 participants were high school graduates (64 percent) or had obtained a general equivalency diploma (18 percent); the rest were high school dropouts. Project Structure STRIVE’s training focuses on the behaviors needed for successful employment—such as punctuality, a spirit of cooperation, and the ability to take constructive criticism—and on the attitudes that sometimes impede these behaviors, rather than on skills such as typing, word processing, and data entry. STRIVE prepares participants for the work place through a strict, demanding 3-week workshop (120 hours) that emphasizes attitudinal training. Each workshop begins with a “group interaction” session for prospective participants. This 3-hour orientation session helps applicants determine whether they are willing to undergo STRIVE’s training and also allows trainers an opportunity to assess the attitudes and abilities of applicants. 
For example, trainers call attention to late arrivals by questioning the reasons for lateness before the whole group. This could prove to be embarrassing for tardy applicants—their ability to stay in the program depends on handling that embarrassment in a professional manner. Because of the attitudinal issues discussed, and the “no nonsense” manner in which the issues are dealt with, some of the applicants decide that STRIVE is not for them and do not return for the training workshop. Consequently, while STRIVE generally accepts anyone interested in the program, participants screen themselves out as a result of the orientation session; participants may also leave at any time during the 3-week workshop, and some are asked to leave if STRIVE staff believe that they are not sufficiently committed to the program or willing to make changes in their lives. During the intake and application process, STRIVE staff may also make referrals on the basis of their identification of participants’ barriers to successful employment. For example, applicants may be referred to STRIVE partners that serve teens only or referred directly to community services for such problems as mental health needs, substance abuse, or day care needs. If the applicant does not seem to have attitude problems but simply needs assistance in finding employment, the applicant may be referred directly to STRIVE’s job developers, who know about employment opportunities through regular contact with area employers. In addition to attitudinal training, STRIVE emphasizes job placement and postplacement support. STRIVE’s job development staff help participants find employment that offers benefits, skills development, and opportunities for advancement; however, all graduates must successfully apply for and obtain their own positions. No job is viewed as “dead end,” because participants often need jobs that can provide the beginning of a work history as well as a pathway for advancement. 
After placement, STRIVE staff continue to work with clients to upgrade their employment. STRIVE makes a long-term commitment to program graduates because they often lack such support elsewhere. Postplacement support includes assistance with personal and work problems in addition to future education and career planning. Project staff make individual contacts with graduates on a quarterly basis for 2 years as well as regular contacts with employers who hire graduates in order to obtain feedback on training requirements or offer further training assistance. Moreover, STRIVE graduates can request lifetime services. Project Outcomes STRIVE defines successful participants as those who obtain and hold steady employment. STRIVE’s operational standards are to place, in unsubsidized employment, at least 80 percent of the individuals who complete the intensive 3-week training, and for 75 to 80 percent of those placed to retain employment for at least 2 years. From May 1985 through December 1994, the East Harlem site helped 2,424 individuals secure employment. According to project officials, nearly 80 percent of those individuals have maintained employment. In 1994, STRIVE Central trained 415 persons, 318 (77 percent) of whom were placed. For more information on STRIVE, contact Lorenzo Harrison, Deputy Director, 1820 Lexington Avenue, New York, New York 10029, or call (212) 360-1100. The Private Industry Council, Portland, Oregon The Private Industry Council (TPIC) is a private, nonprofit organization providing employment and training services to low-income residents in Portland, Oregon, as well as Washington and Multnomah Counties. The federal government provides 85 percent of TPIC’s funding through JTPA. TPIC is also a subcontractor for the JOBS program and dually enrolls participants in both JTPA and JOBS. TPIC’s mission is to promote individual self-sufficiency and a skilled workforce by eliminating barriers to productive employment. 
TPIC delivers most services for disadvantaged adults from three neighborhood service centers—Northeast Employment and Training Center, Southeast Employment and Training Center, and East County Employment and Training Center. These centers, through case management, provide comprehensive services that remove barriers to long-term employment and self-sufficiency. According to TPIC officials, the three centers target certain populations: The Northeast Center targets African American males and welfare recipients, the Southeast Center targets the homeless population and refugees, and the East County Center primarily serves a Hispanic population and has bilingual English- and Spanish-speaking staff. TPIC also administers a program that serves older workers, the Tri-County Employment and Training Program, as well as programs serving youth. TPIC’s coordinated approach to case management is intended to provide clients with the basic and vocational skills necessary to obtain and keep employment. TPIC’s training system links all entities involved in either preparing adults for the workforce or providing supplemental services that are necessary for a person to become self-sufficient. These entities include businesses, government agencies, community colleges and school districts, and community-based organizations. Participant Characteristics TPIC targets the JTPA-eligible population—people with barriers to employment such as ex-offenders, the long-term unemployed, and high school dropouts. TPIC officials explained that these harder-to-serve clients generally have multiple barriers to employment and are more expensive to train. During program year 1994, TPIC’s JTPA program for disadvantaged adults primarily served women (63 percent). Sixty-one percent of participants were white, 17 percent were African American, 16 percent were Hispanic, 3 percent were Native American, and 3 percent were Asian American. 
Twenty-nine percent of participants were welfare recipients and 21 percent were high school dropouts. Project Structure TPIC provides case management and on-site assessment (36 hours) and links clients with vocational training opportunities. The three neighborhood centers follow a similar approach to program delivery. Each holds a mandatory orientation session, generally twice a month, during which case managers explain the services provided, the types of training available, and the links to training. At the orientation, TPIC staff explain that they maintain a businesslike environment that demands qualities such as timeliness and drug-free participation. Case managers work with individuals to assess their ability to benefit from services. Clients must commit to standards such as attending class every scheduled day, arriving on time, following basic rules for good grooming, and abiding by the guidelines for smoking outside the building. Clients subsequently screen themselves out of training if they are not willing to abide by these standards. When appropriate, case managers make referrals to other community resources for assistance with barriers to employment. Through the assessment process, which takes 3 weeks, staff help participants examine their capabilities, needs, and vocational potential. This objective assessment includes a review of a participant’s family situation, interests, and aptitudes. Additionally, the assessment covers employability skills and includes a basic work place curriculum that focuses on skills such as problem solving and conflict resolution. Clients are also required to develop a self-sufficiency plan and a specific job goal. They must research labor market information and conduct interviews to gather information on careers in which they are interested. The Southeast Center, for example, requires two interviews: one with a person who does the job the participant is interested in and another with a school that provides training for that job. 
Following assessment, case managers assist participants by connecting them to training that includes English as a Second Language, basic skills, vocational skills, on-the-job training, competency training, work experience, and internships. None of the TPIC sites offers on-site basic skills or occupational skills training. A project official estimated that more than half of TPIC participants need some basic skills training, which may be obtained at a local community college or elsewhere in the community at no cost, before they can benefit from occupational skills training. For skills training, TPIC refers participants to its contracted training providers and provides tuition assistance—generally no more than $2,500 for each participant. A project official noted that clients often come to TPIC with an idea of what skills they want; during the assessment process, the case manager and job developer work with these preferences but also steer clients toward fields with opportunities or try to broaden their options. TPIC participants have access to all job opportunities listed through the state employment office, and job developers also help participants find employment. Participants may be involved in a “job club,” which further motivates them and provides job search assistance. TPIC also provides retention services—following up with both the participants and the employers. Project Outcomes TPIC defines successful participants as those who obtain self-sufficiency; for this, TPIC has set a specific, minimum starting wage goal of $7 an hour. All TPIC programs rely on outcome-based measures to determine program performance. Outcomes for the adult training employment programs include the number of clients served and placement, retention, and starting salary rates. During the 1994 program year, TPIC’s JTPA program for disadvantaged adults served 90 percent of the participants it had planned to serve—a total of 682. 
Of the 355 participants who left during the program year, about 68 percent found employment; however, of those completing occupational skills training, about 77 percent were placed. For more information on The Private Industry Council, contact Maureen Thompson, Vice President, 720 South West Washington, Suite 250, Portland, Oregon 97205, or call (503) 241-4600. Comments From the Department of Labor GAO Contacts and Staff Acknowledgments In addition to those named above, the following individuals made important contributions to this report: Catherine Baltzell, Karen Barry, Dianne Murphy Blank, Gary Galazin, Diana Gilman, Benjamin Jordan, Barbara Moroski-Browne, Cynthia Neal, James Owcarzak, Lynda Racey, Robert Rogers, Doreen Swift, and Kathleen Ward. Bibliography Abt Associates, Inc. Evaluation of the Food Stamp Employment Program. Bethesda, Md.: Abt Associates, Inc., June 1990. Berger, Mark C., and Dan A. Black. “Child Care Subsidies, Quality of Care, and the Labor Supply of Low-Income, Single Mothers.” Review of Economics and Statistics, 74(4) (Nov. 1992), pp. 635-42. Burghardt, John, and Anne Gordon. More Jobs and Higher Pay: How an Integrated Program Compares With Traditional Programs. New York: Rockefeller Foundation, 1990. U.S. Department of Health and Human Services, Administration for Children and Families. Summary of Final Evaluation Findings From FY 1989, Demonstration Partnership Program Projects Monograph Series 100-89, Case Management Family Intervention Models. Washington, D.C.: 1992. U.S. Department of Labor. What’s Working (and What’s Not), A Summary of Research on the Economic Impact of Employment and Training Programs. Washington, D.C.: Jan. 1995. _____. Reemployment Services: A Review of Their Effectiveness. Washington, D.C.: 1994. _____. Improving the Quality of Training Under JTPA. Research and Evaluation Report Series 91-A. Washington, D.C.: 1991. U.S. Department of Labor, Employment and Training Administration. 
What Works for Dislocated Workers. Washington, D.C.: 1991. U.S. Department of Labor, Secretary’s Commission on Achieving Necessary Skills. What Work Requires of Schools. Washington, D.C.: 1991. Dickinson, Katherine P., and others. JTPA Best Practices in Assessment, Case Management, and Providing Appropriate Services. Menlo Park, Calif.: SRI International and Social Policy Research Associates, June 1994. U.S. General Accounting Office. Best Practices Methodology: A New Approach for Improving Government Operations. GAO/NSIAD-95-154, May 1, 1995. _____. Child Care: Child Care Subsidies Increase Likelihood That Low-Income Mothers Will Work. GAO/HEHS-95-20, Dec. 30, 1994. _____. Dislocated Workers: Exemplary Local Projects Under the Job Training Partnership Act. GAO/HRD-87-70BR, Apr. 8, 1987. _____. Job Corps: High Costs and Mixed Results Raise Questions About Program’s Effectiveness. GAO/HEHS-95-180, June 30, 1995. _____. Job Training Partnership Act: Actions Needed to Improve Participant Support Services. GAO/HRD-92-124, June 12, 1992. _____. Job Training Partnership Act: Long-Term Earnings and Employment Outcomes. GAO/HEHS-96-40, Mar. 4, 1996. _____. Job Training Partnership Act: Services and Outcomes for Participants With Differing Needs. GAO/HRD-89-52, June 9, 1989. _____. JOBS and JTPA: Tracking Spending, Outcomes, and Program Performance. GAO/HEHS-94-177, July 15, 1994. _____. Multiple Employment Training Programs: Major Overhaul Needed to Reduce Costs, Streamline the Bureaucracy, and Improve Results. GAO/T-HEHS-95-53, Jan. 10, 1995. _____. Welfare to Work: Approaches That Help Teenage Mothers Complete High School. GAO/HEHS/PEMD-95-202, Sept. 29, 1995. _____. Welfare to Work: Measuring Outcomes for JOBS Participants. GAO/HEHS-95-86, Apr. 17, 1995. _____. Welfare to Work: Most AFDC Training Programs Not Emphasizing Job Placement. GAO/HEHS-95-113, May 19, 1995. _____. Welfare to Work: State Programs Have Tested Some of the Proposed Reforms. 
GAO/PEMD-95-26, July 14, 1995. Manpower Demonstration Research Corporation. Papers for Practitioners: Improving the Productivity of JOBS Programs. New York: Manpower Demonstration Research Corporation, 1993. _____. GAIN: Two-Year Impacts in Six Counties—California’s Greater Avenues for Independence Program. New York: Manpower Demonstration Research Corporation, May 1993. Mathematica Policy Research, Inc. International Trade and Worker Dislocation: Evaluation of the Trade Adjustment Assistance Program. Princeton, N.J.: Mathematica Policy Research, Inc., Apr. 1993. National Association of Counties. The Challenge of Quality: Participant Selection, Recruitment and Assignment. JTPA Issues. Washington, D.C.: National Association of Counties, 1990. National Governors’ Association. Research Findings on the Effectiveness of State Welfare-to-Work Programs. Washington, D.C.: National Governors’ Association, 1994. North Carolina Department of Community Colleges, Planning and Research Section. 1992 Critical Success Factors for the North Carolina Community College System: Third Annual Report. Raleigh, N.C.: North Carolina Department of Community Colleges, 1992. Orfield, Gary, and Helene Slessarev. Job Training Under the New Federalism, chapter 13. Chicago: University of Chicago Press, 1986. Orr, Larry L., and others. The National JTPA Study: Impacts, Benefits, and Costs of Title II-A. Bethesda, Md.: Abt Associates, Inc., Mar. 1994. University of California, Berkeley, National Center for Research in Vocational Education. Exemplary Programs Serving Special Populations. Vols. I and II. Berkeley: University of California, 1992. _____. Institutional-Level Factors and Excellence in Vocational Education: A Review of the Literature. Berkeley: University of California, 1991. Wardlow, George, and others. Institutional Factors Underlying Excellence in Vocational Education. St. Paul: University of Minnesota, 1990. Wardlow, George, and Gordon Swanson. 
Institutional-Level Factors and Excellence in Vocational Education: A Review of the Literature. Berkeley: National Center for Research in Vocational Education, University of California, 1991. Related GAO Products Job Training Partnership Act: Long-Term Earnings and Employment Outcomes (GAO/HEHS-96-40, Mar. 4, 1996). Job Corps: High Costs and Mixed Results Raise Questions About Program’s Effectiveness (GAO/HEHS-95-180, June 30, 1995). Welfare to Work: Most AFDC Training Programs Not Emphasizing Job Placement (GAO/HEHS-95-113, May 19, 1995). Welfare To Work: Measuring Outcomes for JOBS Participants (GAO/HEHS-95-86, Apr. 17, 1995). Multiple Employment Training Programs: Major Overhaul Needed to Reduce Costs, Streamline the Bureaucracy, and Improve Results (GAO/T-HEHS-95-53, Jan. 10, 1995). Child Care: Child Care Subsidies Increase Likelihood That Low-Income Mothers Will Work (GAO/HEHS-95-20, Dec. 30, 1994). JOBS and JTPA: Tracking Spending, Outcomes, and Program Performance (GAO/HEHS-94-177, July 15, 1994). Job Training Partnership Act: Actions Needed to Improve Participant Support Services (GAO/HRD-92-124, June 12, 1992). Job Training Partnership Act: Services and Outcomes for Participants With Differing Needs (GAO/HRD-89-52, June 9, 1989). Dislocated Workers: Exemplary Local Projects Under the Job Training Partnership Act (GAO/HRD-87-70BR, Apr. 8, 1987).
Pursuant to a congressional request, GAO reviewed the merits of 6 highly successful employment training programs for economically disadvantaged adults. GAO found that successful employment training projects: (1) serve adults with little high school education, limited basic skills and English language proficiency, few marketable job skills, and past histories of substance abuse and domestic violence; (2) only enroll students who are committed to completing the job training and seeking full-time employment; (3) ensure that clients are committed to training and getting a good job, and as a result, require them to sign an agreement of commitment outlining their responsibilities; (4) provide child care, transportation, and basic skills training, to enable clients to complete program training and acquire employment; (5) improve their clients' employability through on-site workshops and one-on-one sessions and by developing professional workplace attitudes; (6) have strong links with the local labor market and use information from the local market to guide training options; and (7) aim to provide their clients with training that will lead to higher earnings, good benefits, and overall self-sufficiency.
Background Counterfeit Goods Pose a Cost to U.S. Economy and a Threat to Health and Safety Intellectual property is the result of human innovation and creativity in developing the products that people consume every day, whether it is the music we listen to, the books we read, the cars we drive, or the medicine we take. The protection of IP is recognized as important to continuing that innovation and creativity, and the United States has several laws aimed at protecting IP rights. Copyrights, patents, and trademarks are the most common forms of protective rights for IP. Protection is granted by guaranteeing owners limited exclusive rights to whatever economic reward the market may provide for their creations and products. According to the U.S. Intellectual Property Rights Coordinator, industries that relied on IP protection were estimated to account for over half of all U.S. exports, represented 40 percent of U.S. economic growth, and employed about 18 million Americans in 2006, making IP protection important to the nation’s economy. It is difficult to reliably measure criminal activity, but industry groups suggest that counterfeiting and piracy are on the rise and that a broader range of products, from auto parts to razor blades, and from medicines to infant formula, are subject to counterfeit production. The threat to America’s health and safety from the theft of IP and counterfeiting of products is an increasing concern for many reasons—counterfeit batteries can explode, counterfeit car parts can fail to perform, and counterfeit pharmaceuticals can lack the ingredients necessary to cure deadly diseases. In addition to public health and safety concerns, the annual losses that companies face from IP violations are substantial. CBP Leads IP Enforcement at the Border Multiple federal agencies play a role in combating counterfeiting and piracy, and their efforts were wrapped into the administration’s Strategy Targeting Organized Piracy, launched in October 2004. 
One objective of this effort is to improve border enforcement, and CBP is the agency primarily responsible for such enforcement, given its authority to detain and examine shipments and seize goods that violate U.S. law. CBP’s current mission has two goals: preventing terrorists and terrorist weapons from entering the United States and facilitating the flow of legitimate trade and travel; ensuring homeland security is its priority. CBP is responsible for enforcing antiterrorism, trade, immigration, and agricultural policy, laws, and regulations at more than 300 ports of entry. Two CBP offices play a role in carrying out policies and procedures related to IP enforcement: Office of International Trade (OT) – Established in October 2006, this office consolidates the trade policy, program development, and compliance measurement functions of CBP into one office. This office is responsible for providing uniformity and clarity for the development of CBP’s national strategy to facilitate legitimate trade and managing the design and implementation of strategic initiatives related to trade compliance and enforcement, including IP rights. Office of Field Operations (OFO) – This office houses CBP’s border operations and comprises 20 field offices that oversee CBP’s 325 ports of entry. Overseeing more than 25,000 employees, including more than 20,000 CBP officers, OFO is responsible for carrying out CBP’s cargo and passenger-processing activities related to security, trade, immigration, and agricultural inspection. Daily management of port operations is highly decentralized, with field offices overseeing but not directly managing port operations. CBP’s port operations oversee an array of cargo- and passenger-processing environments, and port management structures are not uniform. For example, some ports’ management oversees a single port of entry while others oversee multiple ports of entry (e.g., a seaport and nearby airport). 
Seaports – CBP operations in the sea environment primarily consist of cargo container processing but may include passenger processing for cruise ships. Cargo containers arriving at seaports may be transported to interior ports for processing via an import mechanism called the in-bond system. CBP receives manifest information 24 hours before lading at foreign ports so that it may screen cargo container data to identify high-risk shipments.

Airports – CBP processes passengers and cargo at U.S. airports with international flights. CBP’s air environment includes air cargo, express consignment carriers, such as Federal Express, and international mail. Air cargo shipments are generally larger than express consignment or international mail shipments. CBP receives manifest information in the air environment 4 hours prior to arrival.

Land Border Crossings – CBP processes passengers, commercial trucks and rail, and personal vehicles at land border crossings. CBP receives manifest information for commercial truck and rail shipments 30 minutes to 2 hours prior to arrival, depending on the transport mode.

The volume of goods and people that CBP processes for entry into the United States every year is substantial and has been steadily increasing. For example, the number of “entry summaries” filed with CBP rose from nearly 24 million in fiscal year 2001 to nearly 30 million in fiscal year 2005. In fiscal year 2005, CBP processed approximately 20 million sea, truck, and rail containers and about 450 million passengers and pedestrians. At the same time, the value of import trade has been growing, rising from about $1.2 trillion in fiscal year 2001 to about $1.7 trillion in fiscal year 2005, according to CBP statistics. The largest share of imports by value arrives in the United States via ocean-going cargo containers, followed by air transport, as illustrated in figure 1.
According to CBP, the proportion of import value by transport mode has remained relatively static since fiscal year 1999. Although all goods imported into the United States are subject to examination, CBP examines only a small portion of them. Most exams are conducted for security reasons, and these have been increasing each year, while the number of trade-specific exams has not grown since 2001, as shown in figure 2. In addition, CBP conducts a small portion of exams under its Compliance Measurement Program, which has components to address both security and trade compliance. After the formation of DHS and in light of homeland security priorities, CBP determined that it needed to focus its trade enforcement resources and activities on the most pressing trade issues. CBP has six Priority Trade Issues, one of which involves IP. According to CBP’s Intellectual Property Rights Trade Strategy, the agency’s goal is to improve the effectiveness of its IP enforcement activities; ensure a uniform enforcement approach across its multiple port locations; and focus on making seizures that have high aggregate values, that threaten health and safety or economic security, or that have possible ties to terrorist activity. CBP is also responsible for enforcing International Trade Commission exclusion orders, which arise from an administrative process in which the commission determines that certain products infringe a relevant U.S. law and should be excluded from entry into the United States. CBP coordinates its efforts with DHS’s U.S. Immigration and Customs Enforcement (ICE), which investigates IP violations and builds cases for prosecution.

CBP Undertakes a Series of Steps, Amid Challenges, to Enforce IP Rights at the Border

CBP takes a series of steps to enforce IP rights at the border and faces a number of challenges throughout this process.
Two key offices, OT and OFO, are responsible for carrying out these steps, which include (1) targeting suspicious shipments; (2) examining detained shipments to determine if they carry infringing or excluded goods; and (3) enforcing IP laws by seizing goods and, if warranted, assessing penalties against importers. CBP uses two primary methods to target IP violations: computer-based and manual targeting. Their use varies depending on individual ports’ approaches and the transportation mode being targeted, and both methods have certain strengths and limitations. OT and OFO both use computer-based methods to target vast numbers of commercial shipments, but the primary computer method used for IP purposes has led to a relatively small percentage of IP seizures. Manual targeting by port employees is more ad hoc and flexible than computer-based targeting, given its reliance on employee skill and availability, but determining its effect on IP enforcement is difficult. Ports conduct exams on targeted shipments, and determining whether infringement has occurred during an exam can be a challenging process that requires training and input from experts. Because of differing port practices, information on IP exams has been unevenly recorded across ports, according to officials, making it difficult for CBP to fully analyze the effect of its targeting efforts in this area. CBP’s process for enforcing IP concludes with ports seizing infringing goods and, if warranted, referring cases to ICE and assessing penalties. According to CBP officials, storing and destroying infringing goods has been costly, and the penalty process has resulted in few collections. Agency IP enforcement efforts have primarily been focused on goods for which the trademark or copyright has been recorded with CBP.
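The three-step process just described (target, examine, enforce) can be sketched as a simple screening pipeline. This is an illustrative model only; the shipment fields, criteria, and decision logic below are invented for the sketch and do not reflect CBP’s actual systems or data.

```python
# Illustrative sketch of the three-step IP enforcement flow described
# in this report: (1) target, (2) examine, (3) enforce. All field names
# and criteria are hypothetical, not CBP's actual systems.

def target(shipment, criteria):
    """Step 1: flag a shipment if any targeting criterion matches its
    manifest/entry data (computer-based) or it was manually flagged."""
    computer_hit = any(all(shipment.get(k) == v for k, v in rule.items())
                       for rule in criteria)
    return computer_hit or shipment.get("manual_flag", False)

def examine(shipment):
    """Step 2: a physical exam determines whether the goods infringe."""
    return shipment.get("infringing", False)

def enforce(shipment):
    """Step 3: seize infringing goods; repeat violations may warrant
    a penalty referral."""
    return {"action": "seize",
            "penalty_referral": shipment.get("prior_violations", 0) > 0}

criteria = [{"exporter": "KNOWN_VIOLATOR_LLC"}]  # hypothetical criterion
shipment = {"exporter": "KNOWN_VIOLATOR_LLC", "infringing": True,
            "prior_violations": 1}

if target(shipment, criteria) and examine(shipment):
    result = enforce(shipment)
```

In practice, the examine step is a physical inspection informed by Import Specialists and rights holders rather than a data lookup, so this sketch captures only the control flow.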
CBP Operates in a Challenging Environment to Combat the Entry of Counterfeit Goods

Because importing counterfeit goods is an inherently deceptive activity, it challenges CBP’s ability to process large volumes of goods and determine which are legitimate. CBP regularly confronts nefarious importers who attempt to smuggle IP-infringing goods into the country through various means. For example, according to a CBP press release, in August 2006, the Norfolk seaport found fake Nike shoes concealed inside a refrigerated container behind the jellyfish and salt kelp declared on the manifest. In addition, CBP contends with counterfeiter practices that make it difficult to detect shipments containing IP-infringing goods or distinguish legitimate from unauthorized goods. The law enforcement sensitive version of this report describes some of these practices in greater detail. Amid these challenges, however, OT and OFO both have responsibility for executing a series of actions to (1) target potentially IP-infringing goods as they enter the United States, (2) examine suspicious goods, and (3) enforce IP laws through seizures and penalties, if warranted. Figure 3 depicts an overview of CBP’s process.

CBP Uses Various Targeting Methods, Depending on Port Practices and Transport Modes, Each Having Strengths and Limitations

CBP uses both computer-based and manual targeting approaches to target counterfeit goods either before or as they enter the United States. Both methods have strengths and limitations related to scope, data accuracy and overall sophistication, and resource requirements. CBP ports vary in the degree to which they use these methods, depending on their overall approach to IP enforcement and the modes of transport that they oversee. The primary computer-based targeting method used for IP purposes has uncovered a relatively small share of IP seizures. The effect of manual targeting is more difficult to determine.
In addition, CBP undertakes other targeting and compliance measurement actions that, while not intended for IP enforcement, have uncovered IP violations.

Computer-Based Targeting

CBP’s computer-based targeting for IP violations is handled primarily through its Cargo Selectivity program, a system that targets commercial shipments, which are typically transported to the United States by sea, air, and truck. Commercial shipments may arrive in the United States via various modes of transport, but the largest shipments arrive via ocean-going cargo containers. When data about a particular shipment match the targeting criteria entered into the system, the port of entry is electronically notified, and the shipment is flagged for examination. Cargo Selectivity is not used to target noncommercial shipments, which typically enter the United States via express consignment, international mail, passengers, or private vehicle. Cargo Selectivity criteria can be developed to target on a nationwide basis or on a local, port-specific basis. National criteria for targeting suspected IP violations are developed and overseen by the Strategic Trade Center in Los Angeles (formerly in the Office of Strategic Trade and now under OT), where International Trade Specialists with expertise in particular industries develop criteria based on their analysis of recent seizure activity and information from industry representatives. CBP officials said that Cargo Selectivity criteria are designed to target known or suspected violators and that criteria can be written to target on all or some of a limited number of data elements. Ports develop local criteria in a similar fashion; however, they differ in the degree to which they use national or local criteria. CBP’s Automated Targeting System (ATS) is another computer-based targeting tool, but it is not used to systematically target for IP violations. ATS is a primary component of CBP’s approach for security targeting.
As we note later in this report, CBP has conducted one pilot test of an ATS module for targeting IP violations.

Manual Targeting

Manual targeting describes a range of activities in which individual employees at the ports or the Los Angeles Strategic Trade Center—based on their own knowledge, analysis, and experience—identify certain shipments for examination. According to CBP officials and based on our observations at select ports, manual targeting may involve CBP staff flagging shipments by conducting queries (such as in ATS) or analyzing electronically filed manifest or entry data; reviewing entry paperwork (both paper and electronic entries), which may have been generated as a result of computer-based targeting; visually observing packages in a processing or storage environment; and receiving information from other CBP employees within their own port or at other ports regarding a given shipment, perhaps based on information obtained from law enforcement agencies or rights holders.

Both Targeting Methods Have Strengths and Limitations

Both Cargo Selectivity and manual targeting have certain strengths and limitations relating to their scope, usefulness, and feasibility. Cargo Selectivity allows CBP to quickly screen vast volumes of commercial shipments on a nationwide basis, but it does not work for all types of shipments, and it has uncovered a relatively small share of IP violations since fiscal year 2003. Cargo Selectivity can use only a limited number of elements to target potentially infringing shipments, and getting criteria into the system takes time. According to CBP officials, this lack of sophistication and the cumbersome process limit the system’s overall usefulness for IP targeting.
Also, CBP must use caution when developing its Cargo Selectivity targeting criteria in order to minimize the number of suspect shipments that are false positives, which can create unmanageable workloads for the ports, delay the movement of legitimate goods, and burden importers with exam costs. CBP data show that IP targeting using Cargo Selectivity accounted for only about 3 percent of seizure actions made by CBP during fiscal years 2003 through 2006 and about 10 percent of the total estimated value of goods seized. More information on CBP’s seizure outcomes is provided later in this report. Ports use manual targeting to overcome some of the limitations of Cargo Selectivity. CBP officials at several ports we visited expressed the view that there is no substitute for the skills and experience of a well-trained CBP officer, but other officials noted that CBP cannot rely on manual targeting to process vast volumes of trade. According to CBP officials, manual targeting is heavily dependent on employee availability and expertise; therefore, its use for IP targeting at some ports may be limited, particularly as CBP increasingly focuses its staff resources on security matters. CBP lacks data to fully determine the extent to which its seizure outcomes have resulted from manual targeting, but the portion could be large, given the relatively small portion that stems from Cargo Selectivity.

Other Targeting or Compliance Measurement Actions May Uncover IP Violations

CBP undertakes other actions that, while not intended for IP enforcement, have uncovered IP violations, such as targeting for other trade violations or security reasons or actions that measure compliance with laws and regulations. For example, CBP’s Compliance Measurement Program, a statistical sampling program, is designed to examine randomly selected shipments for their compliance with a range of laws and regulations, including IP laws.
According to CBP, IP violations have been found in a very small percentage (less than one-tenth of 1 percent) of such exams. In addition, Cargo Selectivity, when used to target for reasons such as terrorism or other trade issues, has revealed IP violations. Specifically, an additional 3 percent of seizure actions and 10 percent of estimated seizure value were uncovered from non-IP-related criteria during fiscal years 2003 to 2005. Finally, any shipment that ATS identifies as high risk is automatically subjected to nonintrusive examinations using radiation detection and gamma-ray or X-ray scanning technologies. Such examinations may reveal unexpected results, such as contents that appear to differ from what is described in the manifest or entry data. When IP violations are suspected, the container is referred for further review to port personnel who handle trade enforcement. CBP could not provide data to show how often IP violations were found in this way.

Targeted Shipments Are Subject to Examination, but Determining Infringement Can Be Difficult, and Exam Results Are Unevenly Recorded

Once shipments have been targeted for examination, CBP personnel at the ports are to examine shipments and record the results of their exams in CBP’s data systems. Because of high counterfeit quality and complex U.S. IP laws, determining whether IP infringement has occurred can be difficult. CBP provides some training to assist ports in this endeavor. Because of variations in port practices for recording IP exam results, CBP’s exam data are uneven, according to CBP officials, which limits the agency’s ability to assess the effectiveness of its targeting.

Physical Exams Are Performed to Determine IP Violations

CBP uses the term “exam” to refer to a range of actions, including paperwork reviews, nonintrusive exams, and physical exams in which CBP examines all or a portion of the targeted goods. According to CBP officials, physical exams are the best means for assessing potential IP infringement.
The procedures for conducting physical examinations differ according to the mode of transport and the movement of goods for examination.

Sea Cargo Examinations. In the sea cargo environment, sea containers are generally transported and examined away from their point of arrival at CBP exam facilities located at some distance from the port itself. When multiple shipments are contained in a single cargo container, these are first moved to a container freight station for debundling, and then targeted shipments within the container are moved to the examination warehouse.

Air Cargo Examinations. In the air environment, the examination location may vary. For example, air cargo shipments may be examined at their arrival location or moved to a CBP exam facility, while international mail and express consignment shipments may be examined at their arrival location.

Land Border Crossing Examinations. At land border crossings, vehicles and packages are moved to examination areas while still under CBP’s control.

CBP personnel who perform physical exams at the ports have discretion over how intensive an exam will be. For example, according to CBP and ICE officials, decisions are made about how many boxes will be opened from a single ocean-going cargo container; which boxes will be opened; and whether the container will be partly or fully unloaded. CBP personnel are to follow a common set of procedures grounded in law and regulation once shipments are opened for examination. When CBP decides to examine goods, it has a 5-day period, following the date on which the goods were presented for examination, to decide whether it will detain or release them. If examining personnel can immediately identify goods as counterfeit, perhaps because they have recently seized similar goods or the violations are obvious, CBP initiates procedures to detain the goods.
However, because this is often not the case, samples of the merchandise may be provided to commodity experts at the ports, called Import Specialists, who evaluate the goods for IP infringement. As a result of their evaluation, Import Specialists will either order the goods to be detained for further review or released. When CBP decides to detain goods, it must notify the importer of the detention within 5 days after the decision was made, and may also notify the affected rights holder. The notice to the importer advises, among other things, the reason for the detention and the nature of any information or tests that CBP requires in order to process the matter. The importer and rights holder can take various actions, and communication with CBP may ensue for a period of 30 days. According to CBP officials, some rights holders are willing to negotiate with importers, which may involve financial compensation for the rights holder. If the importer’s actions fail to secure release of the goods within 30 days or CBP finds them to be infringing, the agency proceeds with seizure and forfeiture actions.

Identifying IP Violations Can Be Difficult

Because of high counterfeit quality and the complexity of U.S. laws, determining IP infringement can be difficult in some instances. To help ports assess whether goods are authentic, CBP’s regulations provide rights holders the option to record trademarks, trade names, or copyrights with CBP. CBP currently charges $190 for its recordation application. Through the fee-based recordation process, CBP collects information from an IP owner about specific registered trademarks, copyrights, or trade names, and then enters that information into an electronic database accessible by CBP officers at ports across the country.
However, CBP officials said that some IP owners do not record their rights with CBP, meaning that CBP lacks information about their products and access to individuals within the company who can address potential infringement. Moreover, when counterfeit quality is quite good, even the rights holder may have to conduct research to distinguish real from fake. In addition, the complexity of U.S. IP laws and CBP’s array of seizure authorities present challenges to port staff, according to CBP officials. CBP has an array of detention and seizure authorities for IP violations; port personnel must be aware of these authorities and ensure their actions are in accord with them. CBP advises the ports that the most appropriate seizure authority will depend on the type of IP right infringed, whether the right is federally registered, whether the right is recorded with CBP, and the type of alleged infringement. CBP is authorized, in some instances, to seize goods for which the right is registered with appropriate rights-granting authorities but not recorded with CBP. However, OT’s lead IP attorney stated that because the statutory bases for such enforcement are criminal statutes that carry certain limitations and evidentiary requirements, such seizures would be available only in clear cases of counterfeiting or piracy and would require CBP to establish more elements of the infringement than is required for recorded rights. Therefore, CBP directs ports to focus their IP enforcement on recorded goods. For certain other types of violations, CBP does not commence seizure actions, either because of agency policy or the lack of proper authority. CBP and rights holders offer training to port personnel to assist them in evaluations. For example, OT’s IP attorneys train port personnel, including CBP Officers, Import Specialists, and others, on the legal and regulatory authorities in this area.
Between 35 and 40 ports received training in each of fiscal years 2005 and 2006. In addition, private-sector training is periodically arranged for CBP ports. Rights holder representatives give CBP staff advice about how to determine whether goods are counterfeit. For example, rights holders may tell CBP about security features they embed in their products or other methods they use to protect their IP.

Uneven Quality of Exam Data Limits CBP’s Ability to Analyze IP Enforcement Efforts

CBP maintains data systems in which port staff are required to record the results of exams they conduct, but the uneven quality of exam data has made it difficult for the agency to accurately analyze the results of IP targeting and to fully track all incidents of IP violations, according to CBP officials. CBP requires port staff to document exam results, whether or not violations were found, in order to contribute to the agency’s repository of enforcement knowledge and for future targeting purposes. Officials in OT and its legacy offices familiar with the agency’s data for IP-related exams told us the data are not uniform or timely and contain inaccuracies. For example, these officials said they have found instances where goods were seized for IP violations, but exam records did not indicate that any discrepancies were found. Also, officials said that staff at some ports quickly record exam results, but staff at other ports do not record exam results until several months after the goods have been detained or seized, if ever. Among ports that quickly record results, sometimes port staff must later revise the exam results if further investigation determines that the goods are legitimate.
In addition, although port staff are directed to indicate in CBP’s data systems whether IP violations were found and can add additional information about laws that were violated or a narrative description of the goods, officials in OT and its legacy offices that are familiar with the data said that whether and how they record this information varies. These officials were not sure what accounted for variations in the quality of exam results across ports, and we did not independently assess CBP’s policies or port practices for recording exam results.

CBP’s Key IP Enforcement Actions Entail Seizing Goods and Issuing Penalties

CBP’s process for enforcing IP rights concludes with the seizure of counterfeit goods and, if warranted, the assessment of penalties against the IP infringer. A seizure action, as defined by CBP, entails CBP taking control and/or possession of articles imported contrary to law. Penalties result in monetary fines imposed on the violator. Once CBP officers have examined goods and determined that they are counterfeit, the legal process to seize goods is initiated. This process, carried out by Fines, Penalties, and Forfeiture offices at the ports, entails (1) resolving or deciding CBP compliance actions; (2) providing advice to other CBP officers on the various trade violations; (3) securing or maintaining seized property; and (4) making sure CBP’s automated system accurately reflects the description, location, value, and forfeiture status of seized goods. CBP calculates the domestic value of seizures to track the value of goods it intercepts. For penalties it issues under 19 U.S.C. 1526(f), CBP calculates the Manufacturer’s Suggested Retail Price (MSRP), the value the merchandise would have had if it were genuine. CBP determines whether additional enforcement in the form of civil penalties should be assessed.
It is CBP’s policy not to assess a penalty, however, until all forfeiture proceedings have been completed, and the property, except for representative samples, has been destroyed. When assessing 19 U.S.C. 1526(f) penalties against a first-time violator, the amount of the penalty may be equal to or less than the MSRP. In the event of a second or subsequent seizure, the amount of the penalty is assessed at twice the value of the merchandise, based on the MSRP at the time of seizure. For seizures valued over $100,000 in domestic value, OFO maintains responsibility for reviewing the case before any legal action is taken. For penalties that are assessed, CBP officials said that substantial resources are dedicated to processing penalty cases, but they also said that penalty amounts are seldom fully collected. CBP’s collection issues are discussed later in this report. Depending on the size and value of a seizure, CBP officers coordinate with ICE agents, who then consult the Department of Justice’s U.S. Attorneys’ Offices to determine if criminal investigation and prosecution are warranted. For example, a repeat violator may warrant criminal action if ICE has enough information to initiate a criminal investigation and build a case. Seized goods have to be secured, as they have potential value but cannot be allowed to enter U.S. commerce. Storage may be prolonged by law enforcement actions, but the goods are generally destroyed or otherwise disposed of according to law once they are determined to be illegal and are no longer needed. According to CBP officials, as seizures have increased, the agency’s storage and destruction costs have grown and become increasingly burdensome. CBP reports that it spent over $9.1 million to destroy seized property between fiscal years 2001 and 2006. CBP officials said that the environmental regulations for disposing of goods, particularly in states like California, prevent CBP from disposing of certain counterfeit goods in landfills.
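The 19 U.S.C. 1526(f) penalty schedule described above reduces to simple arithmetic. A minimal sketch, using an invented MSRP figure; actual assessments involve mitigation and legal review that this arithmetic does not capture:

```python
# Sketch of the 19 U.S.C. 1526(f) penalty schedule described in this
# report. The MSRP value below is invented for illustration; real
# assessments involve mitigation and legal review not modeled here.

def max_penalty(msrp, prior_seizures=0):
    """First offense: up to the MSRP (the value the goods would have
    had if genuine). Second or subsequent seizure: twice the MSRP."""
    return msrp if prior_seizures == 0 else 2 * msrp

first_time = max_penalty(50_000)                     # up to $50,000
repeat = max_penalty(50_000, prior_seizures=1)       # $100,000
```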
Often the goods are destroyed through incineration.

IP Enforcement Outcomes Have Been Concentrated in Certain Areas and Reflect Pockets of Activity

The bulk of CBP’s enforcement outcomes have been concentrated within certain modes of transport and product types and reflect pockets of activity among a limited number of ports. Although the total number of seizure actions has grown since fiscal year 2001, nearly doubling between fiscal years 2005 and 2006, most of these actions involved small-value seizures made from air-based modes of transport and may reflect a shift in smuggling techniques toward larger numbers of smaller-sized shipments. The total estimated domestic value of goods seized since fiscal year 2001, however, has fluctuated due to variations in seizure activity involving large, ocean-going containers, in which the highest-value seizures tend to be made. While the types of goods seized have varied over time, wearing apparel, cigarettes, footwear, and handbags have accounted for the majority of estimated seizure value in the past 6 years. Ten or fewer ports, including some of the nation’s largest ports and others that are significantly smaller, have accounted for the bulk of seizure and penalty outcomes since 2001. Of these ports, six have made consistent contributions each year to IP enforcement outcomes. CBP measures IP seizure activity in two ways: number of seizure actions and estimated domestic value of goods seized. The number of goods in one seizure action can range from a few items shipped via international mail to hundreds of boxes of goods in an ocean-going cargo container. CBP maintains the official seizure statistics for DHS, including those made by CBP, ICE, or jointly by the two agencies. CBP captures data on the transport mode or processing environment in which goods were seized, but in compiling seizure data, neither OT nor its legacy offices routinely verify this data field.
In addition, there are certain limitations in CBP’s seizure data, such as the precision of the estimates. However, we found CBP’s data to be sufficiently reliable to indicate broad trends.

Majority of IP Seizure Actions Occurred in Air Transportation Environment While Value of Seized Goods Was Concentrated in Sea Environment

For fiscal years 2003 through 2006, most IP seizure actions have been concentrated in the air transportation environment, while most seizure value has been concentrated in ocean-going cargo containers. As shown in figure 4, about 78 percent of total seizure actions during fiscal years 2003 through 2006 occurred in air transportation processing environments. Significantly fewer seizure actions were made from ocean-going cargo containers or land-based transport modes, such as truck, train, or auto. Conversely, ocean-going cargo container seizures represented about 60 percent of total estimated seizure value during those years, with significantly smaller portions of value generated by air- and land-based seizures. Even though about one-fourth of U.S. imports enter by land-based modes of transport, seizure actions and values in these modes were less than 5 percent of total seizures, as measured by either indicator. CBP seizure data show that the number of seizure actions grew steadily from fiscal year 2001 to 2006, while domestic values fluctuated. As shown in figure 5, there were over 3,500 seizure actions in fiscal year 2001, increasing to over 14,000 in 2006. As also shown in figure 5, the domestic value of goods seized in these years has fluctuated, reaching a high in 2006 of more than $155 million. Although the overall trend for both measurements is upward, these outcomes represent a small fraction of overall imports.
For instance, in fiscal year 2005, the domestic value of IP seizures represented less than one-tenth of 1 percent (0.02 percent) of the total value of imports of goods in product categories that are likely to involve IP protection. It is impossible to know whether these seizure outcome trends reflect improved enforcement actions or an increase in the share of counterfeit trade entering the United States during those years. According to our analysis, growth in the number of seizure actions was fueled by increases in small-value seizures made in express consignment and international mail facilities. CBP publicly cites an 83 percent increase in the number of seizure actions from fiscal year 2005 to 2006 as an indicator of its growing IP enforcement success. However, our analysis shows that this growth was driven primarily by smaller-value seizures. For example, to date, the largest number of seizures for international mail occurred in fiscal year 2006, with the number of seizure actions and their domestic value more than doubling from the previous year. CBP officials said they believed that while some of these seizures are personal shipments ordered from Internet sites that sell counterfeit merchandise, others have contained quantities of goods too large for personal use. Some officials also speculated that counterfeiters may be breaking commercial shipments into smaller components to avoid detection and face more limited losses in the event of a seizure. However, CBP has not conducted any systematic analysis to support these observations. Our analysis also shows that several factors influence trends in seizure value. Spikes in seizure value in fiscal years 2004 and 2006, accounting for approximately 30 percent of seizure value in each of those years, were largely due to shipments moving through the in-bond system. 
Also, CBP data show that about 15 percent of seizure value during fiscal years 2003 through 2006 stemmed from seizures made by ICE during its investigations, and OT and ICE officials said that ICE may have played a role in some of the seizures attributed to CBP. Notably, in fiscal year 2006, a combined CBP and ICE initiative resulted in the seizure of 77 cargo containers of fake Nike Air Jordan shoes and 1 container of fake Abercrombie & Fitch clothing that were transported in part using the in-bond system (they entered the Los Angeles seaport, transited through Arizona, and were supposedly destined for export to Mexico). The estimated domestic value of these goods was about $19 million, representing about 12 percent of total domestic seizure value in fiscal year 2006.

Seizures Have Been Concentrated among Certain Types of Products

Although the types and quantities of seized goods vary over time, seizures over the past 6 years have been highly concentrated among certain types of products. For example, seizures of footwear, wearing apparel, handbags/wallets/backpacks, and cigarettes accounted for over 60 percent of the aggregate value of goods seized over the past 6 years. Table 1 shows that footwear and wearing apparel accounted for 57 percent of the domestic value of goods seized in fiscal year 2006, with the high percentage of footwear seized partially resulting from the large CBP/ICE seizure described earlier. Health care products and pharmaceuticals accounted for only 3 percent of such value. When asked why commodity seizures vary from year to year, CBP officials stated that counterfeiters produce fake goods depending on marketplace demand at any given time. However, it is difficult to determine whether CBP and ICE seizures are representative of the types of counterfeit products entering the United States in any given year or merely reflect counterfeit products that CBP detected.
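Commodity-concentration figures like those in table 1 follow from a straightforward aggregation of seizure records. A minimal sketch, using invented seizure values rather than CBP data:

```python
# Sketch of a commodity-concentration calculation like table 1.
# The seizure records below are invented for illustration; they are
# not CBP data.
from collections import defaultdict

seizures = [  # (commodity, estimated domestic value in dollars)
    ("footwear", 19_000_000), ("wearing apparel", 8_500_000),
    ("footwear", 4_200_000), ("handbags", 2_100_000),
    ("cigarettes", 1_700_000), ("pharmaceuticals", 600_000),
]

# Sum seizure value by commodity.
totals = defaultdict(int)
for commodity, value in seizures:
    totals[commodity] += value

# Convert to percentage shares of total seizure value, largest first.
grand_total = sum(totals.values())
shares = {c: round(100 * v / grand_total, 1)
          for c, v in sorted(totals.items(), key=lambda kv: -kv[1])}
```

With these made-up records, footwear alone accounts for roughly two-thirds of total value, illustrating how a single large seizure (like the CBP/ICE initiative above) can dominate a year’s commodity mix.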
CBP reports that seizures are also concentrated in one trading partner—for fiscal years 2001 through 2006 combined, exports from China accounted for 65 percent of total seized IP domestic value. Hong Kong and Taiwan are distant seconds to China, accounting for 6 and 5 percent of seizures in that period, respectively. The combined share of goods seized that were exported from China and Hong Kong grew about 40 percent per year during this period. Also listed among trading partners for which goods were seized for IP violations in fiscal years 2001 through 2006 are Korea, Pakistan, Russia, South Africa, and Singapore.

Ten or Fewer Ports Have Accounted for the Bulk of IP Enforcement Outcomes

CBP's IP enforcement outcomes, including seizures and penalties, have been highly concentrated among a limited number of ports across the country. We analyzed CBP's seizure and penalty data for fiscal years 2001 through 2006 and found that the bulk of enforcement activity was carried out by 10 or fewer ports in each of the four enforcement categories we reviewed: (1) total number of seizures, (2) total seizures by domestic value, (3) total penalty cases opened, and (4) total amount of penalties assessed. The top 10 ports for each category differ, but there is considerable overlap among the ports so ranked in each category. Table 2 shows that during fiscal years 2001 through 2006, the top 10 ports ranked by total seizure actions accounted for nearly two-thirds of those actions, and the top 10 ports ranked by total domestic seizure values accounted for nearly three-fourths of those values. Table 2 also shows even greater concentration among the top 10 ports for penalty cases opened and penalty amounts assessed. Our analysis indicates that, while seizure actions have been more broadly dispersed among CBP's ports, fewer of them have accounted for the bulk of seizure value, and even fewer ports have assessed penalties for the seizures they make.
The mix of ports in these rankings is surprising because it includes some of the nation's largest ports as well as several that are significantly smaller. Moreover, as we discuss later, some of the ports among which seizures are concentrated are not among the top ports in terms of IP import value; conversely, some high IP-importing ports have shown more limited seizure outcomes. Because of inconsistencies in the way CBP records seizures by port, it can be difficult to determine precisely which processing environments accounted for seizure outcomes. We were able to determine the seizure reporting practices of the seven ports that we visited. For example, at one port, CBP uses one port "code" to record IP seizures made at the seaport and the nearby airport. Seizure data for another port includes seizures made at the seaport and from the international mail processing environment but does not include other air-based seizures made at the nearby airport, which are reported separately. Neither OT nor its legacy OFO office at the headquarters level could provide us with information to understand the seizure reporting practices of CBP's ports. Among these top 10 ports, there is further concentration of activity. For example, we found that four ports ranked in the top 10 for all four categories and two ports ranked in the top 10 for three of the four categories. This indicates that the majority of enforcement outcomes were produced by a handful of ports over the 6-year period. Appendix II contains additional analysis of these six ports' enforcement outcomes. Outside of the most active seizing ports, seizure outcomes have been more dispersed and prone to fluctuation. CBP's seizure data showed that around 243 ports of entry made seizures at least once during the past 6 years. About 190 of those ports reported fewer than 100 aggregate seizure actions since 2001, and over 100 reported less than $100,000 in 6 years' aggregate domestic value of seizures.
Some of these ports had very little activity for 2001 through 2006, while others had occasional spikes in their seizure actions or domestic seizure values.

CBP's Approach to Improving IP Enforcement Lacks Integration and Has Produced Limited Results

CBP lacks an integrated approach across key offices for improving border enforcement outcomes, having focused on certain efforts that have produced limited results while not taking the initiative to understand and address the variations among ports' enforcement outcomes. While CBP's strategic plan notes the importance of IP enforcement under two of CBP's strategic goals, the plan lacks performance measures to guide CBP's IP border enforcement efforts and assess its agencywide progress. CBP efforts to improve its IP border enforcement process, led by OT or its legacy offices, have produced mixed results: (1) poor data collection during port-based pilots limits OT's ability to evaluate the effectiveness of a new risk model for IP targeting; (2) audits conducted to assess certain importers' IP-related controls have resulted in some penalties, but many audits are not yet complete; and (3) actual collections on IP penalties remain far below the assessed amounts. Meanwhile, CBP has not attempted to understand variations within or among ports' enforcement outcomes. Using CBP's own import and seizure data, we identified some factors that may have influenced enforcement outcomes and some enforcement anomalies among key ports that bear further investigation. We also performed a simple analysis that could help CBP identify ports with higher or lower than average seizure outcomes. For example, of the 25 ports that accounted for over 75 percent of the value of total IP-type imports in fiscal year 2005, just 8 had higher than average shares of IP seizures (by value) compared with their IP imports (by value).
Such analysis could be further refined by incorporating information about ports' import composition or by making comparisons across similar processing environments. Absent an integrated approach, such analysis would likely be conducted by OT, even though the responsibility for overseeing and influencing port operations lies with OFO.

CBP's Strategic Plan Lacks Performance Measures for IP Enforcement

In its agency strategic plan, CBP notes the importance of IP enforcement under two strategic goals; however, it has not included any performance measures to guide its IP border enforcement efforts and assess progress on its agencywide efforts. Leading organizations use strategic plans and performance measures to communicate their vision and priorities on an agencywide basis and establish accountability. Under its strategic goal to facilitate legitimate trade and travel, CBP identifies the enforcement of relevant U.S. laws under its jurisdiction, including IP, as one of its strategic objectives. Although IP is included as a strategic objective, there are no specific IP performance measures under this goal. CBP also addresses IP enforcement under its goal to protect America and its citizens by prohibiting the introduction of illicit contraband, counterfeit goods, and other harmful materials. However, the performance measures for this goal relate to prohibited agricultural items and narcotics. For example, there are three separate measures that address narcotics seizures, but none specific to IP. CBP does measure and report internally on the number of IP seizure actions and estimated seizure value and has other indicators related to IP enforcement, but these measures and indicators are not included as performance measures in its strategic plan. These measures and indicators are found in CBP's IP Rights Trade Strategy, an internal document classified as "For Official Use Only" with limited distribution across CBP.
In our discussions with CBP, agency officials responsible for developing and overseeing the IP Rights Trade Strategy referred to it as an internal planning document on IP enforcement. The internal nature of this document, unlike an agency strategic plan with performance measures, limits its usefulness in holding CBP accountable to Congress for its performance on IP enforcement.

Legacy OT Offices Have Led Certain IP Improvement Efforts, but These Have Produced Mixed Results

Offices that are now part of OT have taken the lead in carrying out certain efforts to improve IP border enforcement, but these efforts have produced limited results. In its IP Rights Trade Strategy, CBP states that its goal is to support the administration's Strategy Targeting Organized Piracy (STOP) and to focus on high-value seizures and seizures related to public health and safety issues. Among the initiatives outlined in the document, key IP border enforcement efforts include (1) improving computer-based IP targeting, primarily by developing a statistical risk-assessment model to complement Cargo Selectivity; (2) using audits to assess certain importers' controls for preventing IP-infringing imports; and (3) issuing guidance to give ports more discretion on when to assess IP penalties. Although computer-based targeting and audits are considered STOP priorities, none of these efforts has thus far significantly advanced CBP's focus on high-value seizures or seizures related to public health and safety.

Effectiveness of New Targeting Model Cannot Be Determined

A key component of CBP's IP enforcement improvement efforts has been the development of a statistically driven risk-assessment model. However, problems with implementation in the first field-based pilot test and poor data collection in a second pilot test prevent OT from fully evaluating the model's results. CBP developed this model to improve its ability to target unknown IP violators.
The model figures prominently in CBP's strategic plan and has been continually highlighted as one of CBP's main contributions to STOP. CBP's risk model differs from Cargo Selectivity in two key ways: First, it uses statistical analysis of past seizures and certain other information to target future shipments, whereas Cargo Selectivity relies on human analysis to develop targeting criteria. Second, CBP officials told us the model is designed to identify unknown violators based on its analysis of past seizure patterns, whereas Cargo Selectivity primarily targets known or suspected violators. However, pilot tests conducted in 2005 and 2006 revealed problems and produced limited results:

In the first pilot, run for 1 month by the Los Angeles Strategic Trade Center, the model was more efficient at targeting IP violations than Cargo Selectivity, according to CBP. However, CBP's implementation methodology resulted in certain targeted shipments being released before they could be examined, affecting CBP's ability to fully evaluate the model's accuracy. Also, the model sometimes targeted the shipments of actual rights holders, forcing CBP analysts to intervene to prevent exams of authentic goods. The pilot helped CBP refine the risk threshold at which it should conduct examinations based on the model's targeting.

The second pilot was run for about 3 months at one seaport and two land border crossings, but OT is unable to determine how the model worked because of weaknesses and inconsistencies in the data that participating ports collected for the pilot, according to OT officials. For example, for some targeted shipments, the data does not contain any exam results, even though goods were ultimately seized from those shipments. In other instances, the data contains exam results for shipments that were too low risk to have been targeted by the model. Although CBP has already cited the model as an accomplishment, it does not know how well the model works.
OT officials said they plan to develop and carry out a third pilot by having the Los Angeles Strategic Trade Center keep track of what the model has targeted and what the exams revealed. However, this may be difficult because the center will have to communicate directly with ports involved in the pilot to determine what exams were conducted and what results were found. It is not clear whether further revisions will improve the model or what role the model will play in CBP's overall IP targeting strategy. After the first pilot of the risk model, OFO officials said they began developing a set of rules to target IP violations using ATS. These rules were developed with input from the former Office of Strategic Trade (now part of OT) and were tested concurrently with the risk model during the second pilot. However, data collection weaknesses also limit CBP's ability to assess the effectiveness of these rules, and their role in future targeting is unclear.

Audits Have Resulted in Some Seizures and Penalties, but Their Efficacy Is Limited

Another prominent undertaking that is discussed in CBP's strategic plan and represents a second CBP contribution to STOP is the use of audits to uncover IP violations among select importers. These audits, initiated by the former Office of Strategic Trade (now part of OT), are referred to as "post-entry" audits because they examine records of goods that have already been imported and released into commerce. This single-issue audit is designed to evaluate the adequacy of an importer's internal controls over IP imports, test whether the internal controls provide reasonable assurance that the company is compliant with CBP laws and regulations, and potentially find previously undetected IP violations made by the importer.
These audits are most likely to be effective when dealing with compliance problems of established importers and less so when dealing with importers that purposely evade federal scrutiny, as is the case for importers involved in counterfeit trade. The audits began in 2005 and have produced some enforcement outcomes, but both the audits and the post-audit decision process have been time-consuming. OT's Regulatory Audit Division, in consultation with the Los Angeles Strategic Trade Center, selected approximately 20 importers as audit candidates in each of fiscal years 2005 and 2006, some of which had a history of IP violations. Of these 41 audits, 23 have been completed, 12 are in progress, 4 were suspended because they involved companies that ICE was investigating, and 2 have not yet started. CBP has selected about half of the next 20 companies to be audited in fiscal year 2007. The completed audits found evidence that some companies had imported IP-infringing goods, that some counterfeit goods had been released into commerce, and that other infringing goods remained in company warehouses. According to agency officials, based on these violations, CBP decided to assess penalties against certain companies, but the intra-agency review process for the penalties took about 1 year because of internal discussions about the legal basis for assessing penalties on goods that had already entered commerce. Although CBP has moved forward with penalties in certain cases—four companies were assessed penalties totaling over $5.7 million—future penalty decisions must be deliberated on the facts of each case, according to CBP officials. CBP officials said they periodically monitor the import performance of these audited companies and have found one additional instance of IP infringement. When audit findings are significant, CBP works with the importer to develop a Compliance Improvement Plan to help prevent further IP violations.
At CBP’s December 2006 Trade Symposium, the Assistant Commissioner for Trade discussed CBP’s plan to conduct a greater share of its trade enforcement using these post-entry audits rather than cargo exams. The time consuming nature of these audits and their greater efficacy with established importers than with smugglers indicates the difficulties that the agency will face in making this kind of enforcement shift. New Penalty Guidance Reduces the Number of Uncollectible Penalties, but Actual Collections Remain Low A third undertaking has been the development of new guidance to reduce the number of uncollectible penalties that CBP assesses, thereby freeing up port resources for other activities. CBP officials reported that significant resources are dedicated to processing penalty cases; however, they noted that few penalties are collected and such enforcement has little deterrent effect. We reviewed CBP penalty data and found that less than 1 percent of penalty amounts assessed for IP violations were collected annually during fiscal years 2001 through 2006 (see table 3). Various factors contribute to CBP’s limited collection rates on IP penalties, including petitions for mitigation or dismissal by the violator, dismissal due to criminal prosecutions, and the nature of counterfeit importation. CBP officials said that many violators petition to have a penalty mitigated or dismissed, and these actions often reduce the amount of the penalty that CBP collects. One agency official explained that some penalties are dismissed as a result of the case going to criminal prosecution, in which the U.S. Attorney negotiates to have a penalty dropped in exchange for information or other evidence that will support the criminal case. Also, the deceptive nature of counterfeit importation makes it difficult for CBP to track violators and enforce penalties. 
To address the problem of poor collections, OFO communicated new CBP guidance to ports in 2006 that caused new penalty cases and assessed amounts to drop, but this has had limited impact on narrowing the gap between the amounts assessed and collected. CBP's guidance describes 10 problem scenarios under which penalties are unlikely to be collected so that Fines, Penalties, and Forfeiture offices can avoid assessing penalties in those circumstances. The objective of the guidance is to reduce resources spent assessing uncollectible penalties as well as to narrow the gap between penalties assessed and collected. Total penalties assessed declined by about two-thirds between fiscal years 2005 and 2006, from about $424 million to about $137 million, which CBP officials attribute to the new guidance. Despite an increase in the amount of penalties collected from fiscal year 2005 to 2006, from about $406,000 to about $614,000, less than 1 percent of the total penalties assessed was collected in each of the fiscal years we reviewed.

CBP Has Not Analyzed Variations in Port Enforcement Outcomes

Despite some of its improvement efforts, CBP has not undertaken any systematic analysis of its seizure and penalty activity to understand variations in enforcement outcomes within or across ports or to learn from ports that have been relatively more successful in capturing IP-infringing goods. For example, CBP has not analyzed the potential reasons for fluctuations in seizure outcomes at key ports that have affected its overall seizure outcomes, and it has not identified inconsistencies between seizure and penalty outcomes at certain ports. Performing this type of analysis raises questions about individual ports' performance outcomes and may help to inform CBP about potentially effective port practices for IP enforcement.
For example, in our analysis of the 25 ports that brought in over 75 percent of the total value of IP-type imports in fiscal year 2005, we identified 8 whose seizure rate (i.e., the percentage of IP seizures out of IP imports at that port, by value) was higher than the average for the top 25 IP-importing ports. We also found that some of the largest IP-importing ports had very small seizure rates relative to other IP-importing ports. Such analysis can be performed using existing data and could help CBP focus on identifying the handful of ports with relatively stronger enforcement records and determining whether their strategies and practices could be expanded and used at other ports.

CBP Has Not Analyzed Yearly Fluctuations in Port Enforcement Outcomes

CBP has not analyzed year-to-year fluctuations in ports' seizure activity or other enforcement outcomes that may seem inconsistent with a port's overall IP enforcement profile. In some years, fluctuations at key ports have had a significant impact on the agency's overall enforcement results. For example, the number of seizure actions at one port has accounted for a growing share of all seizure actions since fiscal year 2001, representing nearly half of all seizure actions in fiscal year 2006, while seizure actions at another port have declined over time, representing 18 percent of total seizure actions in fiscal year 2002 and less than 1 percent in fiscal year 2006. Two ports have been leading contributors to total estimated seizure values. However, while the value of seizures made by one of these ports generally rose from fiscal year 2001 to 2006, the value of seizures made by the other port has fluctuated significantly, representing about 60 percent of total seizure value in fiscal year 2002 and about 17 percent in fiscal year 2005. The identities of these ports have not been included in this report for law enforcement reasons.
In addition, CBP has not analyzed individual ports' seizure and penalty outcomes to identify potential inconsistencies in the use of certain enforcement tools. We identified some ports with high seizure outcomes that have not had correspondingly high penalty outcomes. While ports may not necessarily assess penalties on all of their seizures, particularly if the seizure value is limited, we identified ports with relatively high average seizure values but limited penalty outcomes. For instance, three ports ranked among the top 10 ports for seizure value for fiscal years 2001 through 2006, with average seizure values ranging from about $83,000 to about $123,000, but none of these ports ranked among the top 10 for penalty cases opened, and only one of them ranked among the top 10 for penalty amounts assessed. The penalty cases these ports opened during this period ranged between 3 and 7 percent of their seizures. In comparison, the second-highest-ranked port for seizure value, with an average seizure value of about $88,000, was also one of the most active ports for penalty actions, opening penalty cases for about 37 percent of its seizures. Moreover, during our audit, we identified a sharp drop in penalty outcomes, starting in fiscal year 2004, at one of the top 10 ports that neither OT nor OFO had identified. In investigating the reason for the drop, OFO determined that the port was not following proper procedures for assessing and reporting its IP penalties. When we asked officials in OT and its legacy offices whether they had analyzed port enforcement outcomes in this way and could explain these anomalies between seizure and penalty outcomes, they responded that such analysis had not been conducted; we also asked to what they attributed these fluctuations and inconsistencies.
These officials said that imports vary among ports, seizures fluctuate according to changes in the nature of counterfeit trade, and ports have different priorities for performing IP enforcement. Overall outcomes or fluctuations might also be due to variations in ports' enforcement techniques or to changes in IP enforcement resources or the availability of skilled personnel. For example, an official at one port we visited told us that during fiscal year 2004, the port made an important change in its methods for performing trade-related targeting, disbanding a seven-person team that had focused solely on trade targeting and shifting these resources to security targeting. During our February 2005 visit to this port, the port official stated that this change had contributed to significantly lower seizure activity at the port in 2005. When we discussed seizure outcomes at another port with OT officials—outcomes that have resulted largely from the manual targeting skills of port staff—they compared the workload created by these seizures with their limited impact, given that these are predominantly small-value seizures. OT officials said that neither they nor their legacy offices have conducted any analysis of individual ports' seizure and penalty outcomes. Additional analysis of selected ports' enforcement outcomes is contained in appendix II.

CBP Has Not Analyzed Relative IP Enforcement Outcomes among Ports

CBP also has not conducted any analysis of ports' relative enforcement outcomes. One way to conduct this analysis is to identify ports whose seizure rate—the share of their IP seizures (by value) out of their total IP imports (by value)—is relatively higher or lower than that of other ports. For fiscal year 2005, we focused on the top 25 ports, which accounted for over 75 percent of the value of all IP imports into the United States. Of these 25 ports, we found that 8 had seizure rates higher than the group average seizure rate of 0.015 percent.
Surprisingly, the port with the highest seizure rate, seizing about 0.123 percent of its fiscal year 2005 IP import value, was also the port with the smallest value of IP imports among the ports we examined. This port accounted for only about 1 percent of all IP import value that year but over 7 percent of total seizure value. In contrast, the IP seizure rates for several ports with relatively larger IP import values were well below the group average. Figure 6 shows the top 25 IP-importing ports by value in fiscal year 2005 and ranks them by their seizure rate that year. The eight ports whose seizure rate is greater than the average (shown by the dotted line on the figure) are highlighted in gray. The figure also shows that many ports had very low seizure rates relative to the average. Port names have not been included for law enforcement reasons. Figure 6 illustrates the differences across ports in terms of IP seizures compared with IP imports in fiscal year 2005, the most recent year for which we had IP import data. We also examined these data over previous years and found similar results. For example, Port A had the first or second highest rate of any port in the top 25. Also, the eight ports with above-average ratios in fiscal year 2005 are consistent with prior years. Because of the wide range of factors that affect IP seizures (including the unknown amount of goods actually involved in IP violations transiting a given port), these data alone are not a complete measure of port performance. Ports can be compared relative to each other, but performance even at those ports that appear relatively more successful could still potentially be improved. For example, the majority of these top 25 ports had IP seizures that accounted for less than one-hundredth of a percent of their IP import value. Table 4 further illustrates the variability in seizure actions and the range of seizure rates among the top IP-importing ports.
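The seizure-rate comparison described above can be sketched in a few lines of code. The port labels and dollar figures below are hypothetical (Port A's figures are chosen to echo the 0.123 percent rate cited in this section); only the method follows the report: each port's rate is its IP seizure value divided by its IP import value, compared here with a simple average of the group's rates, which may differ from how CBP's 0.015 percent group figure was computed.

```python
# Hypothetical port data: (IP import value, IP seizure value), in dollars.
ports = {
    "Port A": (2_000_000_000, 2_460_000),   # small importer, high rate
    "Port B": (90_000_000_000, 4_500_000),  # large importer, low rate
    "Port C": (40_000_000_000, 1_200_000),
    "Port D": (15_000_000_000, 300_000),
}

# Seizure rate = seizure value / import value, per the report's definition.
rates = {p: seized / imports for p, (imports, seized) in ports.items()}
average = sum(rates.values()) / len(rates)  # simple group average (assumption)

# Flag ports whose seizure rate exceeds the group average, as in figure 6.
above_average = sorted(
    (p for p, r in rates.items() if r > average),
    key=lambda p: rates[p],
    reverse=True,
)
for p in above_average:
    print(f"{p}: {rates[p]:.4%} of IP import value seized (avg {average:.4%})")
```

With these illustrative numbers, only the smallest importer clears the group average, mirroring the report's observation that several of the largest IP-importing ports fell well below it.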
Lack of Integration across Key Offices Impedes Further Improvements in IP Enforcement

CBP lacks an integrated approach across key offices for further improving border enforcement outcomes. Our analysis of CBP's data is one illustration of the kind of work that the agency could do using existing data to understand differences in IP enforcement outcomes across ports, potentially identifying areas where improvements could be made, and more effectively managing its resources to meet the agency's goals and objectives. CBP could refine this analysis by comparing across similar ports and processing environments, such as seaports to seaports and airports to airports; however, inconsistencies in how CBP currently captures seizure activity across ports, as previously discussed, make this refinement more difficult. CBP could also use its experience and practical knowledge of some of the factors affecting trade flows at its ports to further inform the analysis. When we discussed our analysis with CBP, OT officials said they have not conducted this type of analysis because they do not have any responsibility or authority to oversee and influence port operations, which are under the purview of OFO. However, OT is responsible for overseeing CBP's implementation of its Priority Trade Initiatives, and it has access to data that could inform OFO's understanding of port IP enforcement practices and outcomes and inform CBP's resource allocation decisions.

Conclusions

Enforcing IP protection at U.S. borders has become progressively more challenging for CBP as the volume of trade entering the United States increases and as the types of IP-infringing goods expand. CBP needs to maximize its IP enforcement efforts in an environment strained by the need to balance its primary mission to protect homeland security with trade facilitation and enforcement of U.S. trade laws, among other objectives.
While CBP has publicly cited increases in enforcement outcomes, based on larger numbers and higher values of IP seizures, as indicators of its success, it has not fully disclosed the composition of those seizures or analyzed what has accounted for the increases. As indicated by our review, a limited number of ports have driven overall IP enforcement activity. CBP's strategic plan lacks measures to guide agencywide IP enforcement efforts, and the efforts of the two key offices responsible for carrying out IP enforcement, OT and OFO, are not well integrated. So far, the agency's main efforts to improve IP enforcement have involved initiatives carried out primarily by OT, and these have produced limited results. Moreover, neither OT nor OFO has been engaged in trying to assess the effectiveness of CBP's core IP border enforcement efforts—namely, the targeting, examination, seizure, and penalty activities undertaken by CBP's front-line port operations. CBP's ability to fully assess ports' IP enforcement activities is hampered by inconsistencies in the way IP enforcement data are recorded at the ports. In addition, OT does not have responsibility or authority to oversee or influence port operations. As demonstrated by our own analysis of CBP data, the agency has sufficient information available, despite the data's limitations, to conduct a more comprehensive review of IP border enforcement outcomes in ways that would provide insights about targeting, examination, seizure, and penalty assessment practices across ports. Certain improvements to existing data could make this type of review even more powerful. In addition, such analysis could prove useful as CBP responds to congressional directives to develop resource allocation models to determine the optimal staffing levels needed to carry out its commercial operations and other missions.
CBP would be able to make more measurable links between its strategic objectives and enforcement outcomes, leading to more effective management practices and allocation of limited resources. Given the challenging environment in which CBP must process a vast influx of goods into the United States every day, it is particularly important that the agency consistently collect key data, perform useful analysis of the data, and use the data to better inform policies and practices and to focus its use of limited resources.

Recommendations for Executive Action

To develop a more effective approach to IP border enforcement, we recommended in the law enforcement sensitive report that the CBP Commissioner direct the Offices of International Trade and Field Operations to work together to take the following three actions:

Clarify agencywide goals related to IP enforcement activity by working with the Office of Management and Budget to include in the agency's strategic plan measures to guide and assess IP enforcement outcomes;

Improve data on IP enforcement activity by determining the completeness and reliability of existing IP enforcement data, identifying aspects of the data that need to be improved, and ensuring uniformity in port practices to overcome any weaknesses in the data; and

Use existing data to understand and improve IP border enforcement activity by analyzing IP enforcement outcomes across ports and other useful categories, such as modes of transportation, and reporting the results of this analysis internally to provide performance feedback to the ports, better link port performance to performance measures in CBP's Strategic Plan, and inform resource allocation decisions.

Agency Comments and Our Evaluation

We provided a draft of the law enforcement sensitive report to DHS for review by CBP and ICE. Through DHS, CBP commented that it generally agreed with our recommendations and provided certain additional comments.
Specifically, CBP said it would consider developing IP enforcement measures and including them in the agency's strategic plan to clarify agencywide goals. CBP concurred with our second recommendation to improve its IP enforcement data. Regarding our third recommendation, CBP generally agreed to use existing data to understand and improve IP border enforcement activity. It stated that strong data analysis and targeting are critical to successful IP enforcement and agreed to improve the linkages between the agency's IP enforcement objectives and port performance. However, CBP did not regard our analysis of ports' IP seizures relative to IP imports as a useful tool for addressing the threat of IP infringement and identified a range of limitations with our analysis. We disagree. This simple presentation can be a powerful tool to generate discussion among IP policymakers and port management and staff about IP seizure patterns, risks, and outcomes. CBP has not yet attempted even this basic analysis or initiated such discussions. Doing so would likely lead to refinements of the analysis; we suggest a number of these in the report, some of which CBP identifies in its comments. We encourage CBP to refine this analysis and to develop and use other types of analysis to ensure that ports are helping the agency achieve its IP enforcement objectives. CBP made certain additional comments about the draft law enforcement sensitive report. For example, CBP said that our report focused on transaction-level seizures and penalties, while not fully recognizing the importance of additional elements of its IP enforcement approach. We focused on seizures and penalties because they are the core IP enforcement activities for which CBP is responsible, and we found that CBP had not systematically analyzed its own performance of these activities.
We also discuss the risk model, the audits, and the new penalty guidance; these elements are reasonable components of a broader approach, but we believe their effect on overall IP enforcement has been limited. Moreover, we believe that CBP needs to continue to ensure the robustness of its core enforcement activities. CBP also stated that our report failed to note that the formation of OT in October 2006 was designed in part to address the lack of integration between OT and OFO. We disagree that the formation of this office has, in itself, addressed the lack of integration that we identify in the report regarding IP enforcement. Although OT consolidates CBP's trade policy and program development functions, including those related to IP enforcement, front-line implementation of policies and programs continues to be carried out at the ports under the leadership of OFO. OT and OFO will need to work closely together to overcome the historic lack of integration regarding IP policy and program development and execution. CBP and ICE also provided technical comments, which we incorporated in the law enforcement sensitive report as appropriate. We also provided DHS a draft copy of this public report for a sensitivity review by CBP and ICE. CBP and ICE agreed that we had appropriately removed information that they considered law enforcement sensitive. We are sending copies of this report to appropriate congressional committees and the Librarian of Congress; the Secretaries of the Departments of Commerce, Homeland Security, Justice, and State; the Commissioner of U.S. Customs and Border Protection; the Assistant Secretary for U.S. Immigration and Customs Enforcement; the Directors of the U.S. Patent and Trademark Office and the U.S. Food and Drug Administration; the Chairman of the U.S. International Trade Commission; and the U.S. Trade Representative. We will make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. We provided copies of the law enforcement sensitive report to the Secretary of the Department of Homeland Security, the Commissioner of U.S. Customs and Border Protection, the Assistant Secretary for U.S. Immigration and Customs Enforcement, and appropriate congressional committees with a need to know. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology The Ranking Minority Member of the Senate Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia, Committee on Homeland Security and Governmental Affairs asked us to (1) examine key aspects of U.S. Customs and Border Protection's (CBP) process to carry out enforcement of intellectual property (IP) rights at U.S. borders; (2) analyze CBP's border enforcement outcomes during fiscal years 2001 through 2006; and (3) evaluate CBP's approach for improving border enforcement. To examine key aspects of CBP's border enforcement process, we reviewed documents and interviewed officials at CBP headquarters in Washington, D.C., and at selected port locations. At headquarters, we met with officials in the Office of International Trade (OT) and the Office of Field Operations (OFO) and obtained documents. We reviewed agency strategic plans, annual performance accountability reports, laws, regulations, and policies that addressed agency goals and the roles and responsibilities of these two key offices and field locations in enforcing IP rights at the border, including CBP's internal plan for addressing IP enforcement as a Priority Trade Issue.
We also reviewed documents and interviewed officials at OT's Los Angeles Strategic Trade Center, which has been responsible for developing and implementing CBP's approach to IP enforcement since approximately 2002. We analyzed data that the center provided on the use and outcomes of Cargo Selectivity criteria to target for IP violations. In addition, we reviewed documents and interviewed CBP officials at four field offices and seven ports (the names of the ports where we conducted field work are not included in this report for law enforcement reasons). We selected these ports in order to observe CBP's IP enforcement practices at ports with high and medium import levels (measured by value) and across a range of processing environments. Five of these ports were among the top 25 IP-importing ports in fiscal year 2005, and five of them were among the six most active CBP ports as measured by selected IP enforcement outcomes. We reviewed documents on CBP's organizational structures at these ports and the ports' approaches for carrying out their overall missions, including IP enforcement. The CBP officials we interviewed at these ports were responsible for identifying, examining, seizing, and assessing penalties on goods that involve IP violations. In addition, we reviewed relevant papers, studies, and our reports that addressed key aspects of CBP's mission and its IP border enforcement responsibilities. We also met with officials from the Department of Homeland Security's Immigration and Customs Enforcement, the Department of Commerce's Patent and Trademark Office, the Department of Justice, the U.S. Copyright Office, and the U.S. International Trade Commission to discuss their respective roles in registering IP rights and investigating and prosecuting IP violations. To analyze CBP's border enforcement outcomes, we obtained and reviewed seizure and penalty enforcement data covering fiscal years 2001 through 2006.
First, to measure the value of IP-related trade entering U.S. ports, we identified products, using detailed product codes, that are likely to embody intellectual property and obtained CBP data on the importation of these products by port over fiscal years 2002 to 2005. In order to identify these products, we reviewed the broad product groups that CBP uses to categorize seizure actions and domestic value and then identified all individual products that are covered by these groups within the U.S. harmonized tariff schedule. We discussed our detailed product list with CBP, which concurred with our list, and we obtained import data from CBP according to the list. We reviewed these data for internal consistency and any known limitations on their quality. We determined that these data were sufficiently reliable to provide the value of IP imports by port over the time period we examined. Second, we assessed CBP's IP enforcement efforts on an aggregate, product, and port basis by analyzing its seizure and penalty data. Specifically, we obtained data from CBP on seizure and penalty outcomes for individual ports of entry for fiscal years 2001 through 2006. For fiscal years 2003 through 2006, we obtained data on the modes of transport or processing environments in which seizures were made. Finally, we obtained data on seizures by product category and source country from CBP's external Web site. To assess the reliability of the seizure and penalty data, we examined them for internal consistency and discussed with CBP how the data are collected and reviewed. We also reviewed our prior work that reported on CBP's seizure data. Based on this prior work and our discussions with CBP officials, we identified some limitations in the seizure data related to the precision of the seizure value estimates and the veracity of the fields that indicate product types and modes of transport or processing environments.
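The first step described above, identifying likely IP products by tariff code and totaling their import value by port, can be sketched in a few lines. The tariff prefixes and records below are hypothetical and do not reflect the actual product list we agreed on with CBP.

```python
# Hypothetical harmonized tariff schedule (HTS) prefixes for product groups
# likely to embody IP; NOT the actual list used in this review.
IP_HTS_PREFIXES = ("8523", "9504", "6110", "8517")

def ip_import_value_by_port(import_records):
    """Sum import value per port for records whose HTS code matches an IP prefix.

    import_records: iterable of (port, hts_code, value) tuples.
    """
    totals = {}
    for port, hts_code, value in import_records:
        # str.startswith accepts a tuple of prefixes.
        if hts_code.startswith(IP_HTS_PREFIXES):
            totals[port] = totals.get(port, 0.0) + value
    return totals
```

Records whose codes fall outside the identified product groups are simply excluded from the per-port totals, mirroring the filtering step described above.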
We also found some inconsistencies in the various data sets we received, which we reported to CBP. The limitations and inconsistencies we identified, however, did not indicate large discrepancies in the data, and we found the data to be sufficiently reliable for the purposes of reporting on broad trends in seizures and penalties over time and among ports. Based on our discussions with CBP headquarters and port officials, we selected four types of enforcement actions by which to analyze port outcomes: (1) number of seizure actions, (2) domestic value of seizures, (3) number of penalty cases opened, and (4) penalty amounts assessed. We aggregated the data on these enforcement actions for fiscal years 2001 through 2006, ranked ports by each of the enforcement actions, and identified the top 10 ports for each of the categories. To evaluate CBP’s approach for improving border enforcement, we discussed CBP’s efforts with knowledgeable OT and OFO officials in Washington, D.C., and the field offices and ports we visited. We reviewed CBP’s and OFO’s strategic plans, CBP’s annual performance and accountability reports, and CBP’s internal plan for addressing IP enforcement as a Priority Trade Issue to identify CBP’s goals, objectives, and plans for conducting and improving IP enforcement at the border. We also examined documents related to the administration’s Strategy Targeting Organized Piracy. From this, we identified key initiatives that CBP has undertaken to improve the effectiveness of IP border enforcement and discussed the status of these initiatives with knowledgeable CBP officials. We also assessed the initiatives to determine how they relate to CBP’s IP enforcement goals. We analyzed IP enforcement outcomes across ports, identified fluctuations that have significantly impacted CBP’s overall IP enforcement outcomes and inconsistencies in outcomes at certain ports, and discussed these observations with CBP officials. 
We compared IP seizures with IP imports on a value basis for the top 25 IP-importing ports for fiscal years 2002 through 2005, determined the average seizure rate for these ports, and compared the ports' individual seizure rates with the average. While estimated domestic value of seizures and values for IP imports are not equivalent measurements, we determined that these were sufficiently reliable proxies for the purpose of seizure-to-import comparisons. We also conducted interviews with CBP officials in headquarters and field locations to obtain information on how OT and OFO carry out their operations and, in final meetings, discussed ways in which OT and OFO could better use their data to improve IP enforcement. This report is based on a law enforcement sensitive report issued on March 20, 2007, as a restricted report, copies of which are available for official use only. This public version does not contain certain information that DHS regarded as law enforcement sensitive and requested that it not be included. We provided DHS a draft copy of this public report for a sensitivity review by CBP and ICE, which agreed that we had appropriately removed law enforcement sensitive information. We conducted our work from November 2005 through January 2007 in accordance with generally accepted government auditing standards. Appendix II: Analysis of Top Six Ports' IP Enforcement Outcomes To analyze IP enforcement outcomes across ports, we selected four categories of enforcement outcomes from the data CBP provided: (1) number of seizure actions, (2) domestic value of seizures, (3) number of penalty cases opened, and (4) penalty amounts assessed. We analyzed data by port location for fiscal years 2001 through 2006. Across these four categories, we found that six ports accounted for the bulk of enforcement activity (see fig. 7). Four of these ports ranked in the top 10 for all four IP enforcement categories, and two ranked in the top 10 for three of the four categories.
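The ranking described above reduces to sorting ports by each outcome category and taking the top n. A minimal sketch follows; the port names and figures are hypothetical, not CBP data.

```python
# Hypothetical aggregated FY2001-2006 enforcement outcomes per port; not CBP data.
outcomes = {
    "Port A": {"seizure_actions": 4200, "penalty_cases": 310},
    "Port B": {"seizure_actions": 900,  "penalty_cases": 640},
    "Port C": {"seizure_actions": 2500, "penalty_cases": 120},
}

def top_ports(outcomes, category, n=10):
    """Return the n ports with the highest value in one enforcement category."""
    return sorted(outcomes, key=lambda port: outcomes[port][category], reverse=True)[:n]
```

A port that appears in the top n under every category is concentrated across all outcome types, as four of the six most active ports were.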
These ports may include multiple processing environments (e.g., land, sea, and air modes), and seizures associated with such ports may stem from more than one processing environment. Because of inconsistencies in how CBP records seizures by port, we were unable to fully determine how many processing environments are reflected in the enforcement data. We were able to determine the seizure reporting practices of the seven ports that we visited. For example, at one location, CBP has one port “code” that includes seizures made at the seaport as well as the nearby airport. However, seizure data for another location include seizures made at the seaport and from the international mail processing environment, but do not include other air-based seizures made at the nearby airport, which are reported separately. Neither OFO nor OT officials at the headquarters level could provide us information to understand the seizure reporting practices of CBP's ports. As figure 7 illustrates, CBP's data can be used to compare enforcement outcomes across the ports (left to right) or to compare outcomes within a single port (top to bottom). With the exception of Port D, the scales are uniform. When analyzing IP enforcement outcomes both across ports and within a single port, we found wide variability when comparing outcomes from year to year and among the enforcement categories. For example, Port F experienced increases in the number of seizure actions in fiscal years 2002 and 2004, but in alternating years the number dropped, in some cases by as much as 50 percent. Penalty outcomes also fluctuated by port location. For example, Port G had relatively constant numbers of penalty cases opened and penalty amounts assessed until fiscal year 2005, when these outcomes peaked, and then fell off sharply in fiscal year 2006. When examining outcomes within a single port, we found that the fluctuations in seizure outcomes did not mirror fluctuations in penalty outcomes.
For example, Port C had a relatively steady number of seizure actions between fiscal years 2003 and 2006, but the number of penalty cases opened fluctuated greatly during the same time period. While we did not expect to find a direct correlation between seizure and penalty outcomes, given that the seizure data consist of all IP-related activity while the penalty data are limited to penalties assessed under 19 U.S.C. 1526(f), neither did we expect to find such wide variations within the most active ports during the 6-year time period we reviewed. Appendix III: Comments from the Department of Homeland Security The following are GAO's comments on the Department of Homeland Security's letter dated March 6, 2007, which commented on GAO's law enforcement sensitive report. GAO Comments 1. We disagree that OT's formation has, in itself, addressed the lack of integration that we identify in the report regarding IP enforcement. Although OT consolidates its legacy offices' trade policy and program development functions (specifically, those of the Office of Strategic Trade, the Office of Regulations and Rulings, and a headquarters component of the Office of Field Operations), front-line implementation of policies and programs continues to be carried out at the ports under the leadership of OFO. Because there is still separation between the office that sets policy and the office that implements policy, there is no evidence that the creation of OT will address the lack of integration that initiated congressional action and that we identified in our review. OT and OFO will need to work closely together to overcome the lack of integration between policy development and implementation that we refer to in the report. Regarding CBP's comment that we inaccurately refer to OT throughout the report, we modified the report to clarify references to OT and its legacy offices. 2. CBP mischaracterizes the purpose and usefulness of our analysis of IP imports and enforcement activity at its ports.
Seizing IP-infringing goods is a core IP enforcement activity for which CBP is responsible, but CBP has not conducted systematic analysis of its own data to examine variations in enforcement outcomes over time or among ports. This simple analysis can be a powerful tool to generate discussion among IP policymakers and port management and staff about IP seizure patterns, risks, and outcomes and to improve CBP's approach to IP enforcement. We do not suggest using it to target imports. We clearly state that our analysis is only illustrative of the types of analysis that CBP could undertake and that CBP should refine and develop its own approach. We support CBP's decision to follow our recommendation to conduct such analysis. 3. Our measure of IP imports is broader than CBP suggests, but it is not intended as a measure of risk. CBP does not currently have such a measure, and we believe our measure, which we reviewed with CBP, is useful. Our measure is intended to identify the overall relative volume of potential IP traffic that ports process, which is lower than total import volume. Doing so enables us to examine IP seizure activity relative to a meaningful measure of port volume. Our measure is based not only on tariff classifications where CBP actually found IP violations but also on their related product groups. These classifications may include both IP and non-IP goods that are not separated out in the tariff schedule. This is appropriate because CBP cannot easily distinguish, based on the tariff information, whether goods involve IP protection, have been purposely misclassified, or are being used to smuggle infringing products. The draft report explained our selection process and methodology, but we added clarity to further explain our measure and how we developed it.
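As a rough illustration, the seizures-to-imports comparison described above can be sketched as follows; the port names and dollar values are hypothetical, not CBP data.

```python
# Hypothetical (ip_import_value, ip_seizure_value) pairs per port, in dollars.
ports = {
    "Port A": (100e6, 50_000.0),
    "Port B": (200e6, 20_000.0),
    "Port C": (50e6, 60_000.0),
}

def seizure_rates(ports):
    """Seized value as a fraction of IP import value, per port."""
    return {name: seized / imported for name, (imported, seized) in ports.items()}

def below_average(rates):
    """Return the average rate and the ports whose rate falls below it."""
    avg = sum(rates.values()) / len(rates)
    return avg, sorted(port for port, rate in rates.items() if rate < avg)
```

In this made-up example, Port B processes twice the IP import value of Port A yet seizes far less; that is the kind of variation the comparison is meant to surface for discussion, not a performance verdict in itself.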
We also state that this analysis is one logical way to examine port performance and do not suggest, as CBP states in its response to our third recommendation, that the seizures-to-imports ratio is, in itself, a complete performance measure. 4. We also agree that CBP should consider the range of factors that affect IP goods in conducting its analysis. We clearly stated in the report that our analysis was only one way to examine port performance. Our analysis was not intended to be used for risk analysis or targeting, but only to provide a measure of seizure activity relative to port size, and we did not assume a direct correlation between the volume of IP imports and risk. 5. We agree that CBP’s import data does not fully capture IP import values at express consignment and mail facilities, but even with this likely underreporting of such imports, several ports at which such facilities are dominant ranked among the top IP importing ports. As we state in the report, we agree that CBP should take differences across port environments into account in developing its own analysis. 6. We focused on “transaction-level seizures and penalties” because these are the core IP enforcement activities for which CBP is responsible, and we found that CBP had not systematically analyzed its own performance of these core activities. We also address additional elements of CBP’s approach to improve IP enforcement that fell within the scope of our audit, including its IP audits. However, we found that these efforts have had a limited impact on overall IP enforcement, while perhaps drawing attention away from improvements to its core activities. For example, CBP acknowledges that its audits, completed thus far on 41 companies during a 2½-year period, are most effective for improving IP enforcement among legitimate importers, but these importers likely do not account for the bulk of IP infringing goods that enter the United States. 7. 
CBP mischaracterizes our analysis by asserting that our report states across the board that CBP lacks performance measures for IP enforcement. Our draft report stated that the agency's strategic plan lacks specific measures for IP enforcement but noted that CBP does indeed measure and report internally on the number of IP seizure actions and estimated seizure value. The draft report also discussed certain actions that CBP has taken under its “Intellectual Property Rights Trade Strategy,” an internal planning document that is the same as what CBP calls the “National IPR Trade Strategy” in its letter. However, we disagree with CBP's assertion that this document serves as an agencywide guide for CBP's IP enforcement efforts. In our discussions with CBP about this document, CBP officials said the document was written for internal planning purposes. We found that the distribution of this document has been limited. For example, CBP documents show that revisions to the IP Rights Trade Strategy have not been distributed to the field since 2003. Moreover, certain CBP officials told us that the ports are generally not familiar with this document. Finally, given the document's classification as “For Official Use Only,” it is not distributed to Congress or the public, unlike the agency's strategic plan; this limits its usefulness for holding CBP accountable for its performance on IP enforcement. We added information in our report to clarify the importance of agencywide strategic plans and performance measures in communicating priorities on an agencywide basis and establishing accountability. We added information noting that the IP Rights Trade Strategy contains other indicators related to IP enforcement. We also modified our recommendation to clarify that we recommend that CBP include IP enforcement-related measures in its strategic plan.
Given the Office of Management and Budget’s role in the strategic planning process, we also clarified that CBP should work with the Office of Management and Budget to include IP-enforcement related measures in its strategic plan. 8. See comment 7. 9. See comments 2 and 3. Appendix IV: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the individual named above, Christine Broderick, Assistant Director; Shirley Brothwell; Carmen Donohue; Adrienne Spahr; and Timothy Wedding made significant contributions to this report. Virginia Chanley, Jerome Sandau, Ernie Jackson, Karen Deans, Etana Finkler, and Jena Sinkfield also provided assistance.
U.S. government efforts to protect and enforce intellectual property rights are crucial to preventing billions of dollars in economic losses and for mitigating health and safety risks from trade in counterfeit and pirated goods. The Department of Homeland Security's Customs and Border Protection (CBP) leads intellectual property (IP) enforcement activity at the U.S. border. GAO was asked to (1) examine key aspects of CBP's process to carry out border enforcement, (2) analyze CBP's border enforcement outcomes during fiscal years 2001 to 2006, and (3) evaluate CBP's approach for improving border enforcement. GAO examined relevant documents, interviewed agency officials in Washington, D.C. and seven port locations, and analyzed CBP data on trade and IP seizure and penalty activity. This is the public version of a law enforcement sensitive report by the same title (GAO-07-350SU). CBP's Office of International Trade (OT), formed in 2006, and Office of Field Operations (OFO) carry out IP border enforcement processes, including targeting and examining suspicious shipments, seizing infringing goods, and assessing penalties as warranted. CBP uses computer-based and manual targeting to determine which shipments it will examine, and both methods have strengths and limitations. Port practices for recording exam results vary, making it difficult for CBP to fully assess the effectiveness of its IP targeting efforts. Since 2001, CBP's IP enforcement outcomes have been concentrated among particular transport modes, product types, and ports. Rising numbers of low-value seizures from mail facilities have driven growth in seizure actions, but uneven seizures of high-value goods from sea containers have caused the estimated value of seizures to fluctuate. The vast majority of seizure and penalty outcomes in the last 6 years have been concentrated among 10 or fewer of CBP's 300-plus ports. 
For example, 10 ports account for 98 percent of the $1.1 billion in penalties assessed during fiscal years 2001 to 2006. CBP lacks agencywide performance measures in its strategic plan and an integrated approach across key offices to guide and improve IP enforcement. Narrowly focused initiatives led by offices now under OT have had limited results. CBP has not done a broader analysis to examine variances in port IP enforcement outcomes. For example, GAO found that some of the largest IP-importing ports had very small seizure rates relative to other top IP-importing ports. A lack of integration between OT and OFO impedes using this type of analysis to identify potential IP enforcement improvements.
Background CBP Is the Lead Federal Agency Responsible for Stemming the Flow of Bulk Cash Leaving the U.S. at Land Ports of Entry CBP is the lead federal agency charged with securing our nation's borders while facilitating legitimate travel and commerce. To meet the Secretary's March 2009 mandate that CBP conduct inspections of traffic leaving the U.S. for Mexico at all 25 land ports of entry on the southwest border, CBP expanded or initiated inspections of outbound travelers, including those leaving on foot, by private vehicle (see fig. 1), or by commercial truck. CBP's effort to stem the flow of bulk cash is part of a larger counternarcotics strategy to secure the southwest border. CBP has three main components that have border security responsibilities. First, CBP's Office of Field Operations is responsible for inspecting the flow of people and goods that enter and leave the country through air, land, and sea ports of entry. Second, CBP's Border Patrol works to prevent the illegal entry of persons and merchandise, including contraband, into and out of the United States between the ports of entry and at checkpoints located on major traffic routes away from the border. In doing so, the Border Patrol is responsible for controlling nearly 7,000 miles of the nation's land borders between ports of entry and 95,000 miles of maritime border in partnership with the United States Coast Guard. Third, CBP's Office of Air and Marine helps to protect the nation's people and critical infrastructure through the coordinated use of an integrated force of air and marine resources and provides mission support to the other CBP components. For fiscal year 2010, CBP had an $11.4 billion budget, of which $2.7 billion was for border security and trade facilitation at ports of entry. For outbound operations, CBP's budget was about $109 million in fiscal year 2009 and is an estimated $145 million for fiscal year 2010.
In carrying out its responsibilities, CBP operates 327 ports of entry, composed of airports, seaports, and designated land ports of entry along the northern and southwest borders. While CBP does not know the number of travelers who leave the United States through land ports of entry, it estimates that it inspected over 360 million travelers who entered the country in fiscal year 2009 through land, air, and sea ports of entry. In total, the number of travelers who entered the country through land ports of entry represented over 70 percent of all travelers entering the country. CBP's Process for Inspecting Travelers Leaving the Country The process used by CBP to inspect travelers leaving the country differs from the inspection process for those entering the United States at land ports of entry. For travelers who seek to enter or reenter the country through land ports of entry, CBP centers attention on, among other things, their citizenship and admissibility. In contrast, CBP officers ask a different set of questions of travelers leaving the country. To determine whether travelers are in compliance with the reporting requirements for the international transport of currency and monetary instruments, officers may ask travelers whether they (1) intend to leave the country, by asking where they are going (i.e., Mexico on the southwest border and Canada on the northern border), (2) are carrying more than $10,000 in currency, checks, money orders, or any other type of monetary instrument, and (3) are transporting any weapons or ammunition into Mexico or Canada. While carrying more than $10,000 in currency or other monetary instruments across the border is legal, failure to report the currency or monetary instruments with the intent to evade the reporting requirement is illegal.
Further, it is illegal for an individual to knowingly conceal more than $10,000 in currency or other monetary instruments and transport or attempt to transport such currency or monetary instruments into or out of the United States with the intent of evading the reporting requirements. In addition to the interview process, CBP officers may also inspect, among other things, the content of car trunks, vehicle compartments, and packages in the vehicle. Figure 2 shows CBP officers querying outbound travelers at a land port of entry. When conducting outbound operations, CBP officers may refer travelers for a more detailed inspection—called secondary inspection. Secondary inspections generally involve inspections of vehicles in a separate area from the primary inspection. They can include more in-depth interviewing of travelers, checking the traveler’s identifying information against law enforcement databases, or inspecting containers and boxes (see fig. 3). FinCEN Plays a Key Role in Regulating Money Services Businesses While smuggling cash is one method of taking illegal proceeds out of the country, criminals have also begun using other means to move proceeds from illegal activities across U.S. borders. One such method is the use of electronic media called stored value. This method can involve a broad range of technologies, including the use of prepaid cards, prepaid telephone cards, and financial transactions carried out through a cell phone. Money services businesses play a key role in issuing, selling, and redeeming stored value. 
FinCEN, a bureau within the Department of the Treasury, is responsible for the administration of the Bank Secrecy Act (BSA), a statute that authorizes FinCEN to require MSBs and other financial institutions, as well as nonfinancial trades or businesses and many individuals, to maintain records and file reports that have a high degree of usefulness in criminal, tax, or regulatory investigations or proceedings, or certain types of counterterrorism investigations. FinCEN carries out this responsibility as part of its broad mission of enhancing U.S. national security, deterring and detecting criminal activity, and safeguarding financial systems from abuse by, among other things, requiring financial institutions to establish anti-money laundering programs as well as file reports on large currency transactions. FinCEN has about 300 staff to carry out its analytical, administrative, and regulatory responsibilities. Within FinCEN, the Regulatory Policy and Programs Division is responsible for, among other things, BSA compliance in the financial industry and for issuing regulations for U.S. financial institutions. To carry out its mission, FinCEN supports and networks with law enforcement agencies across the federal government that may be involved in investigating money laundering and terrorism financing. For example, FinCEN works with agencies in DHS, such as CBP, ICE, and the Secret Service; agencies in DOJ, including the DEA, the Bureau of Alcohol, Tobacco, Firearms, and Explosives, and the FBI; and the Criminal Investigation Division within the IRS. In addition, FinCEN coordinates its efforts with another IRS unit, the Small Business/Self-Employed Division, which conducts BSA compliance examinations of certain nonfederally regulated non-bank financial institutions, such as MSBs and casinos. MSBs Play a Key Role in Offering Stored Value Products to Consumers Among the businesses that FinCEN has defined as MSBs are those that issue, sell, or redeem stored value.
In some cases, such businesses may provide a variety of services in addition to offering stored value products, including check cashing, money orders, and money transmitting services. In other cases, such businesses may only issue, sell, or redeem stored value. These businesses play a key role in providing financial services to a segment of the population that may not maintain checking or savings accounts. Such businesses are common and are located in large and small communities across the country. Examples of MSBs can range from national companies with a large number of agents and branches, such as Western Union and MoneyGram, to small “mom and pop” money services businesses that may offer check cashing, money orders, and other financial services. The volume of transactions for MSBs nationwide is not known. One type of product offered by MSBs that falls under the definition of stored value is the stored value card, which includes gift cards and prepaid cards. In this report, we will refer to such cards as stored value cards, the same term that FinCEN currently uses for such products. Stored value cards are a growing alternative to cash transactions, offering individuals without bank accounts a substitute for cash and money orders. They have many legitimate uses that help consumers in a variety of ways. For example, retail establishments sell gift cards to customers as an easy and convenient way to purchase goods or services. Employers may issue cards in lieu of checks when paying salaries to employees. Consumers can also purchase cards and use them to purchase goods or services at retail stores across the country or, in some cases, to withdraw cash at automated teller machines overseas. For example, rather than paying for groceries using cash, a consumer could use a prepaid card. Also, the federal government uses prepaid cards in conjunction with its food stamp program. The two main types of stored value cards are the following. Closed system cards (see fig.
4)—These are the most common form of stored value cards and are often called gift cards. These cards are issued by major merchants and retailers, such as department stores, electronics stores, and coffee shops. They can be bought at many different types of retailers, including drugstores, grocery stores, and other businesses. Other examples include cards that students may use to purchase food on college campuses, cards that passengers use on subway systems, and phone cards. Generally, closed system cards can be used only to purchase goods or services from a single merchant. These cards may be limited to the initial value posted to the card or may allow the card holder to add value. A study conducted for the Federal Reserve estimated that in 2006, the value of transactions for closed system cards amounted to $36.6 billion. Open system cards—These cards have greater use as a cash alternative since a single card may be used at a myriad of stores, merchants, or automated teller machines (ATM) within and across U.S. borders. Such cards can be bought easily, either online or in person. Open system cards may not require a bank account or face-to-face verification of the card holder’s identity. Domestically, companies may voluntarily place a dollar limit on the cards. Such cards may be used to access cash from ATMs in and out of the United States and can be reloaded to add value. In certain countries outside the United States, open system cards can be purchased and used to withdraw cash at ATMs across the world. A study conducted for the Federal Reserve estimated that in 2006, the value of transactions for open system cards in the United States totaled $13.2 billion. Hybrid forms of closed system and open system cards are also available. One form of a semi-closed system card can be used at more than one store rather than a single store.
For example, a shopper may be able to use a card at a group of stores located in the same shopping mall. Another example of a hybrid card is one that can be used at any merchant that accepts debit or credit cards, but the card cannot be used to withdraw cash at ATMs. The Departments of Treasury, Justice, and Homeland Security recognized stored value as a potential threat for cross-border currency smuggling and money laundering as early as 2005. For example, law enforcement agencies from these three departments stated that “stored value cards provide a compact, easily transportable, and potentially anonymous way to store and access cash value” and that “federal law enforcement agencies have reported [their being] used as alternatives to smuggling physical cash.” Further, they stated that “the volume of dirty money circulating through the United States is undeniably vast and criminals are enjoying new advantages with globalization and the advent of new financial services such as stored value cards.” A year later, in 2006, a Treasury official stated that while stored value cards serve legitimate markets, without adequate controls, such payment innovations pose significant risks for money laundering and terrorist financing. This official noted that the risks involved access to bank payment networks without requiring a bank account or verifying customer identification. Beyond cards, new forms of stored value have surfaced in recent years. In 2008, the World Bank issued a report that identified the risk of international smuggling and money laundering of illegal proceeds through the use of financial transactions initiated from a mobile phone, also called mobile financial services. That report describes how technology is now available in countries such as South Korea, the Philippines, and Malaysia that allows individuals to make transactions from an account in one country to an account in another country through a mobile phone.
This technology has begun to penetrate the market in the United States and will become more readily available to consumers in the next several years. According to the report, the risks of money laundering with the use of such devices include (1) user identity may not be known, (2) “smurfing,” or splitting large financial transactions into smaller transactions, can be carried out to evade scrutiny and reporting by the financial institution, and (3) mobile financial services fall outside of anti-money laundering regulations. CBP Has Established an Outbound Enforcement Program, but Further Actions Are Needed to Address Program Challenges CBP Has Created an Outbound Enforcement Program and Seized about $41 Million in Bulk Cash Leaving the Country Since March 2009 In March 2009, CBP reestablished an Outbound Enforcement Program within its Office of Field Operations (OFO). The immediate goal of the program was to increase outbound enforcement activities along the southwest border in order to obstruct the illegal flow of firearms and currency being smuggled from the United States to Mexican drug trafficking organizations. The program is staffed by a Director and 6 other officials. Since March 2009, CBP officers conducting outbound operations have conducted more than 3 million inspections. In addition to increasing outbound inspections, CBP has taken further action to support its efforts to seize bulk cash and other items. For example, CBP has developed a training curriculum that provides officer training in outbound enforcement operations for all port environments, including land, air, and sea. The training curriculum includes a 6-part Web-based training series, an 8-day classroom session, and on-the-job training. During the classroom session, officers complete modules on legal authority, targeting, inspecting, and processing, and participate in scenario-based activities. As of July 2010, 131 officers had completed the training in fiscal year 2010.
In addition to developing outbound training, the Outbound Enforcement Division integrates the work of outbound operations with other CBP components. For example, the Division coordinates with the Tactical Operations Division and the Office of Intelligence and Operations Coordination to develop tactical and strategic operations based on a review of intelligence information and seizure activity. Further, the Division coordinates its efforts with staff involved in carrying out the Western Hemisphere Travel Initiative as well as with the Office of Border Patrol. For example, outbound enforcement efforts are augmented by 116 Border Patrol agents. OFO also coordinates its efforts with other law enforcement entities working to combat bulk cash smuggling, such as DEA and ICE. For example, CBP coordinates with DEA by providing staff and intelligence to the El Paso Intelligence Center (EPIC), a national tactical intelligence center led by DEA and designed to support law enforcement efforts, with a significant emphasis on the southwest border. Among other functions, EPIC analyzes bulk cash seizure data and develops various reports on bulk cash smuggling methods, which are provided to various law enforcement agencies. EPIC also responds to requests for bulk currency seizure data from officers in the field. Additionally, CBP participates with ICE in Operation Firewall and the Border Enforcement Security Task Force (BEST) initiative. Operation Firewall, started in 2005, targets criminal organizations involved in outbound currency smuggling, while the BEST initiative focuses on increasing information sharing and collaboration among agencies involved in disrupting and dismantling criminal organizations that pose a significant threat to border security. As a result of its outbound enforcement activities, CBP seized about $41 million in illicit bulk cash leaving the country at land ports of entry from March 2009 through June 2010. 
The vast majority of this currency, 97 percent, was seized along the southwest border. While CBP seized more than twice the amount of bulk cash during the first year of the outbound program as compared with the year prior, total seizures account for a small percentage of the estimated $18 billion to $39 billion in illicit proceeds being smuggled across the southwest border and out of the United States annually. CBP was most successful in seizing bulk cash during the first 6 months of the Outbound Enforcement Program. As shown in table 1 below, CBP seized nearly $21 million from March 2009 through August 2009, averaging about $3.5 million each month. Despite the number of seizures increasing by 17 percent during the second 6 months of the outbound program, the total amount of cash seized decreased by 37 percent when compared to the first 6-month period. The amount of cash seized in any given month varied. As shown in figure 5 below, since the start of the Outbound Enforcement Program, total seizures spiked in March, April, and September 2009. The spikes in March and April 2009, totaling $6.4 million and $8.2 million respectively, were each driven by a single incident in which CBP seized a large amount of currency. For example, of the $6.4 million CBP seized in March 2009 across all ports of entry, CBP seized more than $3 million during a single incident at a port of entry. In contrast, the $6.3 million spike in September comprises multiple seizures of smaller amounts of currency, with no single seizure larger than $803,000. From March 2009 through June 2010, CBP had at least one bulk cash seizure at 21 of the 25 land ports of entry along the southwest border while conducting outbound operations; however, the total amount seized varied significantly by port. Along the southwest border, the total cash seized at each port during this time period ranged from about $11,000 to about $11 million, with more than 80 percent of the cash seized at 5 land ports.
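The period comparisons above reduce to simple arithmetic. The following check uses the rounded figures as stated in the text; the second-period dollar total is derived from the reported 37 percent decline rather than reported directly.

```python
# First 6 months of the program (March-August 2009), figures as reported
first_period_total = 21_000_000            # ~$21 million seized
avg_per_month = first_period_total / 6     # ~$3.5 million per month, as stated
assert round(avg_per_month) == 3_500_000

# Second 6 months: seizure count rose 17 percent, dollars seized fell 37 percent.
# The implied second-period total (~$13.2 million) is derived, not reported.
second_period_total = first_period_total * (1 - 0.37)

# Seizures as a share of the estimated annual illicit outflow
program_total = 41_000_000                 # March 2009 through June 2010
low_estimate, high_estimate = 18e9, 39e9   # estimated illicit proceeds per year
share = program_total / low_estimate       # under 1 percent even against the low estimate
```

Even measured against the low end of the $18 billion to $39 billion range, total program seizures amount to roughly two-tenths of one percent of the estimated annual illicit outflow.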
Two ports represent about half of all seizures. CBP officials stated that the concentration of seizures in these 5 land ports may be the result of high traffic volumes and proximity to major drug trafficking routes. In addition to bulk cash seizures, CBP’s Outbound Enforcement Program has carried out other enforcement actions, such as firearm seizures, drug seizures, stolen vehicle recoveries, and enforcement of immigration violations. Examples include the following: In April 2010, officers conducting outbound operations at the San Ysidro port of entry in California apprehended a male subject wanted in Mexico for a triple homicide and trafficking of cocaine, methamphetamine, firearms, and ammunition. In June 2010, officers conducting outbound operations at the San Luis, Arizona, port of entry seized a large sport utility vehicle, 114 grenades, and over 2,500 rounds of various types of ammunition. Stemming the Flow of Bulk Cash Is a Challenging Task CBP has succeeded in establishing an Outbound Enforcement Program, but the program is in its early phases, and CBP managers and officers generally recognize that the agency’s ability to stem the flow of bulk cash is limited because bulk cash is difficult to detect. Beyond the inherent difficulty in identifying travelers who attempt to smuggle cash, three main factors limit CBP’s success in this area. First, CBP currently does not conduct outbound operations on a full-time basis, providing smugglers opportunities to circumvent detection by crossing the border when CBP officers are not conducting operations. Second, officers have limited equipment, technology, and infrastructure for conducting outbound operations. CBP officers and managers report that additional resources would improve officer effectiveness at discovering bulk cash and enhance officer safety.
In 2009, CBP began a $23 million project to determine how to deploy additional technology to outbound lanes and expects cost estimates to be ready in September 2010. CBP also plans to spend approximately $10 million in funds from the Fiscal Year 2009 Supplemental Appropriations Act for temporary infrastructure improvements and to install additional infrastructure at up to 21 crossings on the southwest border starting in February 2011. Third, long wait times impact CBP’s ability to inspect all outbound travelers given CBP’s need to balance its mission of facilitating the cross-border movement of legitimate travelers and billions of dollars in international trade with its mission of inspecting travelers. Additional data and information on the challenges CBP faces in stemming the flow of bulk cash smuggling are law enforcement sensitive and not included in this report. While factors such as staffing, infrastructure, and technology limit CBP’s ability to detect large amounts of cash, the fact that CBP conducts outbound inspections does not guarantee that the agency will identify attempts to smuggle bulk cash. For example, our investigators tested outbound operations at three ports of entry on the southwest border. Our investigators observed that CBP officers and Border Patrol agents were interviewing travelers, inspecting vehicles, and performing secondary inspections at each port. At each of these locations, our investigators bypassed opportunities to turn around and proceeded on a traffic lane at the entrance to the port of entry marked as the route to Mexico, signaling their intent to leave the country. They entered a designated outbound inspection area with shredded cash hidden in the trunk of their car. At two of the three ports of entry, when approached by CBP officers and Border Patrol agents conducting outbound inspections, our investigators claimed that they only wanted to see the border and would like to turn around.
At both of these ports of entry, officers and agents allowed the investigators to turn around without searching the vehicle, asking for identification, or probing further to determine whether our investigators posed a risk of smuggling. In addition, the officers and agents did not question our investigators on why they did not turn around earlier when they had an opportunity to do so. At the third port of entry, CBP officers did not interview our investigators or physically inspect the vehicle that contained the shredded cash. However, CBP officers used an X-ray detector on the vehicle. Program Data, Policies and Procedures, and Performance Measures Could Be Strengthened to Improve the Outbound Program Addressing the limitations described above could require substantial capital investments at all ports of entry. However, the extent that such investments could result in greater seizures of bulk cash, weapons and ammunition is not known, in part because CBP lacks data on benefits and costs of an expanded program. CBP will likely need more time to gain a clearer understanding of how well the program is working and what factors will contribute most to improved results. Data on the expected costs and benefits of the program are a basic building block for informing decisions on whether to expand the program, continue the program at current levels, or to reduce the size of the program. In addition, policies and procedures to ensure the safety of officers are not in place. CBP has developed strategic goals for its outbound program, but it lacks performance measures that assess the effectiveness of the program. Limited Data on Expected Program Costs and Benefits Hinders CBP’s Ability to Inform Decisions on the Budget and Outbound Program As the outbound program matures, developing additional cost data in four key areas—staffing, technology, infrastructure, and wait times created by outbound operations—could help inform decisionmakers on program and budget decisions. 
OMB provides guidance on how agencies can evaluate the costs and benefits of a program, such as the Outbound Enforcement Program, to inform policymakers on budget and program decisions. In addition, DHS calls on its components to carry out analyses of costs and benefits to assist in planning a project and in managing costs and risks. The Southwest Border Counternarcotics Strategy states that law enforcement agencies should analyze the effectiveness of outbound inspections and, if warranted, consider expanding the number of inspections in search of bulk currency. Data for Determining Staffing Costs for Expanded Operations Has Limits While CBP has data on the current cost of the Outbound Enforcement Program, it faces challenges in developing cost data to estimate the future size of the program. From fiscal years 2008 through 2010, the cost of CBP’s outbound program increased from about $89 million to an estimated $145 million. Costs for the outbound program consisted primarily of the cost of headquarters and mission support staff as well as salaries, benefits, and overtime and premium pay for officers. Together these items represent more than 98 percent of the total cost for each fiscal year. Appendix I provides a more detailed breakdown of costs for the outbound program. CBP plans to improve its data for estimating the cost of staff involved in inspecting outbound traffic for its current level of effort. For example, CBP plans to refine the data by calling on CBP managers at ports of entry to estimate the total number of hours officers worked during an outbound shift rather than simply counting the number of officers who worked that day. While CBP has cost data for staffing the current level of effort, challenges remain for estimating costs of staffing the program in the future. CBP has developed an Outbound Workload Staffing Model to assist managers in determining future staffing levels.
However, the model has data limitations, in part because the program is new and data on outbound operations are limited or missing. For example, the model does not identify the number of CBP canine handlers that are needed to support outbound operations. Having such data would inform future iterations of the model in estimating the number of currency canine handlers that may be needed. Also, the model assumes that outbound traffic volumes are the same as inbound traffic volumes because CBP does not have data on the number of travelers and vehicles that leave the country through land ports of entry. According to CBP officials, having such data would be helpful in determining staffing needs. CBP Is Making Progress in Developing Cost Estimates of Equipment Needed by CBP Officers to Carry Out Outbound Operations at Land Ports of Entry CBP managers stated that they are developing a list of equipment officers need to conduct outbound operations at land ports of entry. Such a list would include equipment, such as mirrors, fiber-optic scopes, and density readers, that CBP officers need to inspect vehicles leaving the country. Managers stated that they plan to develop the initial list by the end of 2010 and to submit the list to managers at ports of entry for comment in 2011. Once comments have been received, CBP plans to develop a cost estimate for outbound equipment later in 2011. One source of funding for purchasing equipment by CBP is the Department of the Treasury’s Forfeiture Fund. In fiscal year 2009, CBP deposited more than $25 million into the fund from currency forfeitures. CBP is permitted by the Department of the Treasury’s Executive Office of Asset Forfeiture to use money from the fund to purchase equipment and infrastructure such as canopies, signage, and lighting to support outbound operations.
However, CBP has expressed concern about using the funds for a one-time purchase of equipment and infrastructure because funding is not available for maintenance and repair of the equipment in the CBP Office of Field Operations budget. For fiscal year 2010, CBP requested $7.5 million so that the agency could pay overtime to state and local law enforcement officers who work outbound operations with CBP at ports of entry and $500,000 for equipment, such as currency counters, digital cameras, and contraband detection kits. In total, CBP requested about $102 million from the Treasury Forfeiture Fund for fiscal year 2010. As of June 2010, CBP had received almost $55 million from the fund; however, none of this money was for the outbound program. CBP Is Making Progress in Developing Cost Estimates of Technology Improvements CBP has a project underway to determine how to upgrade and install license plate readers and to enable computer connectivity, but the agency has not yet determined how much this would cost at each port. According to CBP, license plate readers are available at 48 of 118 outbound lanes on the southwest border and none of the 179 outbound lanes on the northern border. Additionally, CBP officials estimated that there are a limited number of outbound lanes networked to support computer stations or wireless computing, both necessary for the document readers that we discussed earlier in this report. CBP officials in charge of the project stated that they plan to determine the costs involved in deploying license plate readers and computer connectivity and that a cost estimate will be available in September 2010. Such estimates could provide important information for CBP outbound program managers as they assess scenarios for outbound operations at each port of entry.
Cost Estimates of Infrastructure Improvements Are Limited Although CBP has plans to consider outbound infrastructure needs, it has not yet conducted an analysis of outbound infrastructure needs at ports of entry and the related costs for improving infrastructure for its outbound operations. The strategic plan for the Outbound Enforcement Program states that the program will request the necessary budgetary funding to conduct facility assessments at ports of entry and articulate the operational needs for outbound facilities. CBP has completed a preliminary assessment of southwestern ports of entry in which it determined the readiness of each site to accommodate outbound infrastructure. However, this preliminary assessment did not estimate the costs of infrastructure improvements at each port of entry. Building on this effort, CBP plans to conduct a site survey that would consider needed infrastructure at each port of entry and the stakeholders that would be involved in construction, such as local governments and private landholders. However, Outbound Enforcement Division officials told us in July 2010 that they will not begin to conduct site surveys until they receive funds for construction. They have not requested this funding because DHS has not yet determined whether to expand the program. Without cost estimates, it will be difficult for CBP to inform program managers and policymakers about costs involved in improving infrastructure for the Outbound Enforcement Program. Developing Data on the Costs Created by Wait Times Is a Difficult Task In its Circular A-94 guidance, OMB states that agencies should consider all costs of a program when conducting a cost-benefit analysis, such as the costs resulting from waiting at the border. CBP officials told us that they have not yet collected data on wait times for outbound inspections because they have been initially focused on establishing the program.
Furthermore, they said that developing cost data on wait times for outbound inspections would be difficult based on CBP’s experiences in collecting similar wait time data for inbound inspections. In July 2010, we reported that CBP’s wait times data for personal and commercial vehicles in inbound inspections are collected using inconsistent methods and the data are unreliable. CBP acknowledged problems with its wait times data and has initiated a pilot project to automate wait times measurement, and to improve the accuracy and consistency of the data collected. The objectives of the project are to measure wait times in both directions—inbound and outbound—for cars and trucks, determine real-time and predictive capabilities, replace the manual process for calculating wait times, and explore long-term operations. Understanding what kinds of delays might result from outbound inspections and how expanding the program might affect such delays could better position CBP in determining the program’s costs. Analyzing Seizure Data and Other Benefits of Outbound Operations Is Challenging While seizure data are useful for determining many of the benefits of outbound operations, some benefits are more difficult to quantify. For example, it is difficult to quantify the degree to which outbound operations deter drug trafficking organizations from attempting to smuggle bulk cash. Another benefit that is difficult to quantify is the intelligence information that officers may obtain by conducting outbound operations, including information that may help in discovering smuggled cash, weapons and drugs. To address this type of difficulty, OMB encourages agencies to enumerate any other benefits beyond those that can be quantified. For example, agencies that have conducted such analyses have used subject matter experts to offer a qualitative evaluation of benefits. 
In analyzing the costs and benefits of the outbound inspections program, it is important to recognize that CBP is part of a larger effort by federal, state, and local agencies to disrupt and dismantle drug trafficking organizations, in part by denying them the profits of their drug sales. How much CBP spends to combat such activities could be indirectly affected by the efforts of other agencies involved in interdiction activities. For example, if local police officers were to increase enforcement on highways leading to the border, they may intercept bulk cash before it gets to the border, potentially changing the results of CBP’s efforts. Additionally, if CBP increases its outbound operations, criminals may respond to the increased difficulty of smuggling bulk cash by changing tactics to use other means of moving currency out of the country, such as using stored value. We discuss the use of stored value later in this report. CBP’s Outbound Policies and Procedures Do Not Address Weaknesses Related to Officer Safety CBP has not yet developed policies and procedures to help ensure officer safety in conducting outbound operations. At all five ports of entry we visited, CBP officers and managers cited safety concerns related to conducting outbound inspections. In addition, at each of these ports, we observed that officers used the side of the highway to conduct secondary inspections, while other vehicles moved past, potentially endangering officers. Also, at the Blaine port of entry, the officers conducted inspections of the underside of vehicles in the traffic by lying on the ground with their legs exposed while traffic moved by in neighboring lanes at speeds up to approximately 25 miles per hour. CBP program managers noted that one way to improve the safety of officers is to improve infrastructure, such as developing designated areas for secondary inspections and installing speed bumps and barriers. 
We agree that improved infrastructure could enhance officer safety; however, whether CBP will receive funds to improve infrastructure remains an open question. Until such improvements are made, CBP will be faced with the important issue of how to ensure officer safety. At two of the five ports of entry we visited, CBP was using guidance for outbound operations that was developed prior to the reestablishment of the Outbound Enforcement Program; this guidance does not specify how CBP officers should inspect travelers in a way that ensures their safety. The guidance states that the safety of teams conducting outbound operations is an important consideration, but otherwise does not provide safety guidance for officers. At two other ports of entry we visited, CBP officials stated that the ports began conducting outbound operations after the Outbound Enforcement Program was reestablished but did not reference any specific guidance for officers to use. At the Laredo port of entry, officials provided us with locally developed guidance for officers that details specific actions that the officers should take to help ensure their safety. For example, the officer should always face the traffic, use loud commands to vehicles when escorting a vehicle to secondary screening, and remain aware of traffic passing him or her. At the time of this report, CBP had not yet issued an outbound directive to ports of entry that provides guidance for ensuring officer safety. In July 2010, a CBP outbound program manager told us that a directive for the program was under review by CBP management; however, the official could not provide an estimate of when the directive would be approved and issued. The manager agreed that policies and procedures on officer safety are important. However, the manager said that developing such policies and procedures should be done at the local level because each port of entry is unique. For example, traffic volumes vary for each port of entry.
The manager stated that the draft directive does not include guidance that directs managers at land ports of entry to develop policies and procedures for ensuring officer safety. GAO’s Standards for Internal Control in the Federal Government state that policies and procedures enforce management directives and help ensure that actions are taken to address risks. In addition, the standards state that such control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. Directing and ensuring that managers at ports of entry develop policies and procedures for officer safety could help protect officers from danger when they are conducting outbound operations. CBP Has Developed Strategic Goals for Its Outbound Enforcement Program, but Challenges Remain in Developing Measures Related to Program Effectiveness In October 2009, CBP issued a strategic plan for fiscal years 2010 through 2014 that represented a first step toward developing performance measures for outbound efforts, but challenges remain in developing the measures. The plan states that the immediate goal of the program was to obstruct the illegal flow of firearms and currency being smuggled from the United States to drug trafficking organizations in Mexico. According to the plan, a key objective of CBP’s outbound efforts is to detect and remove people and goods that pose a threat from the legitimate annual flow of millions of people, cargo and conveyances departing from the United States. To help achieve this objective, the Outbound Enforcement Program plans to carry out 11 initiatives, such as conducting an outbound threat assessment and tracking and reporting on outbound activities. 
The strategic plan for the outbound program also recognizes that developing or obtaining better data on the threat of bulk cash smuggling and other illegal activities is one key to understanding the effectiveness of its operations. For example, the outbound program recognizes the value of assessments that identify major trafficking routes and methods for illegal export activities. However, CBP has yet to develop a performance measure that shows the degree to which its efforts are stemming the flow of bulk cash leaving the country. While we recognize that doing so is a difficult task, we reported in September 2005 that agencies can use performance information to inform decisions on future strategies, planning and budgeting, and allocating resources. In addition, Standards for Internal Control in the Federal Government state that control activities, such as establishing and reviewing performance measures, are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. Such activities could call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions can be taken. Using information and data from other agencies that evaluate drug trafficking organizations provides one way to measure the effectiveness of CBP’s outbound operations. Two examples of how such information could inform managers and policymakers of CBP’s efforts involve studies by NDIC and ICE. In March 2008, NDIC estimated that while current bulk cash interdiction efforts successfully disrupt the transport of tens of millions of dollars in drug proceeds en route to or at the southwest border every year, the interdicted currency is less than 1 percent of the total amount of illicit bulk cash destined for Mexico.
In addition, a November 2009 study issued by ICE stated that gross revenue generated by Mexican drug trafficking organizations, and subsequently smuggled into Mexico, is substantial. CBP officials stated that while it may not be possible to know the extent to which its officers are intercepting cash, they believe such information is useful. For example, they cited analyses by ICE’s Bulk Cash Smuggling Center as another data source that could help in developing performance measures. In July 2010, CBP officials stated that they plan to develop draft performance measures comparing program costs to outcomes such as the amount of bulk cash seized by the end of fiscal year 2011. While this is a good first step, without data to show the degree to which CBP efforts are stemming bulk cash smuggling and other criminal activities, it will be difficult for managers and policymakers to assess the effectiveness of CBP’s outbound program.

Regulatory Gaps Involving Cross-Border Reporting and Other Anti-Money Laundering Requirements Exist for Stored Value

Regardless of the success of efforts to stem the flow of bulk cash, criminals can use other methods of transporting proceeds from illegal activities across the nation’s borders. Stored value is one such method. Regulatory exemptions heighten the risk that criminals may use stored value to finance their operations. For example, unlike with cash, travelers are not required to report stored value in excess of $10,000 to CBP when crossing the border. FinCEN has initiated actions to address these exemptions, but much work remains before the regulatory gaps are closed and anti-money laundering practices are fully implemented.

Unlike with Cash, Travelers Are Not Required to Report More than $10,000 in Stored Value When Crossing the U.S. Border

The Bank Secrecy Act (BSA) is a key federal statute that seeks to safeguard the U.S. financial system from criminal activity and to combat the exploitation of the U.S.
financial system by criminals and terrorists. Among other things, the BSA authorizes the Secretary of the Treasury to require financial institutions, as well as non-financial trades or businesses and many individuals, to make reports and maintain records that have a high degree of usefulness in criminal, tax, or regulatory investigations or proceedings, or in the conduct of intelligence or counterintelligence activities, including analysis, to protect against international terrorism. In particular, the BSA and its current implementing regulations require an individual who physically transports, mails, or ships more than $10,000 in currency or monetary instruments, such as traveler’s checks, across the U.S. border to file a Report of International Transportation of Currency or Monetary Instrument (CMIR). Unlike this reporting requirement for currency and monetary instruments, there is no similar requirement for stored value. According to Treasury, no requirement exists because stored value is not defined as a monetary instrument under the BSA or its implementing regulations. Instead, according to FinCEN, stored value is a device that provides access to monetary value, rather than being a monetary instrument itself.

MSBs That Issue, Sell, or Redeem Stored Value Are Exempt from Three Key Anti-Money Laundering Provisions of the BSA

Many of the anti-money laundering requirements contained in the BSA regulations do not apply to MSBs that offer stored value products. The BSA and its regulatory framework focus on financial institutions’ record keeping and reporting requirements that create a paper trail of financial transactions that federal agencies can use to deter criminal activity and apprehend criminals. Some BSA regulations apply to MSBs that offer stored value products.
For example, financial institutions, including MSBs that provide stored value products, are required to report currency transactions made by the same customer that exceed $10,000 during the course of any one day. However, FinCEN exempted MSBs that offer stored value products from many other anti-money laundering provisions of the BSA regulations. According to FinCEN, it provided these exemptions in its 1999 rulemaking due to the “complexity of the industry and the desire to avoid unintended consequences with respect to an industry then in its infancy.” In 2008, FinCEN recognized that these exemptions created a situation whereby issuers, sellers, and redeemers of stored value are subject to a less comprehensive Bank Secrecy Act/Anti-Money Laundering regime than are other actors falling within the scope of FinCEN’s regulations. FinCEN later stated that “if these gaps are not addressed, there is increased potential for the use of [stored value] as a means for furthering money laundering, terrorist financing, and other illicit transactions through the financial system.” Below is a discussion of three key exemptions related to stored value activity by MSBs. We discuss FinCEN’s efforts to address these exemptions later in the report.

FinCEN Does Not Require MSBs That Are Sole Issuers, Sellers, or Redeemers of Stored Value to Register

Under the BSA and its implementing regulations, certain MSBs must register with Treasury by filing information with FinCEN. The purpose of registration is to assist supervisory and law enforcement agencies in the enforcement of criminal, tax, and regulatory laws and to prevent MSBs from engaging in illegal activities. While most types of MSBs are required to register, there are exemptions for certain types of MSBs. For example, a MSB that solely issues, sells, or redeems stored value is not required to register under current BSA regulations.
The total number of MSBs that are solely issuers, sellers, or redeemers of stored value, and thus exempt from registration, is unknown. A MSB that issues, sells, or redeems stored value is generally required to register with FinCEN if that MSB also provides another financial service which is subject to registration, such as check cashing. However, in 2007, the Secretaries of the Treasury and Homeland Security, and the Attorney General, stated that the majority of MSBs that are required to register continue to operate without doing so. According to FinCEN officials, roughly 25,000 MSBs were registered in May 2007. Through an outreach program to unregistered MSBs, FinCEN increased the number of registered MSBs to 43,041, as of June 15, 2010. However, the total number of MSBs operating nationwide is unknown. FinCEN officials stated that MSBs may not register because of language barriers, cost, training issues, or a lack of awareness as to the requirements.

FinCEN Does Not Specifically Require MSBs to Develop and Implement a Customer Identification Program

Under BSA regulations, some financial institutions, such as banks, are required to have customer identification programs that include, among other things, procedures for verifying customer identity and determining whether a customer appears on specified government watch lists. However, current BSA regulations do not specifically require MSBs to have a customer identification program. Despite this, MSBs may choose to implement customer identification protocols voluntarily or in order to satisfy other requirements. For example, MSBs are required to maintain anti-money laundering programs. These programs are designed to prevent the MSB from being used to facilitate money laundering and the financing of terrorist activities.
As part of this requirement, MSBs are required to develop and implement policies, procedures, and internal controls which include, to the extent applicable to the MSBs under BSA regulations, requirements for verifying customer identification. We discuss FinCEN’s efforts to monitor MSB compliance with these requirements later in this report.

A 2005 study by KPMG attempted to estimate the total number of MSBs operating nationwide. The study estimated the number to be approximately 203,000. This estimate excludes the U.S. Postal Service, an entity that falls under the definition of MSB because it offers money order services. However, because the survey obtained an 8 percent response rate, the large percentage of non-responses may have affected the survey results. See KPMG, LLP Economic and Valuation Services, 2005 Money Services Business Industry Survey Study (Washington, D.C.: September 2005).

The absence of customer identification requirements can deprive law enforcement of a trail for tracing illicit activity. For example, a threat assessment of stored value cards by Treasury stated the following: “The 9/11 hijackers opened U.S. bank accounts, had face-to-face dealings with bank employees, signed signature cards and received wire transfers, all of which left financial footprints. Law enforcement was able to follow the trail, identify the hijackers and trace them back to their terror cells and confederates abroad.
Had the 9/11 terrorists used prepaid cards to cover their expenses, none of these financial footprints would have been available.”

FinCEN Does Not Require MSBs to Report Suspicious Transactions Involving Stored Value

While depository institutions are required to file suspicious activity reports (SAR) for stored value transactions, FinCEN does not require MSBs to do so. Although some MSBs may file SARs related to stored value as part of their anti-money laundering programs or on a voluntary basis, the fact that suspicious activity involving stored value does not have to be reported by all financial institutions heightens the risk that cross-border currency smuggling or the illegal use of stored value may go undetected or unreported. The USA PATRIOT Act of 2001, Pub. L. No. 107-56, 115 Stat. 272 (Oct. 26, 2001), expanded SAR reporting requirements to include nondepository institutions. However, under 31 C.F.R. § 103.20(a)(5), money services businesses are not required to file suspicious activity reports for transactions that involve solely the issuance, or facilitation of the transfer of stored value, or the issuance, sale, or redemption of stored value. Under 31 C.F.R. § 103.18, which discusses the filing of suspicious activity reports by banks, there is no exemption for stored value transactions.
For transactions other than stored value, MSBs are generally required to file a suspicious activity report when a transaction is conducted or attempted by, at, or through a MSB; involves or aggregates funds or other assets of at least $2,000; and the MSB knows, suspects, or has reason to suspect that the transaction or pattern of transactions: involves funds derived from illegal activities or is intended or conducted in order to hide or disguise funds or assets derived from illegal activity as part of a plan to violate or evade any federal law, regulation, or reporting requirement under federal law or regulation; is designed to evade BSA requirements or other financial reporting requirements; has no business or apparent lawful purpose; or involves the use of the MSB to facilitate criminal activity. SARs have proven useful to law enforcement in investigating and apprehending criminals. For example, in February 2009, we reported that law enforcement agencies in the Department of Justice and DHS use SARs in their investigations of money laundering, terrorist financing, and other financial crimes. In one example, a bank-filed SAR began an investigation that resulted in the discovery of a predatory certificate of deposit fraud scheme. The SAR narrative described critical elements of the crime in detail, and law enforcement and prosecutors in this case noted that the SAR proved instrumental in ending the scheme. FinCEN has determined that suspicious activities related to stored value have been reported by depository institutions and voluntarily reported by MSBs. For example, in 2006, FinCEN conducted an analysis of SARs that identified stored value cards as the nexus of the suspicious activity in order to highlight trends and patterns associated with the questionable/criminal use of stored value cards. FinCEN found that between January 1, 2004, and February 15, 2006, 471 SARs were filed that were associated with stored value activity.
Of these, 341 SARs (72 percent) generally described activities associated with structuring and/or money laundering.

Law Enforcement Case Examples and Reported Suspicious Activities Demonstrate the Use of Stored Value for Cross-Border Currency Smuggling and Other Illicit Activities

In its 2010 report on bi-national criminal proceeds, an effort led by the Office of Counternarcotics Enforcement and ICE, DHS reported that little is known about whether Mexican criminal enterprises are making use of stored value technologies. Further, it reported that intelligence gaps center on a lack of data on emerging technologies like stored value cards, especially those that are offshore based. However, in a March 2010 testimony before the House Appropriations Committee, the FBI Director stated that recent money laundering investigations demonstrate that criminals are able to exploit existing vulnerabilities in the reporting requirements in order to move criminal proceeds using stored value devices, such as prepaid cards. While the extent to which stored value is used for illicit purposes is unknown, law enforcement case examples and reported suspicious activities demonstrate that stored value has been used for cross-border currency smuggling and other illicit activities. At least two mechanisms that can be used to move currency out of the country using stored value devices have been documented by law enforcement and reported suspicious activities. First, illegal proceeds can be loaded on stored value devices and physically carried across the border. Two examples of the physical transport of stored value across the U.S. border are described below. CBP officers at a Washington state port of entry stopped a commercial shipping truck and discovered $7.2 million worth of prepaid phone cards. CBP officers report that they were unable to detain or seize these phone cards because there is no requirement that such cards be reported at the border.
Later analysis revealed the manufacturer had sent five other shipments of phone cards across the border in a 3-month period, totaling more than $25 million. ICE agents assisting with outbound inspections at the San Ysidro port of entry encountered an individual attempting to leave the country in possession of a laptop computer, several merchandise gift cards, credit cards, and cell phones. Upon further investigation, the agents uncovered that the passenger had over 1,000 stolen credit card numbers and was working as part of a credit card fraud operation. The passenger explained that for his work, he was paid with prepaid gift cards. The man used these gift cards to purchase prepaid phone cards, which he smuggled into Mexico and sold for a profit. The second method involves shipping stored value cards out of the country, where co-conspirators can use the cards to make purchases or to withdraw cash from local ATMs. Many cards can also be reloaded with additional value remotely via the Internet. For example, in 2008, DEA agents in Connecticut were investigating a narcotics and money laundering organization allegedly using stored value cards to launder narcotics proceeds. The investigation revealed that illicit proceeds were loaded onto stored value cards, which were then shipped to Colombia, South America. In Colombia, co-conspirators withdrew the money from local ATMs. The investigation revealed that in a 5-month period, conspirators withdrew more than $7 million from the stored value cards at a single location in Medellin, Colombia. As discussed above, stored value devices are not subject to cross-border reporting requirements. As a result, individuals are not required to file any report if they physically transport, mail, or ship more than $10,000 in value in the form of stored value products.
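The reporting gap just described can be expressed as a simple rule check. The sketch below is illustrative only, not official guidance; the instrument category names and the `cmir_required` function are invented for the example.

```python
# Illustrative sketch of the cross-border reporting gap: transporting more
# than $10,000 in currency or monetary instruments across the U.S. border
# requires a CMIR filing, but stored value is not defined as a monetary
# instrument, so no report is required. Category names are invented.

CMIR_THRESHOLD = 10_000  # dollars; reporting applies above this amount

# Simplified instrument categories; the regulations enumerate these in detail.
MONETARY_INSTRUMENTS = {"currency", "travelers_checks"}

def cmir_required(instrument: str, amount: int) -> bool:
    """Return True if a CMIR filing would be required for this transport."""
    return instrument in MONETARY_INSTRUMENTS and amount > CMIR_THRESHOLD

print(cmir_required("currency", 12_000))      # over-threshold currency -> True
print(cmir_required("stored_value", 12_000))  # stored value exempt -> False
```

As the second call shows, even a large amount carried as stored value falls outside the reporting rule, which is the gap law enforcement officials describe.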
Four of six law enforcement agencies with whom we spoke expressed concern about the lack of a cross-border transport reporting requirement for stored value. For example, CBP senior officials report that because stored value is not subject to CMIR requirements, they lack the authority to seize stored value devices at the border without establishing probable cause or linking the stored value devices to a specified unlawful activity. In contrast, an IRS special agent told us that a cross-border reporting requirement would not entirely address the illicit use of stored value because there are other mechanisms by which stored value can be used to transport funds internationally. For example, smugglers could physically carry or ship stored value cards with no value out of the country and then add value to the cards remotely. Beyond the use of stored value for cross-border currency smuggling, law enforcement examples and reported suspicious activities demonstrate that stored value can be used for other illicit purposes, such as money laundering, tax fraud, and identity theft. Below are two examples: In a recent law enforcement case, stored value cards were used to conceal proceeds of a $15 million tax fraud scheme. In this example, suspects filed more than 540 fraudulent tax returns. On some occasions, the suspects routed electronic transfers of tax refunds directly to prepaid cards obtained anonymously through an Internet application process. A depository institution filed a suspicious activity report describing a customer who loaded $73,405 on one prepaid card and $9,987 on a second prepaid card over the course of about a year and a half. All transactions were made in cash, mostly $20 bills, and reporting officials noted that the cash had an odor similar to marijuana. Of the $73,405 loaded on the card, $72,212 was withdrawn in cash. While the deposits took place in Washington state, the transactions on the card occurred in Southern California and Mexico. 
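The MSB SAR criteria summarized earlier (a transaction of at least $2,000 plus at least one of four suspicion conditions) can likewise be sketched as a small decision rule. This is a hedged illustration; the condition labels and the `msb_sar_required` function are hypothetical shorthand, not part of any official system.

```python
# Hypothetical sketch of the MSB suspicious activity report (SAR) criteria
# summarized earlier: a SAR is generally required when a transaction involves
# at least $2,000 and the MSB knows or suspects one of the listed conditions.
# The labels below are invented shorthand for the four conditions.

SAR_THRESHOLD = 2_000  # dollars; minimum amount triggering the requirement

SUSPICION_CONDITIONS = {
    "illegal_funds",      # funds derived from, or disguising, illegal activity
    "evades_reporting",   # designed to evade BSA or other reporting requirements
    "no_lawful_purpose",  # no business or apparent lawful purpose
    "facilitates_crime",  # MSB used to facilitate criminal activity
}

def msb_sar_required(amount: int, suspected: set) -> bool:
    """Return True if the amount and suspicions would trigger a SAR filing."""
    return amount >= SAR_THRESHOLD and bool(suspected & SUSPICION_CONDITIONS)

print(msb_sar_required(2_500, {"evades_reporting"}))  # True
print(msb_sar_required(1_500, {"evades_reporting"}))  # below threshold -> False
```

Because the rule applies only to transactions other than stored value, a comparable check for stored value transactions by MSBs simply does not exist under the current regulations.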
Efforts Are Under Way to Address Regulatory Gaps Related to Stored Value, but Much Work Remains

FinCEN Is in the Process of Developing and Issuing Regulations that Require Anti-Money Laundering Practices for Stored Value

At the time of our review, FinCEN was in the process of developing and issuing regulations, as required by the Credit CARD Act, to address the risk associated with the illicit use of stored value. On May 22, 2009, the Credit CARD Act was enacted, which, among other things, required the Secretary of the Treasury, in consultation with the Secretary of Homeland Security, to do the following: Issue regulations in final form implementing the BSA, regarding the sale, issuance, redemption, or international transport of stored value, including stored value cards. In doing so, the Credit CARD Act stated that Treasury may issue regulations regarding the international transport of stored value to include reporting requirements pursuant to 31 U.S.C. § 5316, which applies to the transport of monetary instruments. Take into consideration current and future needs and methodologies for transmitting and storing value in electronic form in developing the regulations. The Credit CARD Act also called on Treasury to issue final regulations implementing the above requirements by February 2010. FinCEN is in the early phases of issuing the related regulations and much work remains before it addresses the risk of cross-border currency smuggling and money laundering through the use of stored value. For significant regulatory action, such as the proposed rule that FinCEN developed on stored value, OMB prescribes an 11-step process. This process involves steps that range from drafting a Notice of Proposed Rulemaking (NPRM) to publication of the final rule at least 60 days before its effective date. In June 2010, FinCEN issued an NPRM.
FinCEN proposes to revise the BSA regulations applicable to MSBs with regard to stored value by, among other things, renaming “stored value” as “prepaid access” and defining that term; imposing suspicious activity reporting requirements, customer information and transaction recordkeeping requirements on providers and sellers of prepaid access; and imposing a registration requirement on providers of prepaid access. In preparing the NPRM, FinCEN carried out several actions. For example, FinCEN consulted with Treasury components, such as IRS SB/SE and IRS-Criminal Investigations Divisions. In addition, it obtained input from external stakeholders including industry, law enforcement, and federal agencies and departments. In doing so, FinCEN officials told us they consulted with and obtained input from DHS agencies, such as ICE and CBP, before and after writing versions of the draft rule. In addition, FinCEN received comments from OMB prior to issuing the NPRM. Treasury and FinCEN officials told us that they accelerated their efforts toward developing and issuing a new rule on stored value due in part to the requirements under the Credit CARD Act. They acknowledged that the existing regulations for stored value, issued in 1999, have not kept pace with developments in the stored value industry and that the regulations were now outdated. However, agency officials said they believe that their efforts prior to the Credit CARD Act, such as leading an interagency effort to develop and issue the 2007 Money Laundering Strategy Report, establishing a Stored Value Subcommittee of the Bank Secrecy Act Advisory Group in May 2008, and posing questions related to stored value to the public as part of proposed revisions to MSB definitions in May 2009, placed them in a better position to establish a regulatory framework for stored value in response to the Credit CARD Act.
We describe in more detail below how FinCEN plans to address several of the regulatory gaps that apply to MSBs involved in stored value. However, FinCEN has not established an end date for the regulations, which is discussed later in this report.

FinCEN Proposes Addressing Three Regulatory Gaps Related to MSBs Involved in Stored Value

Recognizing that stored value products are vulnerable to money laundering, FinCEN’s June 2010 NPRM proposes to address regulatory gaps related to MSBs involved in stored value or “prepaid access” in the following three areas: Registration with FinCEN. The NPRM proposes that providers of prepaid access must (1) register with FinCEN as a MSB, (2) identify each prepaid program for which it is the provider of prepaid access, and (3) maintain a list of its agents. However, sellers of prepaid access, such as grocery stores or drug stores, would not have to register. According to FinCEN, it is proposing to exempt the seller from registering with FinCEN because the seller’s role is complementary with, but not equal to, the authority and primacy of the provider of prepaid access, and the seller is generally acting as an agent on behalf of the provider. As stated in the NPRM, providing an exemption would be consistent with the treatment of other agents under the MSB rules. Customer identification program. The NPRM proposes that providers and sellers of prepaid access must establish procedures to verify the identity of a person who obtains prepaid access under a prepaid program; obtain identifying information concerning such a person, including name, date of birth, address, and identification number; and retain such identifying information for 5 years after the termination of the relationship. Submitting reports on suspicious activities. The NPRM proposes that MSBs must file reports on suspicious activities related to prepaid access.
The next steps that FinCEN plans to follow include (1) summarizing and analyzing the comments, (2) revising the regulation as proposed in the NPRM, if appropriate, (3) consulting with law enforcement and regulatory stakeholders and obtaining clearance within Treasury, (4) preparing a final rule for OMB to review, and (5) addressing any further comments from OMB.

FinCEN Has Not Yet Decided How Best to Address the International Transport of Stored Value

At the time of our review, FinCEN was considering several options to address the international transport of stored value; however, the agency has not yet decided on what course of action it will take or when. In the June 2010 NPRM, FinCEN stated that it plans to regulate the cross-border transport of stored value in a future rulemaking proposal in part because of issues identified with respect to financial transparency while performing its regulatory research of the stored value industry. According to FinCEN officials, they have not addressed the cross-border transport of stored value in the June 2010 NPRM because addressing regulatory gaps in (1) registration with FinCEN, (2) customer identification programs, and (3) reporting on suspicious activities had a higher priority. While FinCEN may ultimately call upon individuals to report stored value at the borders, FinCEN officials indicated that cross-border transparency and monitoring may be achieved through other means. According to FinCEN, one option it may use to achieve cross-border transparency is to call upon entities in the stored value industry to report suspicious activities related to the use of stored value that cuts across the nation’s borders. In addition, FinCEN is proposing in the June 2010 NPRM that providers of prepaid access maintain records that may include information on the type and amount of the transaction and the date, time, and location where the transaction occurred.
For example, such information could identify the purchase and use of stored value in and outside of the United States. FinCEN’s success in using this approach depends, in part, on (1) the degree to which entities report such instances in a complete and accurate fashion and (2) the timeliness of such reporting and the degree to which the information is shared with law enforcement agencies. The challenges FinCEN faces in using this approach are discussed later in this report.

FinCEN Has Developed Initial Plans for Issuing the Final Rule on Stored Value, but Its Plans Do Not Assess Ways to Mitigate Risks for Completing Rules on Stored Value

FinCEN has developed initial plans for issuing the final rules for stored value; however, its plans are missing key elements that are consistent with best practices for project management. Best practices for project management established by the Project Management Institute state that managing a project involves project risk management, which serves to increase the probability and impact of positive events and decrease the probability and impact of events adverse to the project. Project risk management entails determining which risks might affect a project, prioritizing risks for further analysis by assessing their probability of occurrence, and developing actions to reduce threats to the project. Other practices include (1) establishing clear and achievable objectives, (2) balancing the competing demands for quality, scope, time, and cost, (3) adapting the specifications, plans, and approach to the different concerns and expectations of the various stakeholders involved in the project, and (4) developing milestone dates to identify points throughout the project to reassess efforts under way to determine whether project changes are necessary. In an effort to meet the statutory deadline of February 2010, FinCEN developed preliminary plans and milestones for issuing the final rule on stored value.
For example, the agency identified certain steps in the rulemaking process, such as summarizing comments and making recommendations to management before finalizing the rule. However, FinCEN’s plans did not assess which risks might affect the project, prioritize risks for further analysis by assessing their probability of occurrence, or develop actions to reduce threats to the project as suggested by best practices for project management. While FinCEN officials acknowledge risks exist, such as not knowing whether the nature of these comments may cause FinCEN to change its policy path with respect to the NPRM, they have not produced a plan that identifies actions to reduce threats to the project nor does their plan (1) consider alternative approaches that the agency may need to take based on comments received, or (2) include the time it may take to produce a series of rules, including a rule that addresses the cross-border transport of stored value. Assessing ways to mitigate risks associated with issuing rules on stored value and the cross-border transport of stored value could better position FinCEN to provide reasonable assurance that it can produce a set of rules that (1) fulfills the requirements of the Credit CARD Act and (2) informs decisions related to improving anti-money laundering practices among the stored value industry. In general, federal rulemaking can be a lengthy process for significant regulatory action. In April 2009, we reported that the average time needed to complete a significant rulemaking across 16 case-study rules at four federal agencies was about 4 years—having a range from about 1 year to nearly 14 years with considerable variation among the federal agencies and rules. However, as called for by best practices for project management, all four of the federal agencies examined in the report set milestones for their regulatory development. 
Additionally, during the course of our review, one of the four agencies provided data showing it routinely tracked these milestones, and two federal agencies subsequently provided some documentation and data showing the same when commenting on our draft report. Our report concluded that monitoring actual versus estimated performance enables agency managers to identify steps in the rulemaking process that account for substantial development time and provides information necessary to further evaluate whether the time was well spent. A project management plan that is consistent with best practices could help FinCEN better manage its rulemaking effort. The Credit CARD Act required FinCEN’s effort in issuing regulations in final form implementing the BSA regarding the sale, issuance, redemption, or international transport of stored value to be completed within the prescribed time frame of 270 days from the date of enactment. However, FinCEN was unable to meet the statutory deadline of February 2010 to develop and issue these regulations and has much work to do to carry out the requirements of the Credit CARD Act. In addition to identifying and mitigating risks associated with the regulatory process, a project management plan could also help FinCEN (1) track and measure progress on tasks associated with completing mandated requirements and (2) identify points throughout the project to reassess efforts under way to determine whether goals and milestones are achievable or project changes are necessary. If such plans call for changes to time lines, then FinCEN could request legislation to extend the statutory deadline. Until the rule is finalized and implemented, vulnerabilities could continue to exist in the stored value industry with respect to the cross-border transport of stored value and money laundering for the purpose of supporting illegal activities.
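The 270-day statutory clock can be checked with simple date arithmetic. This sketch assumes the deadline is exactly 270 calendar days after enactment; statutory counting conventions can differ slightly.

```python
from datetime import date, timedelta

# The Credit CARD Act was enacted May 22, 2009, and required final
# regulations within 270 days, which the report describes as February 2010.
enactment = date(2009, 5, 22)
deadline = enactment + timedelta(days=270)
print(deadline)  # 2010-02-16, consistent with the February 2010 deadline
```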
More Work Remains to Ensure Agencies Enforce Cross-Border Currency Smuggling and Industry Complies with the Final Rule While issuing the final rule on stored value will be a major step toward addressing regulatory gaps, much work remains for law enforcement agencies to identify cross-border currency smuggling involving stored value and for issuers, sellers, and redeemers of such devices to implement anti-money laundering requirements after the final rule is issued. For example, FinCEN faces the task of conducting awareness programs about the new rule for officials in law enforcement and industry, as well as determining whether the new rule will address the Credit CARD Act requirements or whether additional rules will need to be developed. Beyond these tasks, federal law enforcement agencies and FinCEN face other challenges as well. These are described in more detail below. Enforcing the Cross-Border Requirements Related to Stored Value Will Be a Challenge If FinCEN requires individuals to declare stored value at the border when leaving the country, law enforcement officials we spoke with told us that they would encounter the following challenges. Detecting illegitimate stored value cards. According to the law enforcement officials we spoke with, it may be difficult to detect illegitimate stored value for three reasons. First, stored value cards loaded with large amounts of currency can be easily concealed in a wallet, letter, or package given the minimal physical space a stored value product occupies, particularly when compared to bulk cash. Second, stored value cards do not contain any features that distinguish them from traditional credit or debit cards. Third, there is no mechanism by which to distinguish stored value cards that an individual possesses for legitimate reasons from those possessed for illegitimate reasons. Obtaining proper traveler declarations.
The public would have to be made aware of any new declaration requirement for the international transport of stored value. Further, it may be difficult for a traveler to recall the value on a stored value card and for law enforcement to verify the value on a card. Unlike cash, which can be counted, the value of a stored value card can only be determined using a card reader or by accessing the account information. Seizing the funds. Unlike cash, which can be physically seized, seizing funds from a stored value card is much more difficult. Law enforcement first has to identify where the funds are held, which could be at any financial institution worldwide. Second, law enforcement would need to obtain a warrant in order to freeze and seize the funds. However, in the time it takes to obtain a warrant, it is possible that a suspect and any co-conspirators could transfer the funds off of the stored value card to another account. FinCEN Faces Challenges in Ensuring Industry Compliance With the New Rules FinCEN’s approach for addressing vulnerabilities with cross-border currency smuggling and other illicit use of stored value depends, in part, on ensuring that industry complies with the new rules. Among other things, FinCEN faces challenges in areas such as monitoring MSBs, addressing gaps in anti-money laundering practices of off-shore issuers and sellers of stored value, and educating industry about the new anti-money laundering requirements. Current Guidance for Monitoring MSB Compliance With Anti-Money Laundering Requirements Is Silent on Stored Value As administrator of the BSA, FinCEN is responsible for, among other things, developing regulatory policies for agencies that examine financial institutions and businesses for compliance with the BSA regulations.
FinCEN is also responsible for overseeing agency compliance examination activities and provides these agencies with assistance to ensure they are able to carry out their compliance exams. Treasury, through FinCEN, has delegated the authority to conduct compliance examinations of certain nonfederally regulated nonbank financial institutions (NBFI), including MSBs, to the Office of Fraud/BSA, within IRS’ Small Business/Self-Employed Division. IRS Fraud/BSA carries out this function with approximately 385 field examiners nationwide. FinCEN’s guidance for these examiners lacks specific information to follow when assessing compliance by MSBs that issue, sell, and redeem stored value. To provide guidance to these examiners for performing MSB examinations, in December 2008, FinCEN issued, jointly with IRS, the Bank Secrecy Act/Anti-Money Laundering Examination Manual For Money Services Businesses. FinCEN’s goal was to ensure consistency in the application of the anti-money laundering requirements called for by the BSA. The manual includes general procedures that are applicable to all MSBs, such as procedures for reviewing an anti-money laundering program, but it does not specifically address transaction testing procedures for examining issuers, sellers, and redeemers of stored value. Standards for Internal Control in the Federal Government state that an effective control environment is a key method to help agency managers achieve program objectives. The standards state, among other things, that agencies should have policies and procedures that enforce management’s directives. The standards also state that such control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results.
Developing policies and procedures for monitoring entities that issue, sell, and redeem stored value could help ensure that such entities carry out current and future anti-money laundering requirements. IRS Fraud/BSA officials acknowledged that there are no specific transaction testing procedures in the manual for examiners to follow at an MSB that issues, sells, and redeems stored value. They told us that at the time the manual was developed, FinCEN did not have sufficient information on the stored value industry and wanted to gain a better understanding of the industry before including examination procedures in the manual. In July 2010, FinCEN told us that it intends to update the manual to reflect final rules on MSB re-definitions and prepaid access. However, it is uncertain when it will do so because the manual update is contingent on completion of the final rules. FinCEN Faces Challenges in Tracking Reports on Suspicious Activities Related to Stored Value FinCEN faces challenges regarding the ease with which it can analyze the SAR database for reports related to stored value. We sought to identify the types of reported suspicious activities involving stored value or prepaid products by analyzing the SAR database; however, we experienced significant data challenges that limited our efforts. Currently, SAR forms do not contain a mechanism to indicate that stored value was the financial service involved in the suspicious activity, aside from including this information in the narrative portion of the form. Therefore, to identify SARs potentially involving stored value products, the narrative portion of the form must be searched using key terms, such as “stored value,” “prepaid card,” or “gift card,” that might indicate this activity. We reviewed a random probability sample of 400 SARs that were identified by using narrative search terms believed to identify SARs filed due to stored value.
However, for an estimated 39 percent of the reports, the suspicious activity described did not involve the use of stored value, even though one of the key search terms appeared in the narrative. For example, the search term identified in the narrative did not describe the type of suspicious activity that occurred, but rather was included in a description of the type of services the reporting entity offered. In another example, the SAR was filed for suspicion of credit card fraud or structuring, but the report also described the type of transactions the customer completed, one of which might have been the purchase of gift cards. Due to these database limitations, it is difficult to track and monitor suspicious activity and the risks related to the use of stored value. Standards for Internal Control in the Federal Government state that internal controls should include an assessment of the risks the agency faces from both external and internal sources. This guidance defines risk assessment as the identification and analysis of relevant risks associated with achieving the agency’s objectives and forming a basis for determining how risks should be managed. In addition, internal control standards state that once risks have been identified, they should be analyzed for their possible effect. Risk analysis includes estimating the risk’s significance, assessing the likelihood of its occurrence, and deciding how to manage the risk and what actions should be taken. To address the difficulty in tracking suspicious activities related to stored value, FinCEN has discussed SAR form revisions with the Data Management Council that include check boxes for the types of items involved in the suspicious activity, including prepaid products. FinCEN plans to implement a revised SAR form with these changes in fiscal year 2012.
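The narrative keyword screening and the sample-based estimate described above can be sketched as follows. This is an illustrative sketch only: the narratives are invented, and the rough confidence interval assumes a simple random sample, which only approximates the design of GAO’s probability sample.

```python
import math

# Key terms the report says must be used to find candidate SARs,
# since the forms have no check box for stored value.
SEARCH_TERMS = ("stored value", "prepaid card", "gift card")

# Invented narratives: a term match does not guarantee the suspicious
# activity actually involved stored value (a false positive).
narratives = [
    "Customer structured cash deposits and bought numerous gift cards.",  # true hit
    "Filer offers wire transfer and prepaid card services to customers.", # false positive
    "Suspected check kiting across two related accounts.",                # no match
]

flagged = [n for n in narratives if any(t in n.lower() for t in SEARCH_TERMS)]
print(len(flagged))  # → 2

# GAO reviewed a sample of 400 flagged SARs and estimated that about
# 39 percent did not actually involve stored value. A rough 95 percent
# confidence interval for that proportion:
n, p = 400, 0.39
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"estimated false-positive rate: {p:.0%} +/- {margin:.1%}")
```

The sketch shows why keyword screening alone is an unreliable tracking mechanism: the second narrative matches a search term without describing any stored value activity, which is exactly the ambiguity a dedicated check box on the SAR form would remove.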
Making the planned SAR form changes could better position FinCEN to fully evaluate the potential impact of the stored value industry on its ability to carry out the agency’s broad mission of enhancing U.S. national security, deterring and detecting criminal activity, and safeguarding financial systems from abuse. FinCEN Faces Challenges in Developing a More Complete Database of MSBs FinCEN does not have a complete database on MSBs, including those that issue, sell, and redeem stored value. The lack of a comprehensive database complicates FinCEN’s ability to educate all MSBs and other entities about the new rule and its anti-money laundering requirements, since the agency does not have full knowledge of the MSB population or of other entities involved with stored value, such as the telecommunications industry (mobile devices, including cellular phones and other wireless communication devices). These entities may not be as familiar with BSA anti-money laundering requirements and may need more time and orientation to understand and meet the new requirements. Historically, identifying the population of MSBs subject to BSA requirements has been a challenge for FinCEN and IRS Fraud/BSA. This challenge has been well documented over the years by, among others, Treasury’s Inspector General, us, and, more recently, the 2007 National Money Laundering Strategy Report. To illustrate this problem, IRS Fraud/BSA uses its Web-based Currency and Banking Retrieval System, public and commercial databases, Internet searches, and the yellow pages to identify MSBs to monitor because a complete database of MSBs does not exist. FinCEN performed searches of past BSA reports and received referrals from other law enforcement officials about potential MSBs to monitor. However, not all of the businesses identified were actually subject to BSA requirements.
FinCEN officials told us they plan to use the Bank Secrecy Act Advisory Group and its Subcommittees, including the Stored Value Subcommittee, to identify ways to perform appropriate outreach to applicable MSBs, in part to develop a more complete database. FinCEN has not set a date for completion of this effort because its plans have not been finalized. FinCEN officials told us that under the new rule, monitoring and compliance may be performed by its Office of Compliance, Office of Enforcement, and IRS Fraud/BSA. However, even if FinCEN is able to develop a more complete database of MSBs, the degree to which IRS will monitor MSBs involved in issuing, selling, and redeeming stored value is an open question. For example, in March 2010, IRS told us that MSBs that provide stored value services generally have not been the target of compliance exams in recent years. IRS Fraud/BSA officials told us that most MSBs they examined for fiscal years 2007 through 2009 provided some other financial service (e.g., money transmission, check cashing, and issuing and selling traveler’s checks) as their primary financial service and may have conducted stored value transactions as an auxiliary financial service. IRS Fraud/BSA work plans during this period, as well as for the current fiscal year (2010), excluded examination of MSBs whose primary financial service is stored value for two reasons: (1) most MSBs that were examined provided multiple financial services, of which stored value may have been only one, and (2) the existing statutory requirements for entities that offer stored value products are minimal, and IRS resources would be better spent focusing on other MSBs.
FinCEN’s Efforts to Close Gaps in Anti-Money Laundering Regulations for Off-Shore Entities Have Made Progress, but More Work Remains Combating the use of stored value by criminals involves not only efforts to implement anti-money laundering practices domestically, but also extending these efforts to international financial markets. Stored value issuers outside of the United States are generally not subject to FinCEN’s anti-money laundering regulations, even though the stored value products they issue may be used in the United States or elsewhere in the world. Such devices can be loaded with money in this country and the money withdrawn in foreign countries through ATMs. Prior to enactment of the Credit CARD Act, FinCEN had begun the process of proposing a new rule to address, among other things, off-shore MSBs that market their stored value products in the United States, but the final rule has been delayed. As of April 2010, agency officials told us the final rule related to off-shore MSBs will be delayed and issued at the time FinCEN issues the final rule addressing the requirements under the Credit CARD Act. This would allow the provisions in both rules to be synchronized, along with appropriate references, because the two rules are closely related to one another. Meanwhile, one way Treasury and FinCEN are addressing off-shore providers of stored value is through an intergovernmental entity called the Financial Action Task Force (FATF). FATF’s purpose is to establish international standards and to develop and promote policies for combating money laundering and terrorist financing.
In 2006, FATF issued a study concluding that providers of new payment methods, such as stored value and mobile payments, that are outside the jurisdiction of a given country may pose additional risks of money laundering when (1) the distribution channel being used is the Internet, (2) there is no face-to-face contact with the customer, and (3) the new payment network operates through an open network that can be accessed in a high number of jurisdictions (e.g., ATMs worldwide). More recently, in its Strategic Plan (2008-2012), FinCEN recognized that addressing the risk of cross-border transport and money laundering through the use of such devices calls for an approach that involves international cooperation with regulatory and law enforcement agencies outside of the United States. The degree to which FinCEN will succeed in gaining the cooperation of agencies outside of the United States in regulating stored value remains an open question. Officials at three of the six law enforcement agencies we spoke with expressed concern about the risk of money laundering from off-shore MSBs. Treasury officials we interviewed told us that the agency led the 2006 effort to disclose risks of money laundering related to off-shore MSBs that sell and issue stored value products and that a new effort is currently under way to update the conditions and findings to see what more needs to be done to deter the use of such products for money laundering and terrorist financing activities. Treasury officials told us the updated report is being co-chaired by FATF representatives from Germany and the Netherlands, with Treasury as a participating member. Although originally scheduled for June 2010, the revised issuance date for the updated report on new payment methods is October 2010.
FinCEN May Need to Evaluate Alternative Approaches to the Proposed Rule In the NPRM, FinCEN included a request for comments on the proposed rule, as well as 15 questions it asks stakeholders to comment on. For example, FinCEN’s proposed rule exempts certain types of prepaid devices from anti-money laundering requirements. Specifically, the proposed rule exempts devices that are (1) used to distribute payroll or benefits; (2) used to distribute government benefits; or (3) used for pre-tax flexible spending accounts for health care and dependent care expenses. The proposed rule also exempts programs offering closed system products that can only be used domestically, as well as products that limit the maximum value and transactions to $1,000 or less at any given time. As stated in the NPRM, FinCEN recognizes that some members of the law enforcement community have expressed concern about exempting prepaid access payroll programs from anti-money laundering requirements. To address concerns such as these, FinCEN has requested comments on methods for ensuring that the company and its employees are legitimate and that the program is valid. FinCEN has also asked for comments on the $1,000-a-day threshold as it may apply to transactions involving multiple MSBs. According to FinCEN, it will consider this matter and any comments that it receives. As it reviews the comments, FinCEN may need to evaluate alternatives to exempting such prepaid programs to address the risk of money laundering and the transport of such devices across the nation’s borders to finance illegal activities. OMB Circular A-4 and Executive Order 12866, as amended, indicate that analysis of alternatives is a key component in assessing proposed rules.
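FinCEN’s question about how the $1,000-a-day threshold applies to transactions involving multiple MSBs can be illustrated with a simple hypothetical, in which the amounts and MSB names are invented: each load stays at or under the per-MSB limit, yet the customer’s daily total does not.

```python
# Invented same-day prepaid loads by one customer at three different MSBs,
# each at or below the $1,000 threshold in the proposed exemption.
loads = {"MSB A": 900, "MSB B": 800, "MSB C": 950}
LIMIT = 1000

# No single MSB observes a transaction over the threshold...
assert all(amount <= LIMIT for amount in loads.values())

# ...but the aggregate across MSBs exceeds it, and no one MSB can see that.
total = sum(loads.values())
print(total)  # → 2650
```

The hypothetical is the crux of the comment question: a per-MSB threshold cannot, by itself, detect structuring spread across institutions, which is one reason FinCEN may need to weigh alternatives to the proposed exemption.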
FinCEN Acknowledges that Outreach Will Help Ensure Industry Compliance With the New Rules While the June 2010 NPRM proposes regulations that would require each “provider of prepaid access” to register with FinCEN and carry out anti-money laundering requirements related to prepaid access, the degree to which providers will register with FinCEN and carry out the proposed requirements remains an open question. FinCEN may face two challenges in this regard. First, while the proposed rule describes characteristics of MSBs that may qualify as a provider, entities in the prepaid access industry may not immediately know whether they are a provider without further clarification from FinCEN. This condition could lead to entities not registering as providers when FinCEN intends that they follow the anti-money laundering requirements for such entities. Second, while sellers are exempt from registration requirements under the proposed rule, they are required to comply with certain anti-money laundering requirements. Not knowing whether the universe of providers and sellers is complete and accurate may hinder compliance efforts. As a result, without a program to educate industry about the rule and how to apply it, FinCEN may face a higher risk of noncompliance. In July 2010, FinCEN officials told us that they typically develop and conduct industry outreach, as resources allow, to support the implementation of major new rulemakings. FinCEN officials explained that these outreach activities greatly assist covered industries in better understanding new rules and how they are to be applied. Because the prepaid access rulemaking is ongoing and FinCEN is awaiting feedback on its proposed regulations, preparations and planning for outreach to providers of prepaid access and other affected industry participants are in the initial phases, according to agency officials.
Officials told us that FinCEN will continue its discussions with the Bank Secrecy Act Advisory Group and its Subcommittees to gain insight on how best to reach those affected by any final regulations. According to FinCEN, the level of effort associated with a major industry outreach effort of this kind will be significant. Conclusions Moving illegal proceeds across the border, whether in the form of bulk cash or stored value, represents a significant threat to national security. While CBP’s outbound inspection effort has shown some early results, particularly in terms of bulk cash seized, the program’s future is uncertain. If DHS continues to conduct outbound inspections, CBP faces important decisions regarding resources and processes for outbound inspections, and without all the necessary information, CBP may be unable to most effectively inform decisions on where scarce resources need to be applied. In addition, CBP could improve its Outbound Enforcement Program by directing and ensuring that ports of entry develop guidance that addresses officer safety. Also, by establishing performance measures related to program effectiveness, CBP could be better positioned to show the degree to which its efforts are stemming the flow of cash, weapons, and other goods derived from criminal activities. While we recognize that this is a new program, without data and information to inform resource decisions, help ensure that officers are safe, and measure program effectiveness, CBP risks that the program could result in an inefficient use of resources, that officers could be endangered, and that Congress will not have the information it needs for its oversight efforts. Even if efforts to reduce the flow of bulk cash into Mexico are successful, drug trafficking organizations and other criminal elements may shift their tactics and use other methods to smuggle illegal proceeds out of the United States, such as through the use of stored value.
FinCEN is in the process of developing and issuing regulations related to the issuance of stored value, as required by the Credit CARD Act, but work remains and it is unclear when the agency will issue the final regulation. By developing a management plan with timelines for issuing final rules, FinCEN could be better positioned to manage its rulemaking efforts and to reduce the risk of cross-border smuggling and other illicit uses of stored value by drug trafficking organizations and others. Developing policies and procedures, such as transaction testing procedures, for monitoring MSBs that issue, sell, and redeem stored value could help ensure that such MSBs carry out current and future anti-money laundering requirements. Recommendations for Executive Action To strengthen CBP’s implementation of the Outbound Enforcement Program as well as its planning efforts related to the program, we recommend that the Secretary of Homeland Security direct the Commissioner of Customs and Border Protection to take the following three actions: Collect cost and benefit data that would enable a cost/benefit analysis of the Outbound Enforcement Program to better inform decisions on where scarce resources should be applied. These data could include cost data on training and using currency canines for outbound operations as part of the Outbound Workload Staffing Model, cost estimates for equipping officers, installing technology to support outbound operations, assessments of infrastructure needs at port of entry outbound lanes, an estimate of the costs resulting from travelers waiting to be inspected, and information on quantifiable benefits, such as seizures, as well as non-quantifiable benefits resulting from outbound inspections. Direct and ensure that managers at land ports of entry develop policies and procedures that address officer safety, such as detailing how officers should conduct outbound inspections in a busy highway environment.
Develop a performance measure that informs CBP management, Congress, and other stakeholders about the extent to which the Outbound Enforcement Program is effectively stemming the flow of bulk cash, weapons, and other goods derived from criminal activities, by working with other federal law enforcement agencies involved in developing assessments on bulk cash and other illegal goods leaving the country. To strengthen FinCEN’s rulemaking process and to ensure IRS compliance examiners consistently apply the anti-money laundering requirements under the Credit CARD Act, we recommend that the Director of FinCEN take the following two actions: Update its written plan to describe, at a minimum, FinCEN’s overall strategy, risk mitigation plans, and target dates for implementing all of the requirements under the Credit CARD Act, including target dates for issuing notices of proposed rulemaking and final rules. Revise its guidance manual to include specific examination policies and procedures, including transaction testing procedures, for IRS examiners to follow at an MSB that issues, sells, and/or redeems stored value. Agency Comments and Our Evaluation We provided a draft of the sensitive version of this report to DHS, the Department of the Treasury, and DOJ for comment. In commenting on our draft report, DHS, including CBP, concurred with our recommendations. Specifically, DHS stated that it is taking action or plans to take action to address each recommendation. For example, DHS stated that it is collecting cost data as well as identifying quantifiable and non-quantifiable benefits of the outbound program to conduct a cost/benefit analysis. In addition, DHS stated that it will update its National Outbound Operations Policy Directive to ensure each Port Director establishes a standard operating procedure for officer safety. DHS also stated that it will work to develop effective performance measures that accurately assess its surge-type outbound operations.
CBP stated that it will coordinate with other law enforcement entities, including other DHS components and DOJ as well as the White House Office of National Drug Control Policy to enhance CBP interdiction efforts. DHS also stated that it is investigating the implementation of a random sampling process in the outbound environment that would provide statistically valid compliance results for outbound operations. If effectively implemented, these actions would address the intent of our recommendations. In commenting on our draft report, Treasury, including FinCEN, stated that they agree with our recommendations. Specifically, Treasury stated that it anticipates issuing additional rulemaking to address all areas of potential vulnerability in the prepaid access sector. Treasury stated that although identifying target dates is particularly challenging when taking a phased approach to rulemaking, it agrees that the existing plan should be updated accordingly. Additionally, Treasury stated that when the initial rulemaking is finalized, it will proceed with its plan to update the Money Services Business examination manual and other related outreach efforts. If effectively implemented, these actions would address the intent of our recommendations. DOJ did not have formal comments on our report. DHS, Treasury, and DOJ provided technical comments, which we incorporated as appropriate. Appendix III contains written comments from DHS. Appendix IV contains written comments from Treasury. As arranged with your offices, we plan no further distribution of this report until 30 days after the issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Attorney General of the United States, the Secretary of the Treasury, the Director of the Office of Management and Budget, and the appropriate congressional committees. In addition, the report will be available at no charge on the GAO Website at http://www.gao.gov. 
If your offices or staff have any questions concerning this report, please contact me at (202) 512-8777 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Appendix I: Costs for Outbound Enforcement Program (Fiscal Years 2008-2010) [Table of program costs by fiscal year, including projected fiscal year 2010 figures and miscellaneous expenses, omitted.] Appendix II: General Overview of the Federal Rulemaking Process This appendix provides an overview of the steps in the rulemaking process for a significant regulatory action under Executive Order 12866, as amended, and the potential time involved for some of the steps. Step 1: Agency (or agencies, if a joint rule) completes development of the notice of proposed rulemaking (NPRM), which includes the proposed rule and supplemental information. Step 2: Agency submits the draft NPRM and supporting materials, including any required cost-benefit analysis, to the Office of Management and Budget (OMB) for review. Step 3: OMB reviews the draft NPRM and supporting materials and coordinates review of the proposed rule by any other agencies that may have an interest in it. Step 4: OMB notifies the agency in writing of the results of its review, including any provisions requiring further consideration by the agency, within 90 calendar days after the date of submission to OMB. Step 5: OMB resolves disagreements or conflicts, if any, between or among agency heads or between OMB and any agency; if it cannot do so, such disagreements or conflicts are resolved by the President or by the Vice President acting at the request of the President. Step 6: Once OMB notifies the agency that it has completed its review without any requests for further consideration, the agency reviews the NPRM and publishes it for public comment in the Federal Register.
Step 7: Agency is to give the public a meaningful opportunity to comment on the proposed rule, which generally means a comment period of not less than 60 days. Step 8: Once the comment period has closed, the agency reviews the comments received, makes appropriate revisions to the proposed rule, and prepares a notice of the final rule, including supplemental information with responses to comments received. Step 9: Agency submits draft notice and final rule, including updated supporting materials or cost-benefit analysis, to OMB for review. Step 10: OMB reviews the draft notice, final rule, and supporting materials; coordinates review by any other agencies that may have an interest in the rule; and notifies the agency of the results within 90 calendar days after the date of submission to OMB. Step 11: Once OMB notifies the agency that it has completed its review without any requests for further consideration, the agency reviews the rule one more time and generally publishes the final rule and supplemental information in the Federal Register at least 60 days before the new rule takes effect. Appendix III: Comments from the Department of Homeland Security Appendix IV: Comments from the Department of the Treasury Appendix V: GAO Contact and Staff Acknowledgments In addition to those named above, David Alexander, Neil Asaba, Chuck Bausell, Willie Commons III, Kevin Copping, Mike Dino, Ron La Due Lake, Jan Montgomery, Jessica Orr, Susan Quinlan, Jerome Sandau, Wesley Sholtes, Jonathan Smith, Katherine Trenholme, and Clarence Tull were key contributors to this report.
U.S. Customs and Border Protection (CBP) is the lead federal agency responsible for inspecting travelers who seek to smuggle large volumes of cash, known as bulk cash, when leaving the country through land ports of entry. It is estimated that criminals smuggle $18 billion to $39 billion a year in bulk cash across the southwest border. The Financial Crimes Enforcement Network (FinCEN) is responsible for reducing the risk of cross-border smuggling of funds through the use of devices called stored value, such as prepaid cards. GAO was asked to examine (1) the extent of actions taken by CBP to stem the flow of bulk cash leaving the country and any challenges that remain, (2) the regulatory gaps, if any, in cross-border reporting and other anti-money laundering requirements for stored value, and (3) if gaps exist, the extent to which FinCEN has addressed them. To conduct its work, GAO observed outbound operations at five land ports of entry. GAO also reviewed statutes, rules, and other information for stored value. This is a public version of a law enforcement sensitive report that GAO issued in September 2010. Information CBP deemed sensitive has been redacted. In March 2009, CBP created an Outbound Enforcement Program aimed at stemming the flow of bulk cash leaving the country, but further actions could be taken to address program challenges. Under the program, CBP inspects travelers leaving the country at all 25 land ports of entry along the southwest border. On the northern border, inspections are conducted at the discretion of the Port Director. From March 2009 through June 2010, CBP seized about $41 million in illicit bulk cash leaving the country at land ports of entry. Stemming the flow of bulk cash, however, is a difficult and challenging task. For example, CBP is unable to inspect every traveler leaving the country at land ports of entry, and smugglers of illicit goods have opportunities to circumvent the inspection process.
Other challenges involve limited technology, infrastructure, and procedures to support outbound operations. CBP is in the early phases of this program and has not yet taken some actions to gain a better understanding of how well the program is working, such as gathering data for measuring program costs and benefits. By gathering data for measuring expected program costs and benefits, CBP could be in a better position to weigh the costs of any proposed expansion of the outbound inspection program against likely outcomes. Regulatory gaps in cross-border reporting and other anti-money laundering requirements exist for stored value. For example, travelers must report transporting more than $10,000 in monetary instruments or currency at one time when leaving the country, but FinCEN does not have a similar requirement for travelers transporting stored value. Similarly, certain anti-money laundering regulations, such as reports on suspicious activities, do not apply to the entire stored value industry. The nature and extent of the use of stored value for cross-border currency smuggling and other illegal activities remains unknown, but federal law enforcement agencies are concerned about its use. FinCEN is developing regulations, as required by the Credit CARD Act of 2009, to address gaps in regulations related to the use of stored value for criminal purposes, but much work remains. FinCEN has not developed a management plan that includes, among other things, target dates for completing the regulations. Developing such a plan could help FinCEN better manage its rulemaking effort. When it issues the regulations, law enforcement agencies and FinCEN may be challenged in ensuring compliance by travelers and industry.
For example, FinCEN will be responsible for numerous tasks, including issuing guidance for compliance examiners, revising the way in which it tracks suspicious activities related to stored value, and addressing gaps in anti-money laundering regulations for offshore entities that issue and sell stored value.
Background
HUD encourages homeownership by providing mortgage insurance for single family housing and makes rental housing more affordable for about 4.8 million low-income households by insuring loans to construct or rehabilitate multifamily rental housing and by assisting such households with their rent payments. In addition, it has helped to revitalize over 4,000 localities through community development programs. To accomplish these diverse missions, HUD relies on third parties, including contractors, to administer many of its programs. As shown in figure 1, according to data HUD reported to the Federal Procurement Data Center (FPDC), for fiscal year 2000, HUD obligated the bulk of its contracting dollars—over 96 percent—in three categories of contracting: automated data processing and telecommunications services (about $254 million); operation of government-owned facilities (about $195 million, for one of its multifamily contractors); and professional, administrative, and management support services contracts, such as real estate brokerage services and technical assistance (over $600 million). According to HUD data, about $640 million of the $1.2 billion in contract obligations for fiscal year 2000 was for Office of Housing contracts; much of this contracting was for services HUD needs to manage its foreclosed single-family and multifamily housing inventory, which HUD acquires when borrowers default on mortgages insured by the Federal Housing Administration (FHA). According to HUD’s fiscal year 2001 annual performance report, as of September 30, 2001, the Secretary held single-family property valued at about $2.4 billion and multifamily property valued at about $750 million. In its single-family program, HUD hires management and marketing contractors who are responsible for securing, maintaining, and selling the houses that HUD acquires when the owners default on their loans.
HUD also contracts for property management services, such as on-site management, rent collection, and maintenance, for multifamily properties it acquires through foreclosure. HUD’s two largest multifamily property management contracts have an obligated value of about $650 million over 5 years. Contracting is conducted in HUD’s Office of the Chief Procurement Officer (OCPO) in Washington, D.C., or by one of HUD’s three Field Contracting Operations (FCO) offices, located in Philadelphia, Pennsylvania; Atlanta, Georgia; and Denver, Colorado. OCPO contracts for information technology and other services in support of HUD headquarters. FCO offices primarily contract for services related to the business operations of HUD’s field offices and specialized centers. For example, contracting officers in one FCO assist HUD’s two Multifamily Property Disposition Centers (located in Atlanta, Georgia, and Ft. Worth, Texas) in contracting for and overseeing the property management contractors that are responsible for the day-to-day management of foreclosed multifamily properties. HUD’s Office of Multifamily Housing field offices and two Property Disposition Centers are responsible for the oversight of various programs to provide affordable multifamily housing. The largest multifamily contracts in the field are for the Property Disposition Centers, which are responsible for management of foreclosed multifamily properties. HUD’s multifamily housing field offices—comprising 18 hub offices and their associated 33 program centers—also contract for inspections of the construction of multifamily properties built under its FHA-insured and assisted multifamily housing programs, which include the construction of housing for the elderly and disabled. Multifamily Housing has four full-time government technical representatives (GTRs) located in Atlanta and Ft. Worth who are responsible for monitoring most field multifamily contracts; two GTRs are assigned to the Property Disposition Centers, and two are responsible for the construction inspection contracts. Most of the field contracts are also assigned at least one government technical monitor (GTM), who is designated to assist the GTRs, on a part-time basis, with the day-to-day technical oversight of the contractors’ performance. Various federal laws, regulations, and policies govern contracting operations and procedures. The Federal Acquisition Regulation (FAR) establishes uniform policies and procedures for acquisitions by all executive agencies, covering all aspects of the contracting process, from solicitation to postaward monitoring, including the responsibilities of the various members of the acquisition team, such as the contracting officer. In 1974, the Office of Federal Procurement Policy (OFPP) Act created OFPP within the Office of Management and Budget to provide, among other things, governmentwide policies for agencies in procurement matters. OFPP’s Guide to Best Practices for Contract Administration recommends the use of a contract administration plan for good contract administration. According to the guide, this plan should specify the performance outputs and describe the methodology used to conduct inspections of those outputs. HUD supplements the FAR through its own regulations, called the HUD Acquisition Regulation (HUDAR), and its Procurement Policies and Procedures Handbook. The handbook specifies various monitoring tools that the GTR may use to monitor contractor performance, such as a quality assurance plan, a contractor’s work plan and schedule of performance, or progress reports.
The purpose of the monitoring is to ensure that (1) the contractor performs the services and/or delivers the products of the type and quality that the contract requires, (2) performance is along the most efficient lines of effort, (3) performance and deliverables are timely, (4) performance is within the total estimated cost, and (5) HUD will be able to properly intervene when performance is deficient. For example, according to the handbook, often the best way for a GTR to determine the quality of the contractor’s performance is through an actual inspection of work or products. Inspections may be routine, unannounced, or a combination of the two, and the contract should specify any requirement for routine inspections, such as the frequency and dates, or other occurrences that would trigger an inspection. The handbook does not establish specific monitoring requirements, such as timetables for review or numbers of site visits; however, some of the policies established by individual program offices do include such requirements. For example, the Office of Multifamily Housing’s Standard Operating Procedures No. 5 requires that HUD staff visit construction sites at least twice during construction to verify the performance of the inspection contractor who is responsible for inspecting actual construction of the project. The Clinger-Cohen Act of 1996 requires executive agencies, through consultations with OFPP, to establish education, training, and experience requirements for acquisition workforces at civilian agencies. Under implementing guidance issued by OFPP, an agency’s acquisition workforce—including its contracting officers, contract specialists, purchasing agents, contracting officer representatives, and contracting officer technical representatives—must meet an established set of contracting competencies. In addition, OFPP has identified specific training requirements for personnel in the contracting and purchasing occupation series. 
OFPP further required that agencies have policies and procedures that specify career paths and mandatory training requirements for acquisition positions and that agencies collect and maintain standardized information on the training of their acquisition workforce. A strong internal control system provides the framework for accomplishing management objectives, accurate financial reporting, and compliance with laws and regulations. Effective internal controls, including monitoring, serve as checks and balances against undesired actions, thereby providing reasonable assurance that resources are effectively managed and accounted for. A lack of effective internal controls puts an entity at risk of fraud, waste, abuse, and mismanagement. Monitoring is a particularly critical management control tool for HUD because its housing programs rely extensively on various third parties, such as contractors, to achieve HUD’s goals. For many years, HUD has been the subject of criticism for management and oversight weaknesses that have made its programs vulnerable to fraud, waste, abuse, and mismanagement. In January 2001, we recognized the credible progress that HUD had made in improving its management and operations, and we reduced the number of HUD program areas deemed to be high risk to two of its major program areas—single family mortgage insurance and rental housing assistance. These program areas include the single family and multifamily property disposition activities cited earlier and comprise about two-thirds of HUD’s budget. We, HUD’s Office of Inspector General (OIG), and the National Academy of Public Administration (NAPA) have reported on weaknesses in HUD’s contract administration and monitoring of contractors’ performance over several years. For example, starting in 1998, we reported that HUD did not have an adequate system in place to assess its field offices’ oversight of its single-family real estate asset management contractors.
In three offices that we reviewed, none adequately performed all of the functions needed to ensure that the contractors met their contractual obligation to maintain and protect HUD-owned properties. HUD’s OIG completed a comprehensive review of HUD’s contracting operations in 1997 and found that a lack of adequate needs assessment, initial planning, monitoring, and cost control on several multimillion dollar contracts left HUD vulnerable to waste and abuse. The OIG found contract monitoring to be very lax throughout the program areas. The GTRs and GTMs had a poor understanding of their roles and responsibilities; as a result, HUD was overbilled, improperly authorized contract tasks, accepted incomplete contract work without financial credits or adjustments, and could not document whether certain tasks were completed. In a follow-up review in 1999, the OIG reported that HUD’s reforms had laid the groundwork for an effective acquisition process but concluded that HUD’s contracting attitudes and practices had not changed significantly. In May 1997, NAPA reported that HUD’s procurement process took too long, that FHA’s oversight of contracted services was inadequate, and that FHA sometimes used contracting techniques that limited competition. In 1999, NAPA issued the final report on its study and noted that HUD had made progress toward improving its procurement processes.
HUD Has Increased Contracting Activities and Taken Steps to Improve Acquisition Management
HUD’s contracting obligations have been on an upward trend in recent years; HUD reports that its contracting obligations increased from about $786 million in fiscal year 1997 to almost $1.3 billion in fiscal year 2000 (in 2001 constant dollars), an increase of about 62 percent. Much of this increase compensates for staff reductions, from about 13,500 employees in the early 1990s to about 9,000 by March 1998, and the resulting need to contract out for activities previously done by HUD employees, as well as for new functions, such as the physical building inspections of public housing and multifamily insured projects initiated under recent management reforms. Figure 2 shows the change in HUD’s contracting dollars from fiscal year 1995 through fiscal year 2000 (in 2001 constant dollars). With recent management reform initiatives, HUD now contracts out for many activities formerly done by HUD employees, as well as for new functions. For example, HUD’s Homeownership Centers (HOC) hired contractors to review single-family loan files and issue mortgage insurance; the HUD staff who formerly performed these functions then became contract monitors. HOCs also contracted with a new type of contractor to manage and market acquired single-family properties, and awarded other contracts to inspect 10 percent of the properties handled by each of the management and marketing contractors and to review 10 percent of those contractors’ property case files each month. HUD’s Office of Multifamily Housing contracted out work previously done by HUD employees, including inspections of repairs and inspections during construction of insured and assisted multifamily properties. The department expects its reliance on contracting to continue on an upward trend.
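The growth figure above can be checked with simple arithmetic. This is only an illustrative back-of-the-envelope computation using the rounded constant-dollar amounts cited in the report:

```python
# Reported fiscal year 1997 contracting obligations (2001 constant dollars)
# and the reported growth rate of about 62 percent.
fy1997_obligations_millions = 786
reported_increase = 0.62

# Applying the reported increase yields the implied fiscal year 2000 level.
fy2000_estimate = fy1997_obligations_millions * (1 + reported_increase)

print(round(fy2000_estimate))  # about 1273, i.e., "almost $1.3 billion"
```

The result, roughly $1.27 billion, is consistent with the "almost $1.3 billion" figure HUD reported for fiscal year 2000.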
Although HUD’s Deputy Secretary has expressed an interest in possibly returning some of this contract work to HUD employees, the President’s fiscal year 2003 budget proposed that federal agencies open to public-private competition at least 5 percent of the full-time equivalent positions determined to be commercial activities in fiscal year 2002 and an additional 10 percent in fiscal year 2003. HUD’s fiscal year 2003 budget set a goal of opening 290 additional HUD positions to competition in fiscal year 2003 and an additional 580 positions in fiscal year 2004. HUD is also planning to renew, in the near future, some of its major contracts that involve substantial financial commitments over an extended period of time. HUD’s fiscal year 2002 procurement forecast includes plans to award new contracts for its major multifamily property management activities, with an expected cost of $800 million over 5 years. HUD also plans to award a new information technology systems contract that is expected to cost about $2 billion over 10 years.
HUD Has Taken Several Steps to Improve Acquisition Management
In response to the weaknesses in its contract administration and monitoring reported by us, HUD’s Office of Inspector General, and NAPA, HUD undertook a number of corrective actions to improve its acquisition management in recent years. For example, HUD instituted full-time GTR positions in its program offices to assist in contract administration. HUD also created a GTR certification program in 1998 to establish standard training requirements for HUD staff who serve as GTRs and to provide them with an understanding of the federal contracting process as implemented at HUD. In addition to classroom training, HUD has developed on-line GTR training to supplement the classroom training. In 1997, HUD created a centralized management information system, called the HUD Procurement System (HPS), to assist in managing its contracts.
HUD upgraded the system in fiscal year 1998 to consolidate headquarters and field contracting data and to improve integration with HUD’s financial systems. As a result, for the first time HUD had a single source for contracting data. In 1999, the HUD Office of Inspector General recognized that HUD had made substantial strides in automating the department’s procurement data and establishing the financial linkages necessary to integrate HPS with HUD’s core accounting system. HUD also hired a Chief Procurement Officer and created a Contract Management Review Board in 1998 to improve contract administration and procurement planning. The Contract Management Review Board reviews all contracts over $500,000 to provide a departmentwide planning perspective. In addition, HUD has increased its training budget for those in acquisition positions, from $66,871 in fiscal year 2000 to $163,537 in fiscal year 2002. HUD has also implemented a compliance and monitoring initiative to assist staff in prioritizing their responsibilities and directing their resources. This initiative, although not specifically targeted to contract monitoring, emphasizes the importance of risk-based approaches to monitoring. HUD reported that over 1,200 staff were trained through fiscal year 2001, and the department’s fiscal year 2003 annual performance plan anticipates increasing the number of trained staff to more than 2,000. Despite these improvements, we have continued to identify deficiencies in HUD’s acquisition management. For example, in October 2001, we reported that HUD relied on contracting to address staffing shortfalls rather than assessing whether contracting was a better or more effective solution, and that problems continued in HUD’s oversight of its contractors.
We concluded that HUD’s acquisition management was one of the significant challenges facing HUD in its attempts to sustain the progress of its management reform and move toward its goal of becoming a high-performing organization. Our current work has found that specific deficiencies remain in HUD’s oversight of contracts, management of the acquisition workforce, and the reliability and availability of data needed to manage contract operations.
HUD Does Not Employ Certain Processes and Practices that Would Facilitate Effective Contractor Monitoring
Holding contractors accountable for results requires processes and procedures that facilitate effective monitoring. HUD, and in particular the multifamily housing program, does not employ certain processes and practices that would aid in oversight of its contractors. HUD does not use a systematic approach, such as monitoring plans or a risk-based strategy, to guide its monitoring of contractors. And the monitoring that does occur is generally remote, consisting mainly of reviews of progress reports and invoices, telephone calls, and emails. Without a systematic approach to oversight and adequate on-site monitoring, the department’s ability to identify and correct contractor performance problems and hold contractors accountable for results is reduced. The resulting vulnerability limits HUD’s ability to assure itself that it is receiving the services for which it pays.
HUD’s Monitoring of Contractors Is Not Systematic
HUD does not employ a systematic process for monitoring its contractors that consistently uses the plans and risk-based strategies needed to guide its monitoring; nor does HUD track the contractor performance information needed for such plans and strategies.
HUD’s Procurement Policies and Procedures Handbook provides a framework for the monitoring of contractors and establishes various monitoring tools that GTRs should use to ensure that contractors are held accountable for results, including a contract administration plan, a quality assurance plan, and a contractor’s work plan and schedule of performance. However, our review of 43 active contracts, out of 49 contracts administered by HUD’s Office of Multifamily Housing, found that the GTRs on 30 of these contracts (70 percent) did not make use of any of these plans. Among these plans, OFPP’s Guide to Best Practices for Contract Administration describes the contract administration plan as essential for good contract administration. According to OFPP, this plan must specify the performance outputs and describe the methodology used to conduct inspections of those outputs. According to our survey, only 23 percent of HUD’s GTRs use contract administration plans, and 32 percent reported that they had never heard of such a plan. In 1999, HUD’s Inspector General found that HUD’s various offices did not consistently develop and implement formal contract monitoring plans and recommended that HUD develop and disseminate a model comprehensive contract-monitoring plan for HUD-wide GTR use. In our review of active multifamily housing contracts, we found that although HUD reported to the Inspector General that it had implemented this recommendation, there was no evidence that such a model comprehensive contract-monitoring plan was in use by Multifamily Housing. In addition to its limited use of monitoring plans, HUD has not effectively incorporated a risk-based approach into its process for overseeing contractors.
In recent years, HUD has emphasized developing risk-based approaches to managing and monitoring its programs, including establishing a Risk Management Division within the Office of the Chief Financial Officer and developing a training program and desk guide to help staff understand and prioritize their monitoring responsibilities. However, we found little evidence that the concept of risk-based management is used in HUD’s oversight of its contracts. Our past work found that HUD’s efforts to perform risk-based monitoring and performance evaluations on its single family property disposition contractors met with limited results—some field offices did not perform required assessments, while others did not perform them as often as required. Our more recent work in HUD’s multifamily program found little evidence that the concept of risk management or risk-based monitoring has been applied to contract oversight. Acquisition workforce staff said they were unaware of any requirements to apply a risk-based methodology to their monitoring efforts, and we saw no evidence of any formal risk assessments in our review of the Multifamily program’s 43 active contract files. While staff indicated that a risk-based approach would be useful, they generally told us that monitoring is conducted based on the availability of travel funds and location of staff, or after a significant contractor performance problem has been identified. A key component of developing effective monitoring plans and incorporating risk-based approaches to monitoring is tracking past and current contractor performance; however, we found little evidence that HUD tracks contractor performance systematically. The HUD Procurement System (HPS) allows the acquisition workforce across HUD’s programs to track contract milestones and deliverable dates, as well as document and record contractor performance information—information that could aid in the contract monitoring process.
However, we found that these data fields are often not used: the scheduled deliverable date field was left blank 35 percent of the time, and the contractor performance field was incomplete for 73 percent of contracts that are inactive and closed. The Deputy Secretary directed that, effective January 2000, contractor products and performance be tracked in HPS, initially for all new contracts over $1 million. Contractor oversight problems in HUD’s multifamily housing programs are further compounded by the lack of clearly defined GTR and GTM roles. For example, in the Multifamily Property Disposition Centers, GTR and GTM roles and responsibilities are not defined consistently with HUD’s policy, possibly resulting in gaps in the monitoring process. HUD’s Procurement Policies and Procedures Handbook states that “a GTR or GTM may not provide any direction to the contractor in those areas of responsibility assigned to another GTR or GTM,” so as to avoid providing potentially conflicting guidance. However, the Multifamily Property Disposition Centers modified their contracts to replace the term “GTR” with “GTR/GTM” throughout. The effect of this change is that GTRs and GTMs have the same responsibilities, which is precisely the overlap the guidance sought to avoid because it allows conflicting instructions to be given to the contractors. The roles are further complicated by the Property Disposition Centers’ decision to name managers as GTMs on the two property management contracts, in one case assigning the Center Director as a GTM. HUD’s handbook states that the GTR is responsible for monitoring GTM activities. By designating managers as GTMs, HUD has created a situation in which the property management GTRs are essentially overseeing the work of their supervisors or of others in management positions above them in the reporting line. As noted above, HUD is attempting to improve its oversight and monitoring of contractors.
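Completeness statistics like the 35 and 73 percent figures above can be produced with a simple field-level tally. The sketch below is illustrative only; the records and field names are hypothetical and do not reflect HPS's actual schema:

```python
# Hypothetical contract records; None marks a data field left blank.
contracts = [
    {"id": "C-001", "deliverable_date": "2000-06-30", "performance": "satisfactory"},
    {"id": "C-002", "deliverable_date": None, "performance": None},
    {"id": "C-003", "deliverable_date": "2001-01-15", "performance": None},
    {"id": "C-004", "deliverable_date": None, "performance": "marginal"},
]

def blank_rate(records, field):
    """Return the share of records (in percent) whose field was left blank."""
    blanks = sum(1 for record in records if record[field] is None)
    return 100 * blanks / len(records)

print(blank_rate(contracts, "deliverable_date"))  # 50.0
print(blank_rate(contracts, "performance"))       # 50.0
```

A tally of this kind, run across all active and closed contracts, is one way an auditor could quantify how consistently the tracking fields are used.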
For example, Multifamily Housing implemented a structure in which four full-time GTRs provide oversight for procurement actions of more than $100,000 in the field. The Property Disposition Center has also developed GTR and GTM protocols for the various types of services it contracts out, in an attempt to more clearly define the roles and responsibilities of these positions. However, Multifamily’s GTR program is still in transition, and not all roles and responsibilities have been clearly delineated.
HUD’s Monitoring of Contractors Is Generally Remote
The monitoring that occurs in HUD’s multifamily housing program is generally remote. In our past work in other program areas, we have noted that without adequate on-site inspections, HUD could not be assured that it was receiving the services for which it had paid. The GTRs and GTMs in Multifamily Housing who are responsible for oversight of HUD’s property disposition activities report being unable to make regular visits. HUD’s oversight and monitoring of contractors consists mainly of reviews by HUD staff of progress reports and invoices prepared by the contractors, as well as email correspondence and telephone conversations between HUD staff and contractors. Site visits to multifamily properties to oversee contractor activities do not occur on a routine basis, particularly for the two largest multifamily housing program contracts, which together represent almost $650 million in obligations over 5 years for the management of HUD’s inventory of foreclosed multifamily properties. Although HUD’s multifamily property disposition handbook calls for GTRs or GTMs to conduct quarterly on-site physical inspections of the properties, the specific guidance related to that requirement has not yet been developed.
Site visits do not routinely occur largely because the properties in HUD’s property disposition inventory are located throughout the country, while the GTRs and GTMs responsible for the oversight of these properties are located in Atlanta, Georgia, and Ft. Worth, Texas, with the exception of one GTM located in New York City. A consistent theme among the GTRs and GTMs we interviewed was that, to do their jobs effectively, they should be conducting more on-site visits, but that they lack the time and resources to do so. Some noted that the failure to visit the properties stems from the workload: the GTMs are assigned multiple properties under the property management contracts, the properties are usually not in good condition and are located in different parts of the country, and it is difficult to keep up with everything that needs to be done. Constraints on travel funds were often cited as the reason for not making visits. The property disposition center staff are not the only multifamily housing staff experiencing difficulties making on-site visits to assess contractor performance. HUD also contracts with inspectors who monitor construction of HUD-assisted and insured multifamily projects throughout the phases of construction and during the 1-year warranty period after completion of construction. HUD’s construction inspection guidance requires that HUD employees make at least two site visits during construction to assess the performance of the construction inspectors. However, GTMs for these contracts also report being unable to make site visits because of other job responsibilities and because the projects are dispersed over a wide geographic area and HUD lacks the necessary travel funds. GTMs for these inspection contracts told us that they rely primarily on reviews of reports from the contractors to assess the contractors’ performance.
To address its inability to do more on-site monitoring, the multifamily housing program uses another inspection contractor to visit selected properties and perform property management reviews of some properties managed by the two property management contractors. However, according to data maintained by HUD, this inspection contractor made 31 visits to 26 properties for the Atlanta Property Disposition Center since 1997, even though the property management contractor managed over 100 properties during that period. The inspection contractor was not used at all by the Ft. Worth Property Disposition Center for the almost 150 properties in its inventory. We also found that in those cases where the inspection contractor identified a problem, HUD does not routinely follow up on those deficiencies to make certain the problems have been resolved. Instead, the multifamily housing program staff accepted correspondence from the property management contractors as evidence that deficiencies were resolved. According to staff, HUD does not routinely follow up because of limited resources. HUD will sometimes make a site visit to verify that problems have been resolved, but field staff told us that there normally are not enough travel funds to make a special follow-up visit.
Weaknesses in Monitoring Limit HUD’s Ability to Prevent Contractor Fraud, Waste, or Abuse
Weaknesses in HUD’s monitoring processes limit the department’s ability to identify and correct contractor performance problems, hold contractors accountable, and assure itself that it is receiving the services for which it pays. We have reported for several years that HUD has experienced similar difficulties monitoring the contractors that are responsible for managing and disposing of its foreclosed single-family properties.
In 1998, we reported that although HUD’s single-family guidance establishes various methods for monitoring the performance of its single-family real estate asset management contractors, such as conducting monthly on-site property inspections, these methods were not consistently used in a way that would assure HUD that contractors were meeting their contractual obligations. Without adequate on-site inspection, HUD could not be assured that it was receiving the services for which it had paid. We found similar conditions in May 2000 when we reviewed the new marketing and management contractors HUD hired to replace the earlier contractors, and in July 2001, we found that HUD’s oversight of these contractors remained inadequate. As recently as February 2002, in audits of HUD’s consolidated financial statements, the independent auditor identified HUD’s monitoring of its single-family property inventory as a significant internal control deficiency. The auditors recommended that HUD, among other things, (1) improve monitoring by enhancing comprehensive oversight tools and management reporting and (2) use risk-based strategies in the oversight process. In our October 2002 testimony before the House Government Reform and Operations Committee, Subcommittee on Governmental Efficiency, Financial Management and Intergovernmental Relations, we reported on improper payments identified during our review of the $214 million in disbursements made under HUD’s multifamily property management contracts. As we reported, one of HUD’s multifamily property management contractors bypassed HUD’s controls on numerous occasions by (1) alleging that construction renovations were emergencies, thus not requiring multiple bids or HUD pre-approval, and (2) splitting renovations into multiple projects to stay below the $50,000 threshold for required HUD approval.
Over 18 months, HUD authorized and paid for approximately $10 million of renovations at two properties, with each invoice for less than $50,000. HUD did not verify that any of the construction renovations were actually performed or determine whether the expenditures truly qualified as emergencies. As we testified, our review of these payments indicated that HUD paid 5 invoices totaling $227,500 for emergency replacement of 15,000 square feet of concrete in front of 5 buildings; however, we visited the site and determined that only about one-third of the work HUD paid for was actually performed. As a result, more than $164,000 of the $227,500 billed and paid for “emergency” installation of concrete sidewalk appeared to be improperly paid. As an example, figure 3 illustrates that only portions of the sidewalk (the lighter shaded sections) were replaced in front of one of the buildings, not the entire sidewalk listed on the paid invoices. At this same property, we found instances where HUD paid construction companies for certain apartment renovations, deemed “emergency repairs,” that were not made. Three of the 10 tenants we interviewed told us that some work listed on the invoices that the property management firm submitted was not performed at their homes. For instance, while one invoice indicated that the apartment floor and closet doors had been replaced at a cost of $10,400, the tenant stated that the floors and doors were never replaced. On several other occasions, HUD paid the same amount to perform “emergency renovations” of apartments of varying sizes. For example, HUD paid three identical $32,100 invoices for the emergency renovation of a one-bedroom (600 square feet), a two-bedroom (800 square feet), and a three-bedroom (1,000 square feet) apartment. All three invoices listed identical work performed in each unit.
For example, each invoice listed a $4,500 cabinet fee, yet the one-bedroom unit had five fewer cabinets than the three-bedroom unit. We, and the independent construction firm we hired, questioned the validity of the same charge for units of varying sizes and the likelihood of numerous apartments being in identical condition and in need of the same extensive renovations. These cases are now being investigated by the HUD Inspector General and our Office of Special Investigations. The potential for these and other types of problems would be reduced with improved monitoring and oversight.

HUD Does Not Strategically Manage Its Acquisition Workforce

Holding contractors accountable requires the appropriate number of people in the right positions with the right skills and training. HUD does not strategically manage its acquisition workforce to ensure that individuals have the appropriate workload, skills, and training that allow them to effectively perform their jobs. Specifically, HUD has not yet addressed workload issues, assessed the skills and capabilities of its acquisition workforce, or provided required training to substantial numbers of its acquisition workforce.

Workload Issues Not Yet Addressed and Skills and Capabilities Not Yet Assessed

Although HUD identified workload disparities, the department has not yet determined the appropriate workload allocation for its acquisition workforce. To assist in the department’s efforts to address human capital issues resulting from HUD’s diminishing staffing levels, HUD undertook a Resource Estimation and Allocation Project (REAP) to determine current workload levels agencywide. The resulting study determined that serious staffing shortages exist within OCPO and recommended an additional 31 full-time equivalent positions for OCPO in headquarters and no change for the field.
The study recommended that headquarters staffing be increased from 54 full-time equivalent staff to 85 and that field staffing remain at 68 full-time equivalent staff. The study observed that the OCPO in headquarters is “an organization in crisis” and that the majority of supervisors and contract specialists reported working a very high number of uncompensated hours. HUD has taken steps to shift workload to address some disparities, but it has not yet used the study results to determine the appropriate allocation and workload levels for its acquisition workforce. OCPO has shifted some activities to the field contracting operations, such as closing out contracts, and has assigned field staff to details in headquarters to help address workload distribution issues and keep field staff fully occupied. Our survey results and other work also show that acquisition staff across HUD perceive that they have too much work to do. According to our survey, 55 percent of respondents overall said that their contracting workload has increased over the past 2 years. Further, 31 percent of HUD’s acquisition workforce who manage and monitor more than five contracts believe that the number of contracts they monitor is “too many.” Finally, 18 percent of HUD’s acquisition workforce reported that they spend “too little” time on their contracting-related responsibilities. Although HUD has taken steps to identify the knowledge, skills, and abilities needed by its acquisition workforce, HUD has not assessed the skills and capabilities of that workforce, a critical step in successful workforce management. We have identified the development of a comprehensive strategic workforce plan, one that analyzes both the knowledge, skills, and abilities staff need to do their work and the capabilities staff actually possess, as a crucial part of a strategic human capital management approach.
HUD has taken some steps toward that goal by drafting an Acquisition Career Management Plan that discusses the knowledge, skills, and abilities needed by staff; however, HUD has not yet specifically assessed the skills and capabilities of its acquisition workforce. Consequently, HUD is not as prepared as it could be to address human capital challenges, such as skill gaps, within its acquisition workforce. Further, HUD management’s ability to make informed decisions, such as those about recruiting and hiring as well as planning for training, is hampered.

Many of HUD’s Acquisition Workforce Not Receiving Required Training

Over half of HUD’s GTRs—who are directly responsible for monitoring contractors—may not have received acquisition training required by the Clinger-Cohen Act and OFPP. In response to the Clinger-Cohen Act of 1996 and OFPP policies that require specific training for GTRs, HUD developed and implemented a GTR training curriculum in 1998. During our review, we identified 251 individuals serving as full- or part-time GTRs on contracts; however, according to HUD’s training records, 143 of the individuals currently serving as GTRs on contracts have not taken HUD’s required GTR training. OCPO management stated that they were not aware that these individuals were serving in this capacity. HUD’s acquisition workforce also includes about 495 individuals serving as GTMs; according to HUD’s training records, only 7 percent of these individuals—35 out of 495—have received specialized acquisition training. Although the Clinger-Cohen Act and OFPP policies do not establish specific training requirements for GTMs and HUD does not explicitly require that GTMs receive acquisition training, HUD documents indicate that providing acquisition training to GTMs is necessary and is part of OCPO’s intent.
Specifically, in discussing the roles and responsibilities of GTMs, the department’s procurement handbook states, “many of the duties of the GTR can be delegated to GTMs.” Further, HUD’s draft Acquisition Career Management Plan indicates that the plan is intended to apply to GTMs—it states “the term GTR shall include GTM.” However, according to OCPO managers, HUD is not currently requiring GTMs to fulfill any acquisition training requirements. HUD also does not accurately track the training of some of its acquisition workforce and has not finalized its acquisition workforce career management planning as required by OFPP. According to HUD’s centralized training records maintained by OCPO, 89 percent of HUD’s contracting officers, contract specialists, procurement analysts, and purchasing agents do not meet federal training requirements. In response to our observations, the OCPO Director of Policy and Field Operations said that while it is likely that some of these individuals do not meet the training requirements, it is probable that many of them have met the requirements. The director offered several reasons: the centralized information system maintained by OCPO has not been updated, partly because HUD is waiting for completion of a new governmentwide system that will track such information; and the training requirements were mandated after some staff had been in acquisition positions for a number of years, and these staff have not taken the training because they already possess the necessary skills. As a result of our review, HUD will institute a training waiver to capture this scenario. Also, HUD has not finalized its draft Acquisition Career Management Plan, which specifies career paths and mandatory training for acquisition positions and shows how HUD’s training courses correlate with those required by OFPP. This plan has been in draft form since June 2000.
Further, the draft plan does not meet OFPP requirements because it does not specify training requirements for purchasing agents. As a result of our review, HUD officials told us they intend to revise their draft plan to reflect OFPP requirements.

Weaknesses in Programmatic and Financial Management Information Systems

Holding contractors accountable requires tools and information to ensure that HUD staff can monitor contracts and that HUD management can oversee departmentwide contracting activities. HUD’s centralized contract management information system and several financial management information systems lack complete, consistent, and accurate information—thus, these systems do not adequately support the department’s efforts to manage and monitor contracts. For example, the centralized contracting system does not contain reliable information on the number of active contracts, the expected cost of the contracts, or the types of goods and services acquired. To compensate for the lack of information, HUD staff have developed informal spreadsheet systems to fulfill their job responsibilities. The systems’ deficiencies also mean that HUD managers lack reliable information needed to oversee contracting activities, make informed decisions about the use of resources, and ensure accountability in the department’s programs.

HUD’s Centralized Contracting Information System Does Not Provide Reliable Information on Contract Activities

To improve its ability to manage and oversee contracts, HUD implemented a contracting management information system—the HUD Procurement System (HPS)—to track and manage both field and headquarters contracts.
HUD uses HPS to (1) monitor workload levels of contracting officers and contract specialists; (2) track events throughout the life of a contract—such as the award, obligation of funds, contract modifications, milestones, contractor performance, and closeout; (3) identify outstanding procurement requests; and (4) report to the Federal Procurement Data Center (FPDC) to comply with federal reporting requirements so that the Office of Management and Budget (OMB) and the General Services Administration (GSA) can manage contracting governmentwide—for example, by establishing contracting goals for federal agencies. In addition, a significant number of HUD’s acquisition workforce, such as contract specialists and GTRs, also use HPS to manage and monitor contracts. However, the data in HPS are not reliable—that is, the data are not consistent, complete, or accurate. We found the following:

- Over a quarter of the contracts shown as currently active had dates in a completion date field, which would indicate that the contract had expired, making it difficult for HUD to identify the active contracts it is managing. For example, when we asked for a list of active multifamily contracts, HUD had to call various field offices and GTRs to compile the complete list.

- HPS showed that for 4 percent of HUD’s active contracts, HUD has obligated a total of $197 million more than the stated total value of the contracts because HPS contains errors in the contract value fields. Because HPS is a programmatic information system, this discrepancy does not necessarily mean that HUD has spent or will spend more than planned for the contracts, but it indicates that HUD does not readily know the correct obligated amounts or total value of its contracts.

- The types of goods or services HUD contracts for are not readily apparent because HPS contains three separate data fields to capture the type of good or service being provided, and none of them is used in a way that provides a picture of the goods and services HUD purchases. One field is used only by field office staff; another contains narrative descriptions of services, but no standard terminology is specified; and the third field uses governmentwide codes for external reporting rather than HUD-specific codes. (See app. III for a more detailed illustration of discrepancies identified in HPS.)

According to HUD officials, the inconsistencies in HPS are due to data entry problems, misunderstandings among staff about what data to record and how to record it, and limited verification procedures. For example, staff inconsistently record data on multiyear contracts with “base” and “option” years. HUD currently has limited verification procedures in place to ensure that HPS data are reliable. According to the HPS administrator, OCPO staff are not required to routinely verify the accuracy of the data they are responsible for maintaining in HPS.

HUD’s Financial Management Information Systems Do Not Readily Provide Contracting Obligation and Expenditure Data

HUD’s program offices also record contracting obligation and expenditure information in various financial management information systems. However, these systems do not readily provide consistent and complete information for either HUD’s overall contracting activity or individual contracts. Concerns about the effectiveness of HUD’s programmatic and financial management information systems are not new. Since 1984, we have reported that HUD lacks the programmatic and financial management information systems necessary to ensure accountability over its programs.
The lack of readily available, consistent, and complete contracting information is one example of these concerns with HUD’s programmatic and financial management information systems. To obtain aggregate information on HUD’s contract obligations and expenditures, HUD managers must manually query several financial management systems. However, according to a HUD official, these ad hoc queries are only useful in identifying transactions that “look like” contracts. The queries do not reliably produce obligation and expenditure data on all of HUD’s contracting activity, and they also include obligation and expenditure data for activities other than contracts. After attempting to obtain data for us over a period of about 5 months, HUD was able to provide only partial data. HUD officials provided multiple reasons for this, including that several of HUD’s financial management information systems do not track obligation data and that HUD does not have ready access to some FHA data for fiscal years 1998 and 1999 because FHA no longer uses the systems. As a result, HUD’s different information systems provide widely different pictures of HUD’s contracting activity. Specifically, as shown in table 1, the aggregate obligation data from HUD’s financial management systems were not consistent with the data HUD reported from its centralized contracting management information system, HPS (discussed earlier). (See app. IV for a listing and brief description of the various financial systems that maintain contracting information.) After over 5 months of working on our request, HUD was also unable to provide us with obligation and expenditure data on 33 of 115 individual contracts, and what it could provide was often not consistent with data maintained in HPS. We requested data on two groups of HUD contracts: for one, we judgmentally selected 66 active contracts from all HUD program offices; for the second, we used all 49 active multifamily contracts with an obligated value over $100,000.
HUD staff cited several reasons why they could not identify data on specific contracts, including the fact that HUD tracks some obligation and expenditure information using the contractor’s Tax Identification Number. As a result, when HUD has multiple contracts with one contractor, it often cannot separate obligations and expenditures by individual contract. Of the 82 contracts for which HUD was able to provide information on contract obligations and expenditures, the obligation information in the financial management systems was consistent with HPS for only 37. Some of the inconsistencies included cases where the amount shown in the financial systems as spent on a contract exceeded the amount shown in HPS as obligated for that contract. In the HUD-wide group, for example, the expenditure information in the financial management systems exceeded the obligation amount shown in HPS for 13 of the contracts, indicating that HUD paid a total of $59 million more than HPS recorded as obligated. For the multifamily contracts, 3 of the 49 contracts had obligated amounts in the financial management systems that exceeded those shown in HPS, with a total difference of $1.4 million.

System Limitations Impede Efforts of HUD’s Acquisition Workforce and HUD Management

As a result of the systems’ limitations, HUD’s acquisition workforce does not have readily available basic information about the contracts for which it is responsible. This is particularly significant because, as previously discussed, HUD relies extensively on remote monitoring strategies, which would be most effective with readily available and reliable contract information. In the absence of such data, members of HUD’s acquisition workforce have developed informal or “cuff” systems—personal spreadsheets to track, manage, and monitor contracts.
While these informal systems help staff perform their jobs, they are not subject to HUD’s policies, procedures, or internal controls for ensuring that the information maintained in them—and used by HUD’s acquisition workforce to manage and monitor individual contracts—is accurate. Further, the use of informal spreadsheets indicates that duplicate data collection efforts may be occurring (e.g., some data maintained in the spreadsheets are identical to data maintained in HPS), which in an environment of decreasing resources and increasing workload is not an efficient use of resources. Because the spreadsheets are maintained and used by individuals, the information in them is not readily accessible to HUD management to support its oversight responsibilities. Finally, because HPS data are not reliable and the accuracy of the data maintained in the personal spreadsheets is not known, HUD does not have a dependable “early warning system” to alert staff to contracts with high-risk characteristics. As a result, HUD’s ability to ensure that its contract resources are protected from waste, fraud, abuse, and mismanagement is reduced. HUD’s programmatic and financial management information systems also do not provide managers with the accurate and timely information needed to effectively manage and monitor the department’s programs. HUD cannot readily obtain complete aggregate contracting obligation and expenditure information from the department’s financial systems to oversee the agency’s activities, make informed decisions about the best use of HUD’s resources, and ensure accountability on an ongoing basis. Because HPS does not contain reliable data, HUD management cannot readily obtain accurate information on HUD’s contracting activity to report contracting information, assist in making management decisions, and ensure the proper stewardship of public resources.
Without reliable data on the number of active contracts, management cannot accurately analyze HPS for trends that would assist in assessing and realigning staff workload or in making decisions about which activities to contract out or retain. Finally, because the department uses HPS to report acquisition data to FPDC to comply with federal contract activity reporting requirements, HUD’s submissions to FPDC are inaccurate.

Conclusions

Ensuring that HUD’s mission is accomplished and its contractors are held accountable requires (1) processes and practices that effectively monitor contractors’ performance; (2) an acquisition workforce with the right workload, training, and tools to carry out its mission; and (3) effective programmatic and financial management information systems. HUD has already taken steps toward improving its acquisition management; however, weaknesses remain in HUD’s monitoring processes, management of its acquisition workforce, and the programmatic and financial management systems that support its contracting. Many of the tools that would help improve how HUD monitors its contractors already exist, either in plans and strategies HUD has already developed or in OFPP guidance. Using these tools and employing a systematic, risk-based approach to contractor oversight would allow HUD to target its scarce resources to the areas posing the greatest risk and to identify potential problems, such as those we have identified in this report, before they become more serious. In large measure, the challenges HUD faces in relation to its acquisition workforce and contracting information systems are symptomatic of the larger challenges the department faces to strategically manage its human capital and to improve its programmatic and financial management systems. Both are complex, long-standing management challenges that we have identified in our high-risk work and that will need to be addressed on a departmentwide basis over a period of many years.
Nevertheless, to improve its management of acquisitions, HUD can take shorter term, more immediate actions to maximize the effectiveness of the department’s acquisition workforce by completing existing career planning and training activities. It could also enhance the information and tools available to that workforce by improving the accuracy and utility of its centralized contracting management information system.

Recommendations for Executive Action

To address the weaknesses we identified, we recommend that the Secretary of HUD take the following actions:

- Implement a more systematic approach to HUD contract oversight that (1) uses monitoring/contract administration plans; (2) uses a risk-based approach for monitoring to assist in identifying those areas where HUD has the greatest vulnerabilities to fraud, waste, abuse, and mismanagement; and (3) tracks contractor performance.

- Clarify the roles and responsibilities of the multifamily housing GTRs and GTMs, including the need to (1) clearly define reporting lines and (2) reduce overlap of responsibilities consistent with HUD guidance.

- Improve management of HUD’s acquisition workforce by (1) addressing workload disparities, (2) finalizing and implementing the Acquisition Career Management Plan, (3) assessing the skills and capabilities of the existing acquisition workforce, and (4) ensuring that appropriate training is provided to staff with contract oversight responsibilities and that staff meet federal training requirements.

- Improve the usefulness of HUD’s centralized contracting management information system by (1) providing training to staff on the definitions of the data intended to be captured; (2) providing training to program office staff on the system’s functions, such as tracking milestones, deliverables, and contractor performance; and (3) developing and implementing verification procedures.

Agency Comments

We provided a draft of this report to HUD for its review and comment.
HUD agreed that it faces significant long-standing challenges in monitoring the performance of its contractors, managing its acquisition workforce, and addressing weaknesses in its information systems. HUD stated that it is taking actions to address our recommendations. For example, according to HUD, the department is requiring each program organization to review its contracting oversight and monitoring policies and procedures to ensure that they are clear, consistent, and risk based. To strengthen oversight of the multifamily program’s property management contractors, HUD stated that when the multifamily property management contracts are renewed and awarded again in 2003, it will strengthen the oversight requirements. For example, the contractors will be required to provide a quality control plan to, among other things, monitor the work assignments of employees and subcontractors. The department also stated that it is taking action to improve management of its acquisition workforce and address weaknesses in its information systems. HUD stated that it expects to finalize its Acquisition Career Management Plan during 2003 and is clarifying the roles, responsibilities, and reporting lines of GTRs and GTMs in the multifamily program. For example, HUD said that it would ensure that staff are not overseeing the work of a supervisor or management personnel. HUD also said that it agreed with our findings and recommendations concerning its centralized contracting management information system and would implement our recommendations to improve its usefulness by revising its training to provide better definitions of the data to be captured and more emphasis on the system’s functions.
While HUD agreed with our recommendations, it said it believes that its acquisition workforce is receiving required training because (1) it has developed acquisition training for GTRs in accordance with federal requirements and (2) GTMs do not require the same level of training as GTRs and are provided acquisition training appropriate to their duties when needed. We recognize that the Clinger-Cohen Act does not establish a specific training curriculum for GTRs; however, the act requires executive agencies, through consultations with OFPP, to establish training requirements for positions in their acquisition workforces, and HUD has defined its acquisition workforce to include both GTRs and GTMs. We agree that the department has developed an acquisition training program for GTRs in response to federal requirements. However, we found that a significant portion of the department’s GTRs have not had this training, and HUD did not disagree with our finding. Furthermore, while we agree that GTMs may not require the same level of training as GTRs, HUD policies permit the duties and responsibilities of GTRs to be delegated to GTMs, and its draft Acquisition Career Management Plan—which establishes training requirements for HUD’s acquisition workforce—states that the GTR training requirements also apply to GTMs. Therefore, we remain concerned that, according to HUD’s records, 93 percent of HUD’s GTMs have not received any specialized acquisition training, and we did not change the report in response to this comment. We are, however, encouraged by HUD’s comment that it will continue to assess the training needed for GTMs to more effectively monitor contractor performance. HUD also provided technical comments, which we have incorporated as appropriate. The full text of HUD’s comments and our response appear in appendix V. We are sending copies of this report to other interested congressional committees and the Secretary of Housing and Urban Development.
We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report or need additional information, please contact me or Steve Cohen at 202-512-2834. Major contributors to this report are listed in appendix VI.

Appendix I: Scope and Methodology

To determine whether HUD has the processes in place to facilitate managing and monitoring its contractors, and the extent to which HUD monitors its contractors to ensure that they are held accountable for results, we focused on HUD’s policies, procedures, and data related to activities after a contract is awarded to a successful bidder. We first reviewed HUD’s guidance for managing and monitoring its contracts, including HUD-wide guidance as well as that for the Office of Multifamily Housing. We also interviewed HUD officials at the Office of the Chief Procurement Officer, Office of Administration, Office of Housing, Office of Public and Indian Housing, Community Planning and Development, and Fair Housing and Equal Opportunity to obtain information on their oversight and management of contracts. We also interviewed officials at HUD’s Atlanta, Philadelphia, Denver, and Ft. Worth Field Contracting Operations (FCO) to obtain an understanding of their oversight of the contract process and their relationship with the multifamily program operations in their jurisdictions. To identify potentially improper payments made by HUD, we used computer analysis and data mining techniques to identify unusual transactions and payment patterns in the department’s fiscal year 2001 disbursement data. We focused our review on the $214 million of payments made for goods and services at HUD’s multifamily properties during fiscal year 2001.
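One pattern discussed in this report, clusters of invoices priced just under the $50,000 approval threshold for the same contractor and property, lends itself to the kind of data mining described above. The sketch below is a minimal illustration only: the record layout, vendor names, and amounts are hypothetical, and the actual analysis of HUD's disbursement data involved more extensive queries.

```python
from collections import defaultdict

# Hypothetical disbursement records: (vendor, property_id, amount).
# HUD approval is required at or above this threshold.
THRESHOLD = 50_000

def flag_possible_splitting(payments, threshold=THRESHOLD, min_count=3):
    """Flag vendor/property pairs with several invoices just under the
    approval threshold -- a pattern consistent with project splitting."""
    groups = defaultdict(list)
    for vendor, prop, amount in payments:
        groups[(vendor, prop)].append(amount)
    flagged = {}
    for key, amounts in groups.items():
        # "Just under" here means within 20 percent of the threshold.
        near_threshold = [a for a in amounts if 0.8 * threshold <= a < threshold]
        if len(near_threshold) >= min_count:
            flagged[key] = sum(near_threshold)
    return flagged

payments = [
    ("Acme Construction", "P-101", 45_500),
    ("Acme Construction", "P-101", 45_500),
    ("Acme Construction", "P-101", 45_500),
    ("Acme Construction", "P-101", 45_500),
    ("Acme Construction", "P-101", 45_500),  # five identical sub-threshold invoices
    ("Beta Plumbing", "P-202", 12_000),
]
print(flag_possible_splitting(payments))
```

Run on these illustrative records, the check flags the Acme Construction/P-101 pair, whose five sub-threshold invoices total $227,500, while the single small Beta Plumbing payment is not flagged.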
To obtain a more detailed understanding of the practices associated with monitoring and oversight of contracts, we selected contracts in the Office of Multifamily Housing to review. We selected multifamily housing contracts because of (1) the contracting dollars associated with multifamily housing activity; (2) the value of the inventory, insurance, and grants associated with HUD’s multifamily programs; and (3) our recently completed reviews addressing acquisition issues in HUD’s single-family housing program and its systems acquisition and development efforts. The Office of Multifamily Housing contracts for services associated with HUD’s multifamily insurance program, as well as for management of those properties that HUD acquires when owners default on insured mortgages. We reviewed the GTR files for 43 of the 49 active multifamily housing program contracts over $100,000 and then conducted structured interviews with the 17 GTRs responsible for administering these contracts. These contracts included both headquarters- and field-administered contracts. For the 43 GTR files, we obtained and reviewed documentation that described the extent to which the GTRs are able to monitor HUD contractors. The structured interviews focused on (1) the processes HUD has in place to facilitate managing and monitoring its contractors, (2) the extent to which the GTRs monitor the contractors, (3) the types of information and data systems used by the GTRs to help them manage and monitor the contractors, and (4) the types of contracting-related training the GTRs have received. We also interviewed selected government technical monitors. To supplement our work in the multifamily program and obtain information from a cross-section of HUD’s acquisition workforce, we also conducted a telephone survey to gather information on HUD’s contracting activities.
Our objectives were to obtain information on workload, the availability and perceived usefulness of training, the extent to which programmatic and financial management information systems support contract oversight, and oversight methods. We completed 185 interviews with randomly selected employees who were currently working in acquisition positions. In the survey, we asked questions about the training HUD provides its acquisition workforce, the acquisition workforce’s opinions of the data systems they use, and HUD’s monitoring of contractor performance. (See app. II for the survey scope and methodology.) To assess HUD’s management of its acquisition workforce and its compliance with federal procurement requirements and policies, we reviewed related federal laws and policies, including the Clinger-Cohen Act and OFPP Policy Letters 92-3 and 97-01. Further, we asked HUD for the department’s definition of its acquisition workforce and asked the department to identify the names of staff meeting that definition. HUD defined its acquisition workforce to include those staff serving as GTMs. Since HUD did not centrally maintain a listing of staff currently serving as GTRs and GTMs in the program offices, we contacted each program office and requested that it identify staff currently serving in that capacity. Using the definition of acquisition workforce that HUD provided and the lists of staff provided by the program offices, we determined that HUD’s acquisition workforce totals 833 people. Additionally, we requested data on HUD’s training records for contracting officers, contract specialists, purchasing agents, and GTRs; HUD provided a spreadsheet that contained summary information. We then compared HUD’s training records to OFPP training requirements to determine whether HUD’s acquisition workforce is meeting federal training requirements. We also reviewed HUD’s training requirements and policies, as well as the GTR training manual.
To obtain information on the adequacy of the data systems that support HUD's acquisition workforce, we obtained and analyzed data from several of HUD's programmatic and financial management information systems. We obtained data from HUD's centralized system, the HUD Procurement System (HPS). Our analysis of HPS was twofold. First, we performed reliability assessments on several data fields for all contracts that were identified as "active/awarded," that is, currently active contracts. The data fields we analyzed were as follows: contract value, obligated amount, contract status, completion date, last completion date, and type of service. Second, we identified two groups of active contracts and purchase orders and downloaded from HPS the total dollar amounts obligated for these procurements. The first group was a judgmentally selected sample of 66 contracts and purchase orders from various HUD program offices. The second group was all active contracts in HUD's multifamily housing program with an obligated value over $100,000 as of March 2002. We also obtained obligation and expenditure data from numerous HUD financial management systems for these contracts and purchase orders. (See app. IV for a description of these systems.) We then compared HPS data on those contracts and purchase orders to the data maintained in HUD's financial management information systems. We requested aggregate data on HUD's contract obligations and expenditures. We also obtained data from the Federal Procurement Data Center (FPDC), to which HUD reports detailed information on its contracting activity over $25,000 in accordance with federal requirements.
Appendix II: Sampling Methodology for GAO Survey of HUD's Acquisition Management Reforms

Objectives

Our primary objectives in the survey were to (1) assess the acquisition workforce workload, (2) assess the availability of training and the perceived usefulness of the training that the staff receive, (3) determine the extent to which HUD's data systems are used to support its contract management and monitoring, and (4) determine the ways in which the acquisition workforce monitor HUD's contractors.

Scope and Methodology

To attain our objectives, we surveyed a statistically representative sample of HUD's acquisition workforce. We developed and administered a survey designed to estimate characteristics of HUD's acquisition workforce relating to contract monitoring, training and workload issues, and data systems. The survey was administered from April to June 2002 by trained GAO employees to a stratified sample of 250 HUD acquisition workforce employees through telephone interviews that were entered into a computer-assisted data collection instrument. Our work was conducted in accordance with generally accepted government auditing standards.

Study Population

The study population for this survey consisted of 833 employees in HUD's acquisition workforce as of March 2002. We developed a list of acquisition workforce employees based on data from two sources. The first source was the training records compiled by HUD's Office of the Chief Procurement Officer (OCPO), the office responsible for providing the training required for acquisition workforce employees. In further audit work with HUD program offices that have contracts, we discovered that this list was not complete. We then supplemented the OCPO list by contacting 15 program offices to identify the number of acquisition workforce staff in each office and the positions that those individuals hold.
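The single-stage stratified design described above can be sketched in a few lines of code. The position categories and stratum sizes below are hypothetical illustrations (the report summarizes the actual strata in table 2); only the population total of 833 and the sample size of 250 come from the text.

```python
import random

# Hypothetical strata (job position categories) with population counts
# summing to the 833-person acquisition workforce described above.
population = {
    "contracting officer": 90,
    "contract specialist": 120,
    "purchasing agent": 60,
    "GTR": 300,
    "GTM": 263,
}
TOTAL_SAMPLE = 250

random.seed(2002)  # make the draw reproducible

sample = {}
for stratum, size in population.items():
    # Proportional allocation of the 250-person sample across strata
    n = round(TOTAL_SAMPLE * size / sum(population.values()))
    # Simple random sample (without replacement) within the stratum
    sample[stratum] = random.sample(range(size), n)

print({stratum: len(ids) for stratum, ids in sample.items()})
```

Proportional allocation is only one possible design; a survey could instead oversample the smaller strata to guarantee enough completed interviews in each.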
When we administered the survey, we found that not all of the employees we identified in our study population were in HUD's acquisition workforce at the time of contact. Respondents who were no longer members of the acquisition workforce were "out of scope" and were excluded from the analysis.

Sample Design

The sample design for this study is a single-stage stratified sample of acquisition workforce employees in the study population. The strata were based on reported job position categories, drawn from the list we compiled from program office records. Of the total sample of 250, we received 185 completed responses from employees who were in the acquisition workforce at the time of the survey (in scope). We obtained sufficient information for an additional 28 employees to determine that they were not in the acquisition workforce at the time of the survey (out of scope). The remaining 37 cases could not be contacted or refused to participate. These results are summarized by sampling stratum in table 2. Overall, we obtained a response rate of 85 percent; all estimates are based only on the in-scope respondents (those in the acquisition workforce).

Estimates

All estimates produced in this report are for a target population defined as HUD's acquisition workforce during the study period. Estimates were determined by weighting the survey responses to account for the effective sampling rate in each stratum. The weights reflect both the initial sampling rate and the response rate for each stratum.

Sampling Error

Because we surveyed a sample of HUD's acquisition workforce, our results are estimates of actual acquisition workforce characteristics and thus are subject to sampling errors that are associated with samples of this size and type. Our confidence in the precision of the results from this sample is expressed in 95 percent confidence intervals.
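The weighting and confidence-interval machinery just described can be illustrated with a short sketch. All per-stratum figures here are hypothetical (the report does not publish stratum-level results); the sketch only shows the mechanics of a stratified estimate of a proportion with its 95 percent confidence half-width.

```python
import math

# Hypothetical strata: N = stratum population size, n = in-scope completed
# interviews (these n values sum to the 185 completed responses), and
# p = proportion of the stratum's respondents answering "yes".
strata = [
    {"N": 90,  "n": 27, "p": 0.60},
    {"N": 120, "n": 36, "p": 0.55},
    {"N": 60,  "n": 18, "p": 0.50},
    {"N": 300, "n": 60, "p": 0.52},
    {"N": 263, "n": 44, "p": 0.56},
]
N_total = sum(s["N"] for s in strata)

# Stratified estimate: each stratum is weighted by its population share,
# which folds in both the initial sampling rate and the response rate.
p_hat = sum(s["N"] / N_total * s["p"] for s in strata)

# Variance of a stratified proportion, with finite population correction.
var = sum(
    (s["N"] / N_total) ** 2
    * (1 - s["n"] / s["N"])              # finite population correction
    * s["p"] * (1 - s["p"]) / (s["n"] - 1)
    for s in strata
)
half_width = 1.96 * math.sqrt(var)       # 95 percent half-width

print(f"estimate {p_hat:.1%} +/- {half_width:.1%}")
```

With these made-up inputs the half-width comes out well under the 10-percentage-point ceiling the report quotes; real half-widths depend on the actual stratum sizes and response rates.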
The 95 percent confidence intervals are expected to include the actual results for 95 percent of the samples of this type. We calculated confidence intervals for our study results using methods that are appropriate for a stratified probability sample. For the percentages presented in this report, we are 95 percent confident that the results we would have obtained if we had studied the entire study population are within +/- 10 or fewer percentage points of our results, unless otherwise noted. For example, our survey estimates that 54.5 percent of the acquisition workforce believes that their contracting workload has increased over the past 2 years. The 95 percent confidence interval for this estimate would be no wider than +/- 10 percentage points, or from 44.5 percent to 64.5 percent.

Nonsampling Error

In addition to these sampling errors, the practical difficulties in conducting surveys of this type may introduce other types of errors, commonly referred to as nonsampling errors. For example, questions may be misinterpreted, the respondents' answers may differ from those of people who did not respond, or errors could be made in keying questionnaires. We took several steps to reduce such errors. Data were collected using computer-assisted telephone interviewing, and the GAO staff who administered the survey over the telephone attended a training session to familiarize them with the use of the computer-assisted survey instrument. In addition, computer analyses were performed to identify inconsistencies and other indicators of errors, and a second independent analyst reviewed all computer programs.

Survey Development

We identified areas to cover in the survey based on the congressional request and initial interviews with top-level HUD managers and staff. The survey was pretested at HUD headquarters through a simulated telephone interview.
A GAO analyst administered the survey to two members of HUD's acquisition workforce over the telephone from an off-site location, while another GAO analyst observed the HUD employee on-site. The HUD employees were debriefed after the pretest, and the audit team was able to make appropriate changes to the questionnaire prior to implementation. The final survey contained 165 questions.

Survey Administration

A team of 30 GAO staff conducted the survey in April, May, and early June of 2002, through a computer-assisted telephone survey, the results of which were entered simultaneously in our computer-assisted data collection instrument. We called all initial nonrespondents at least three times in order to encourage a high response rate. We performed our work between February and June 2002 in accordance with generally accepted government auditing standards.

Appendix III: Analysis of the HUD Procurement System (HPS) Identified Discrepancies in HUD's Contracting Data

Examples

A portion of HUD's active contracts contain a total obligated amount that is greater than the contract total value amount. The two relevant data fields in HPS are "total value amount" and "total obligated amount." The total value amount data field represents the total value of the contract. The total obligated amount represents the total amount of funds that have been obligated to date. According to the HPS administrator, the figure in the total value amount field should be greater than or equal to the figure in the total obligated amount data field, because funds should not be obligated in excess of the value of the contract. According to HPS, 4 percent of HUD's active contracts contain a total obligated amount that is greater than the overall contract total value amount, which means that, according to HPS, HUD has obligated $197 million in excess of the contracts' value.
About a quarter of HUD’s active IDIQ contracts contain inconsistent data within three HPS data fields; according to HPS, HUD has obligated about $14 million in excess of the maximum associated with the contract. The three relevant data fields in HPS are “total value amount,” “total obligated amount,” and “IDIQ Max.” The total value amount data field represents the total value of the contract. The total obligated amount represents the total amount of funds that have been obligated to date. The IDIQ Max identifies the maximum dollar amount for the contract; specifically, according to the HPS administrator, the IDIQ Max represents a ceiling and therefore the contract total value amount and total obligated amount should not exceed the IDIQ Max. According to the HPS administrator, the figure in the IDIQ maximum field should be greater than or equal to the total contract value and the total obligated amount data fields. 24 percent of active awarded IDIQ contracts have a total obligated amount that exceeds the IDIQ maximum. 28 percent of active awarded IDIQ contracts have a total value amount that exceeds the IDIQ maximum. About a quarter of all active awarded contracts contain inconsistent contract status and date information, and some date fields are blank. The three relevant data fields in HPS are “status,” “completion date,” and “last completion date.” The status represents the current status of the contract (e.g., pre-award, active: awarded, and inactive: closed-out). The completion date tracks the completion date by which the contractor will deliver product(s) and /or service(s). The last completion date indicates the very end of the contract, including option periods. According to the HPS administrator All of these data fields should be populated. If a contract has a status of active: awarded, then the last completion date should be in the future. The completion date should be prior to or equal to the last completion date. 
For 27 percent of contracts identified as active: awarded, the last completion dates are in the past. For 21 percent of the contracts identified as active: awarded, the completion date is after the last completion date. For 17 percent of the contracts identified as active: awarded, the last completion date data field is blank. The data do not readily lend themselves to analysis because they are not complete or consistent. HPS contains three separate data fields to capture the type of good or service being provided by the contractor. The three relevant data fields in HPS are the "type of service," "deliverable description," and "product service code." The type of service field is populated by HUD field staff when they select the appropriate description from a drop-down menu. The deliverable description data field is populated by all HUD staff when they are entering information on individual deliverables. The product service code field is populated by all HUD staff; the codes are based on the Federal Procurement Data System's (FPDS) codebook. According to the HPS administrator, HUD conducts analysis on the type of service and deliverable description data fields. The type of service data field is blank for about a third of the active awarded contracts because only field staff are required to complete this field. The deliverable description data field contains narrative entries generated by HUD staff who do not use standard terminology to describe the service being purchased.
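The consistency rules the HPS administrator described (obligations should not exceed contract value, neither value nor obligations should exceed an IDIQ ceiling, and the date fields should be populated and ordered) amount to simple record-level edit checks. A sketch follows, using hypothetical contract records whose field names mirror the HPS fields discussed above:

```python
from datetime import date

# Hypothetical records; field names mirror the HPS fields described above.
contracts = [
    {"id": "C-001", "status": "active: awarded", "total_value": 500_000,
     "total_obligated": 620_000, "idiq_max": None,
     "completion": date(2002, 9, 30), "last_completion": date(2003, 9, 30)},
    {"id": "C-002", "status": "active: awarded", "total_value": 900_000,
     "total_obligated": 850_000, "idiq_max": 800_000,
     "completion": date(2003, 1, 15), "last_completion": date(2001, 1, 15)},
]

def check(c, today=date(2002, 6, 1)):
    """Flag the inconsistencies the HPS administrator described."""
    problems = []
    if c["total_obligated"] > c["total_value"]:
        problems.append("obligated amount exceeds contract value")
    if c["idiq_max"] is not None:
        if c["total_obligated"] > c["idiq_max"]:
            problems.append("obligated amount exceeds IDIQ maximum")
        if c["total_value"] > c["idiq_max"]:
            problems.append("total value exceeds IDIQ maximum")
    if c["completion"] and c["last_completion"] and c["completion"] > c["last_completion"]:
        problems.append("completion date after last completion date")
    if c["status"] == "active: awarded":
        if c["last_completion"] is None:
            problems.append("last completion date is blank")
        elif c["last_completion"] < today:
            problems.append("active contract with last completion date in the past")
    return problems

for c in contracts:
    print(c["id"], check(c))
```

Run against a full extract of active: awarded records, the share of records returning a nonempty problem list would correspond to the kinds of percentages reported above.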
Appendix IV: HUD’s Financial Management Information Systems that Contain Contracting Obligation and Expenditure Information Program Accounting System & Line of Control and Credit System (PAS & LOCCS) Single Family Acquired Asset Management System (SAMS) Cash Control Accounting Report System (CCARS) Comprehensive Servicing and Monitoring System/Property Management System (CSMS/PMS) Description HUDCAPS is the department’s General Ledger and Funds Control System, and it serves as a focal point for integrating other HUD financial systems. The Chief Financial Officer sponsors the system. All HUD offices use HUDCAPS to control and manage administrative and program budgets. For example, since fiscal year 2000, the obligations and expenditures for all of FHA’s administrative contracts (i.e., headquarters and field) are processed from the HUDCAPS system. PAS & LOCCS are two financial systems that are integrated. PAS is the project- level funds control system and is used to record, control, and report on the commitment, obligation, and expenditure of funds. LOCCS is the payment control system that is used by those requesting payments. Together, PAS& LOCCS is an accounting system that tracks the reservation, obligation, and expenditure of funds, and it is the department’s primary disbursement and cash management system for the majority of HUD programs. For example, obligations and expenditures associated with the Office of Public and Indian Housing’s (PIH) contracts are tracked in these systems. Further, prior to fiscal year 2000, LOCCS was the financial system that maintained information on FHA’s administrative contracts in the field. The Chief Financial Officer sponsors this system. SAMS tracks information on single-family properties that HUD acquires due to foreclosure; the system tracks the property from acquisition to sale and the system is used to manage the properties. 
For example, this system tracks the expenditures that are associated with contracts for services provided by closing agents; this system does not track obligations. CCARS tracks and disburses funds received by FHA, from the Department of the Treasury, to various internal FHA offices. These funds are received electronically and are downloaded into the CCARS database daily. Prior to FY 2000, CCARS maintained obligation and expenditure information on FHA’s administrative contracts in headquarters. CSMS/PMS tracks HUD-held Multifamily properties and certain HUD-held defaulted notes, which are in mortgagee-in-possession (MIP) status. CSMS/PMS performs several management and accounting functions for these properties (e.g., property management, tax servicing, tenant leases, accounts receivable, accounts payable and disbursements processing, financial accounting, and management reports). A HUD contractor maintains CSMS/PMS. Macola is Ginnie Mae’s financial system. The system is commercial off-the-shelf software and tracks obligations and expenditures of funds. Appendix V: Comments from the Department of Housing and Urban Development The following are GAO’s comments to HUD’s letter dated October 23, 2002. GAO Comments 1. HUD has defined its acquisition workforce to include contract officers, contracting specialists, purchasing agents, GTRs, and GTMs. Therefore, our report recommends that HUD assess the skills and capabilities of its existing acquisition workforce, not simply the contract officers and contracting specialists in the 1102 series. 2. We believe that the report accurately and properly distinguishes the different acquisition positions at HUD--including GTRs, GTMs, and various OCPO staff--and accurately reflects the federal training requirements associated with these positions. 
Specifically, the Clinger-Cohen Act mandates that executive agencies, through consultations with OFPP, establish specific education, training, and experience requirements for their acquisition workforces. Under implementing guidance issued by OFPP, an agency's acquisition workforce, including its contracting officers, contract specialists, purchasing agents, contracting officer representatives, and contracting officer technical representatives (which HUD calls GTRs), must meet an established set of contracting competencies. HUD has developed a GTR training program in response to federal requirements; however, we found that a significant portion of staff serving in this capacity have not received this training. In addition, HUD has identified GTMs as part of its acquisition workforce, its policies permit the duties and responsibilities of GTRs to be delegated to GTMs, and its draft Acquisition Career Management Plan, which establishes training requirements for HUD's acquisition workforce, states that the GTR training requirements also apply to GTMs. Therefore, we remain concerned that, according to HUD's records, a significant portion of the department's GTRs have not received training and that 93 percent of GTMs have not received any specialized acquisition training. 3. We agree that GTMs may not require the same level of training as GTRs; however, HUD policies and handbooks indicate that providing acquisition training to GTMs is necessary and is part of HUD's intent. Therefore, as discussed in comment 2, we remain concerned that, according to HUD's records, 93 percent of GTMs have not received any specialized acquisition training. We revised the report to reflect the increases in HUD's training budgets discussed in its response. 4.
As discussed in comment 2, we recognize that HUD is providing required training to some members of its acquisition workforce, and we are encouraged by HUD's plans to assess the training needed for GTRs and GTMs to more effectively monitor contractor performance. However, we remain concerned that not all components of HUD's acquisition workforce are receiving the required training and, for the reasons discussed in comment 2, we made no changes to our conclusions or recommendations. 5. We recognize that HUD has multiple information systems that are used to manage and monitor the department's contracting activities and that some might be better suited than others to track the deliverables of specific types of contracts. However, we found that HUD staff were not utilizing HPS to its fullest potential, including its ability to track deliverables. We clarified the wording in our report to make clear that we are not suggesting that HPS be used to track specific deliverables for all HUD contracts. 6. HUD's emphasis on reviewing subcontractor work would not necessarily identify the improper payments we found. The improprieties occurred, in part, because the vendors split the work into multiple invoices to fall below HUD's established review threshold and subcontracting requirements. HUD's plan to review the property management subcontracting file documentation and on-site invoices to assure that work orders were not deliberately split to avoid competition and/or HUD approval is a step toward catching such irregularities; however, we believe that part of developing a risk-based approach to monitoring contractor performance is to review those disbursements made by the property management contractors that fall under the dollar threshold, looking for individual anomalies and unusual disbursement patterns that may indicate potentially improper billing practices. 7.
The review of improper payments that HUD refers to was part of a separate congressional request to review disbursement processes that are particularly susceptible to improper payments and to determine whether improper payments occurred. Based on our review of the multifamily disbursements, it was evident that the existing internal controls would not prevent and detect improper payments in the normal course of business. Consequently, a random sample approach was unnecessary, and we used our data mining approach to search for and identify irregularities that indicated the existence of possible improper payments. While the annual financial statement audit is designed to broadly assess internal controls, the auditors would not necessarily focus their work on payments not expected to have a material impact on the financial statements. Our improper payments review was not limited by such constraints and therefore could be a much more detailed analysis of the areas that we determined to be particularly susceptible to improper payments. 8. We modified the report to reflect these clarifications.

Appendix VI: GAO Contacts and Staff Acknowledgments

GAO Contacts

Acknowledgments

In addition to those named above, Amy Bevan, Dan Blair, J Bryant, Anne Cangi, Bonnie Derby, Eric Diamant, Colin Fallon, Evan Gilman, Barbara Johnson, Irvin McMasters, Andy O'Connell, Mark Ramage, and Nico Sloss made key contributions to this report.
In the 1990s, the Department of Housing and Urban Development (HUD) dramatically downsized its staff; its mission, however, did not decrease. As a consequence, HUD relies more heavily on private contractors and needs to hold those contractors accountable for results. GAO was asked to determine whether HUD has processes and practices in place to effectively oversee contractors, strategically manages its acquisition workforce, and has management information systems that support its acquisition workforce. HUD's contracting has increased significantly in recent years. Although HUD has taken actions to improve its acquisition management--such as instituting full-time contract monitoring positions and improving its contracting information system--weaknesses remain that limit HUD's ability to identify and correct contractor performance problems, assure that it is receiving the services for which it pays, and hold contractors accountable for results. HUD, and in particular its multifamily housing program, does not employ processes and practices that could facilitate effective monitoring. For example, HUD's monitoring process does not consistently include the use of contract monitoring plans or risk-based strategies, or the tracking of contractor performance. HUD has not ensured that individuals responsible for managing and monitoring contracts have the appropriate workload, skills, and training that would enable them to effectively perform their jobs. For example, according to HUD's records, over half of the staff who are directly responsible for monitoring contractor performance have not received required acquisition training. HUD's management information systems do not adequately support its acquisition workforce in their efforts to manage and monitor contracts. Specifically, key information in HUD's contracting system is not reliable, and HUD's financial systems do not readily provide complete and consistent contracting obligation and expenditure data.
Reporting Objectives

To better understand the current plan-freeze environment and its significance to the DB system going forward, we address the following questions:

1. To what extent are DB pension plans frozen, and what are the characteristics of such freezes?
2. What are the implications of such freezes for plan sponsors, participants, and the PBGC?

Scope and Methodology

To determine the extent and characteristics of freezes among plans that are currently frozen, we collected and analyzed original survey data. We also analyzed and reviewed recent studies of frozen DB plans, notably PBGC's studies of hard frozen DB plans. Appendix I contains revised slides that update the preliminary briefing information that we provided to interested congressional staffs and members, as well as officials from the Department of Labor, PBGC, and the Department of the Treasury, from late April to June 2008. We conducted our work from April 2006 to July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform our work to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our research objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. To achieve our survey objectives, we surveyed a stratified probability sample of 471 DB pension plan sponsors from PBGC's 2004 Form 5500 Research Database. We limited our study population to the 7,804 sponsors that had 100 or more total participants in sponsored plans, and our survey results represent estimates for all sponsors with this characteristic. While they are a minority of sponsors (about 34 percent), sponsors with 100 or more total participants represent about 99 percent of all DB plan participants in the single-employer DB system.
Further, sponsors with 100 or more total participants also account for 99.1 percent of the total liabilities among single-employer plans. To deploy the survey, we mailed a questionnaire to DB pension plan sponsors in the three smallest strata we identified. In addition, as part of a longer questionnaire, we collected similar information about plan freezes via a web survey from the largest strata of plan sponsors. The survey results can be reviewed in GAO-08-818SP. See appendix II for a more detailed discussion of our survey methodology.

Frozen Plans Affect about One-Fifth of Active DB Plan Participants

Overall, an estimated 3.3 million active participants in our study population, or 21 percent of all active participants in the private, single-employer DB system, are affected by reported freezes. (See app. I, slides 9 and 10.) Active participants are employees who are or may become eligible to accrue or receive additional benefits under a plan; if all participants in the DB system (that is, active participants, retirees, and separated vested participants) are considered, the proportion represented by active participants who are affected by plan freezes falls to 10 percent. (See app. I, slide 9.) We considered only those participants who were accruing benefits (that is, active participants) at the time of freeze implementation to be affected by a freeze. The above calculations, therefore, do not include sponsors whose largest frozen plans are under a new-employee-only soft freeze, where the plan is closed to new entrants and benefit accruals for active participants remain unchanged. The extent to which active participants are affected by a freeze depends on the type of freeze in place. Under hard freezes, future benefit accruals cease for active participants. In contrast, soft freezes may reduce future benefit accruals for some or all active participants.
Soft freezes are distinct from hard freezes in that the restrictions on participants' future benefit accruals are less comprehensive than the total cessation of future accruals under hard freezes. Our survey shows that about half the sponsors in the study population have one or more frozen plans. (See app. I, slide 11.) Overall, about 51 percent of plans in the study population were reported as closed to new entrants, the basic requirement of a plan freeze. Nearly half of plans with a reported freeze, or 23 percent of all plans in the study population, were under a hard freeze. (See app. I, slide 12.) In addition, 12 percent reported some type of soft freeze. About 6 percent reported a partial plan freeze, while 4 percent reported an "other" freeze, which includes situations where plan participants are separated into plan tiers, as well as freezes brought on by bankruptcy, plant closure, or plan merger. The survey results suggest that two factors may influence the likelihood that sponsors will implement a hard freeze: sponsor size and the extent to which a sponsor's plans are subject to collective bargaining (CB) agreements. Larger sponsors, those with 10,000 or more total participants, are significantly less likely than smaller sponsors to have implemented a hard freeze: only 9.4 percent of larger sponsors' plans are under a hard freeze, compared with 25.4 percent of smaller sponsors' plans. (See app. I, slide 13.) Similarly, firms with some or all plans subject to CB are significantly less likely to implement hard freezes than sponsors with no plans subject to CB. (See app. I, slide 14.) However, these two factors may be related, as larger sponsors in our survey are generally more likely to have one or more plans subject to CB than smaller sponsors. About half of the freezes of sponsors' largest frozen plans have occurred since 2005. (See app. I, slide 16.)
This figure includes only plans that are currently frozen; it does not represent a longitudinal dataset of all plan freezes, and any plans frozen during the same time period but terminated prior to GAO's survey are not included. However, PBGC data show that there has been a recent decline in the number of plan terminations among plans with 100 or more participants. The number of standard terminations declined by two-thirds between 2001-2002 and 2005-2006, a period during which there was a significant increase in the number of current plan freezes. This suggests possible growth in the number of frozen plans currently in the single-employer DB system. Of sponsors in the study population with one or more frozen plans, 83 percent offered a replacement retirement savings vehicle to affected participants in their largest frozen plans. (See app. I, slide 18.) Eleven percent of sponsors did not offer any replacement plan to affected participants; however, this figure includes any sponsors who allowed affected participants to join or increase employee contributions to an existing but unchanged plan. An additional 6 percent of sponsors froze plans under extenuating circumstances that preclude the offering of a replacement plan (such as a firm merger, bankruptcy, plant closure, multiple-employer plan, or new-employee-only soft freeze). Of those sponsors offering replacement plans, over 80 percent offered enhanced existing or new DC plans. (See app. I, slide 19.) About 5 percent of sponsors offered a new DB plan to affected participants. Sponsors of frozen plans cited a number of reasons why they froze their largest plan. "Annual contributions needed to satisfy funding requirements and their impact on cash flows" was cited most often, with 72 percent of sponsors responding that this was a reason for freezing their largest frozen plans; "unpredictability/volatility of plan funding requirements" followed at 69 percent. (See app. I, slide 21.)
The remaining 13 reasons range in prevalence from 54 percent responding that "plan was frozen in anticipation of replacing it with an alternative retirement plan" to 12 percent for "other reason." About two-thirds of sponsors with frozen plans reported at least some level of confidence that their largest frozen plan was reasonably well-funded at the time of the freeze. (See app. I, slide 22.) Further, 58 percent of sponsors were highly or moderately confident that their largest frozen plans could have undergone standard, fully funded terminations instead of freezing; by comparison, 30 percent of sponsors were not at all confident that their largest frozen plans could have undergone standard terminations. However, there are some limitations to these data. First, the survey asked about sponsor beliefs, not actual funding levels. Second, the data refer to when the freezes were implemented and may bear no relation to current funding levels. Third, the data include only each sponsor's largest frozen plan. Nevertheless, the data provide some insight into sponsors' state of mind when they chose to freeze their largest frozen plans. For sponsors with plans that are already frozen, fewer than half reported having a firm idea of the anticipated outcome for their largest frozen plans. Among these sponsors, a very small number anticipate "thawing," or unfreezing, their plans, and about one-third said they will eventually terminate their largest frozen plans. (See app. I, slide 24.) In contrast, nearly half say they will keep the plan frozen indefinitely. Another 14 percent report that it is too early to make a decision or that they are uncertain what the outcome will be. The anticipated outcome for a sponsor's largest plan varies significantly by the type of freeze. Sponsors with frozen plans that were not under a hard freeze were significantly less likely to anticipate termination as the outcome for their largest frozen plan. (See app. I, slide 25.)
Among sponsors with one or more plans not currently frozen, only 10 percent have firmly decided to freeze, or not freeze, any plans in the future. (See app. I, slide 26.) Thirty-five percent of sponsors have considered freezing additional plans in the future but are uncertain if they will, while nearly 50 percent have not yet considered or discussed future freezes.

Plan Freezes Have Various Implications for Key Stakeholders

The prevalence of frozen DB plans today has different implications for key stakeholders in the single-employer DB system—plan sponsors, participants, and the PBGC. Our survey found that nearly a third of the sponsors ultimately expect to terminate their largest frozen plan. Further, we found that about half of all frozen plans were hard frozen and that sponsors of hard frozen plans appear more likely to anticipate termination as an eventual outcome. However, the number of plan terminations has not increased recently. For example, from 1990 to 2006, total annual standard terminations averaged about 7 percent of insured single-employer plans. From 2002 to 2006, however, this rate was far lower. (See app. I, slide 28.) Further, larger plans (those with 100 or more participants), which make up about 36 percent of plans but account for the overwhelming majority of the system’s active participants, accounted for only about 9 percent of the terminations during the 2002 to 2006 period. This suggests that the single-employer DB system’s decline does not appear to be accelerating, with many large plans continuing in operation. Plans may freeze for many reasons, and our survey population of frozen plan sponsors cited cost of contributions and volatility of plan funding as the key reasons for freezing their largest plans.
However, when we asked all sponsors, including those with no frozen plans, about the key challenges to the future health of the single-employer DB system generally, the very same issues of plan cost and volatility were listed most frequently. Given that these issues seem to be an inherent problem for all sponsors, it may be that each sponsor’s decision to freeze a plan has a firm-specific reason or is based on other factors not picked up in our survey. In any case, the current prevalence of plan freezes does not present an encouraging landscape for DB plan sponsorship. For active plan participants, plan freezes imply a possible reduction in anticipated retirement income. In particular, a hard freeze, which ceases future benefit accruals, is especially likely to reduce anticipated retirement income—unless this income is made up through increased savings, possibly from such sources as higher wages or other nonwage benefit increases. Although a majority of the sponsors with frozen plans cited plan cost considerations as a key motivation for the freeze—suggesting that they may be somewhat reluctant to fully redirect any potential cost savings from the freeze to other areas of compensation or benefits—our survey did not collect information to fully address this issue. For example, while our survey indicated that sponsors most often do offer a replacement plan for participants affected by a freeze and this offering is most often a DC or 401(k)-type plan, we did not ask about the generosity of these replacement plans or of the previous frozen DB plan. The offering of an alternative plan may have different consequences for employees in different stages of their career. Reductions to anticipated accruals for participants affected by a freeze will vary considerably depending on key plan features, participant demographic characteristics, and market interest rate factors.
However, for those participants with traditional pension plan formulas that are hard frozen and replaced with a typical DC, or 401(k)-type, plan, all else being equal, longer-tenured, midcareer workers are most likely to see the greatest reductions in anticipated retirement income. This effect occurs because older, longer-tenured employees generally have less time remaining in their careers to offset anticipated accrual losses through typical 401(k)-type plan contributions compared to younger workers. Alternatively, depending on the generosity of the frozen, pay-based pension plan, certain younger (or less well-tenured) and more mobile participants might actually see increases in their anticipated retirement incomes by moving to a typical, or average, 401(k)-type plan. These concerns are not just relevant for the current active participants of a frozen plan. Our survey also shows that roughly a majority of sponsors in our study population have closed their plans to new employees, many of whom will also likely depend on a DC plan as a source of retirement income. Our survey did not collect information on the degree to which affected employees are participating in either the newly offered DC plans or any existing, but enhanced, DC plan. DC plans are increasingly the dominant retirement savings vehicle for private sector workers. Like DB plans, DC plans pose their own potential retirement-income challenges, including the need for employees to participate in the plan and to effectively manage the investment risk of their DC accounts if they are to have a secure retirement. Yet for some workers, especially lower-income workers, this may be difficult to do, as they are less likely to participate when offered the opportunity to do so and less able to make even limited contributions.
The effect of plan freezes on PBGC’s net financial position is not certain, but it could be modestly positive in both the immediate and the long term; freezes generally reduce system liabilities and potentially minimize claims among financially weak plans. The possible improvement in PBGC’s net position, however, assumes that the aggregate effect of plan freezes does not significantly reduce the agency’s premium income over time. The reductions in flat-rate premium income could come from a decline in participants, possibly from the considerable number of plans that we found were closed to new employees or from terminations that may result from the freeze. Variable-rate premium income could also be reduced to the degree that sponsors of underfunded plans improve funding as a result of a plan freeze. PBGC’s financial status is influenced not only by the number of freezes and terminations but also by the relative health and size of the plans and sponsors that decide to terminate. For example, PBGC finds that hard frozen plans are more likely to be underfunded and to terminate, which may highlight two other considerations. Plan sponsors that initiate a standard termination must have sufficient assets in the plan to pay participants their accrued benefits and are unlikely to be the very same plan sponsors that are also underfunded. If relatively well-funded and financially healthy sponsors are the ones who terminate their frozen plans, it may leave the underfunded, and potentially financially distressed, frozen plan sponsors under PBGC’s insurance responsibility. Alternatively, data from PBGC show that relatively large plans terminate at a much lower rate than smaller plans. This is possibly encouraging for PBGC’s financial status, to the degree that these larger plans do not result in claims, because these plans represent the bulk of liabilities and participants.
Ultimately, no matter what the impact on its net financial position, the freezing of plans and the exiting from the single-employer DB system by sponsors do not indicate future plan growth for the PBGC. One part of its mission is to foster the continuation and maintenance of private-sector pension plans. PBGC’s single-employer insurance program currently covers 28,800 plans, which is 65 percent fewer plans than it covered 15 years ago. Given the prevalence of plans that are currently frozen and the relationship between plan freezes and plan termination, the shrinking of the single-employer insurance program plan base seems likely to continue.

Concluding Observations

The private DB pension system, a key source of retirement income for millions of Americans, continues to experience a slow decline. Plan freezes are a common phenomenon, affect a large number of participants, and have important implications for plan sponsors, participants, and the PBGC. While plan freezes are not as irrevocable as plan terminations, they are indicative of the system’s continued erosion. Yet freezes are just one of the many developments now affecting the DB system. The broad-ranging Pension Protection Act of 2006, changes in accounting rules, rising retiree health care costs and health care costs generally, a weak economy, and falling interest rates all represent challenges that DB plan sponsors may need to confront. How key stakeholders (plan sponsors, participants, the PBGC, other government agencies, and congressional policymakers) respond to all of these challenges will shape the fortunes of the DB system and its future role in providing retirement security to American workers.

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Labor, the Department of the Treasury, and PBGC. PBGC provided written comments, which appear in appendix III. PBGC generally agreed with the findings and conclusions of the report.
However, PBGC did express some concerns about our survey methodology—especially with respect to the comparability of our estimates of hard frozen plans and affected active participants with PBGC’s estimates, which are based on the Form 5500 filings for plan year 2006 received to date. PBGC notes that differences in results may be due to a variety of reasons, including that our survey data are more recent than the 2006 Form 5500 and the potential for some reporting errors on the Form 5500. Other explanations include the potential for response bias in the GAO survey, our use of a size variable that is sponsor-based rather than plan-based, and the GAO survey’s omission of pension plans newly formed since 2004. We addressed many of PBGC’s specific methodological concerns by adding information to our technical appendix. We note that the very different methodologies used by PBGC and GAO in estimating the number of hard plan freezes and the number of active participants affected by such freezes suggest that the studies’ results should be compared with extreme caution. We do note that our survey questionnaire was pretested extensively. Further, regarding the issue of response bias, we considered this as we analyzed our survey’s results. We do not believe response bias is a significant issue because we compared key characteristics of the survey respondents with those of all sponsors in the study population and found no significant differences. We also did not include newly formed DB plans other than those formed by survey respondents because such data were not available and, in our view, would not have a significant effect on our estimates. PBGC identified a number of explicit areas of agreement with our report. PBGC noted that our finding with respect to the differences in prevalence of freezes between large and small plans is generally consistent with its estimates.
PBGC also said that our report was consistent with its views on the effects that freezes may have on the future health of the DB system and on PBGC itself, as well as on retirement incomes. Further, despite PBGC’s concerns about the magnitude of certain estimates in our report, it generally found the relative estimates of alternative definitions of plan freezes to be new and important information. Lastly, PBGC noted that the comparison of our survey estimates to Form 5500 estimates highlights the delay that PBGC faces in getting basic plan data. PBGC expects that plan data will become more timely in the near future, but some delay will still remain that may hinder PBGC’s awareness of changing trends among plans that it insures. The Departments of Labor and the Treasury provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Labor, the Secretary of the Treasury, the Director of the PBGC, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions are listed in appendix V.
Appendix I: Frozen DB Plan Briefing Slides

[Briefing slide content. The slides summarize the GAO freeze survey sampling (sponsors and liabilities, in billions) and its findings: plan freezes are a common occurrence, and for sponsors’ largest plans, just over half were frozen during or after 2005; most sponsors who froze a plan made an alternative replacement plan available; sponsors froze plans for a variety of reasons, the top reported reasons being the cost of annual contributions needed to satisfy funding requirements and their impact on cash flow (72 percent) and the unpredictability/volatility of plan funding requirements (69 percent); and sponsors of DB plans are uncertain about their future course of action. Despite the widespread prevalence of plan freezes, a rise in terminations has yet to materialize (notably among the largest plans). Freezes are likely to have a slightly positive effect on PBGC’s net financial position.]

Appendix II: Scope and Methodology

To achieve our objectives, we conducted a survey of sponsors of defined-benefit (DB) pension plans. For the purposes of our study, we defined “sponsors” as the listed sponsor on the 2004 Form 5500 for the largest sponsored plan (by total participants). To identify all plans for a given sponsor we matched plans through unique sponsor identifiers. See appendix I for further detail on how we defined a sponsor in the data.

Population and Sample Design

We constructed our population of DB plan sponsors from the 2004 Pension Benefit Guaranty Corporation’s (PBGC) Form 5500 Research Database by identifying unique sponsors listed in this database and aggregating plan-level data (for example, plan participants) for any plans associated with this sponsor. As a result of this process, we identified 22,960 plan sponsors. A summary of the number of sponsors and participants is shown in table 1.
As shown in table 1, sponsors having 100 or more participants accounted for about 99 percent of DB plan participants and about 99 percent of total liabilities in sponsored plans in 2004. We limited our study to this population of 7,804 larger sponsors (our study population) because it would be informative about the vast majority of covered participants and we expected a higher success rate in locating, contacting, and obtaining responses from this group than would have been obtained from the smallest sponsors. We drew a stratified probability sample of 471 DB plan sponsors, where the strata were based on the number of participants covered by the sponsor’s plans. See table 2 for a summary of the study population, the selected sample, respondents, and out-of-scope sponsors by stratum. The sample was designed to provide acceptably precise estimates of the proportions of sponsors with at least one frozen plan. Further, sponsors in the larger sponsor strata were sampled at a higher rate than sponsors in the smaller strata to improve the precision of plan-level and participant-level estimates. As shown in table 2, response rates ranged from 46 percent to 82 percent, with an overall weighted response rate of 78 percent.

Administration of Survey

We developed two questionnaires to obtain information about the experiences of DB pension plan sponsors that have 100 or more participants. One questionnaire—with 18 questions—was mailed in November 2007 to a stratified random sample of 366 pension plan sponsors and asked questions about their experiences with DB plans, benefit freezes, if any, and factors that may have contributed to the decision to freeze. The strata were based on the size of the plan sponsor (as measured by number of participants) and consisted of three categories. In the initial mailing, we sent a cover letter and questionnaire to pension plan sponsors.
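A weighted response rate reflects the stratum sampling weights rather than a simple count of returned questionnaires. The sketch below illustrates the calculation; the per-stratum population sizes, sample sizes, and respondent counts are invented for illustration and are not the survey's actual per-stratum figures.

```python
# Hypothetical illustration of a weighted response rate under stratified
# sampling. All per-stratum counts below are made up; they are not the
# survey's actual figures.
strata = {
    # stratum: (population size N_h, sample size n_h, respondents r_h)
    "largest sponsors": (105, 105, 48),
    "medium sponsors": (1200, 150, 100),
    "smaller sponsors": (6499, 216, 170),
}

weighted_respondents = 0.0
population_total = 0
for name, (N_h, n_h, r_h) in strata.items():
    base_weight = N_h / n_h  # each sampled sponsor represents N_h/n_h sponsors
    weighted_respondents += r_h * base_weight
    population_total += N_h

weighted_response_rate = weighted_respondents / population_total
print(f"Weighted response rate: {weighted_response_rate:.1%}")
```

Because the largest stratum is sampled at a much higher rate, its responses carry smaller weights, so the weighted rate can differ noticeably from the simple unweighted rate (here 318 of 471, about 68 percent).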
To encourage responses, we followed up with another mailing of a copy of the questionnaire in December 2007. In addition, to try to increase the response rate, we called all sponsors who had not responded to the mail survey. A second, longer questionnaire was sent in December 2007, via the Internet, to the 105 largest pension plan sponsors, which were part of the Fortune 500 or Global Fortune 500 and had 50,000 or more participants. This was preceded by an email to notify respondents of the survey and to test our email addresses for these respondents. This web questionnaire asked plan sponsors about their recent experiences with DB plans and benefit freezes. The first 17 questions and last question of this questionnaire mirrored the questions asked in the mail questionnaire about benefit freezes. We asked these plan sponsors additional questions about their reactions to the current environment for such plans and how the plan or plans may be a part of the firm’s total compensation structure. To help increase our response rate, we sent four follow-up emails from January through April 2008. In addition, we contacted some respondents by telephone to clarify unclear responses. We received responses from 48 respondents. For the 18 questions that asked about frozen pension plans in both the mail and web surveys, we obtained an overall unweighted response rate of 70 percent and a weighted response rate of 78 percent. To pretest the questionnaires, we conducted cognitive interviews and held debriefing sessions with 11 pension plan sponsors; three pretests were conducted in person and focused on the web survey, and eight were conducted by telephone and focused on the mail survey. We selected respondents to represent a variety of sponsor sizes and industry types, including a law firm, an electronics company, a defense contractor, a bank, and a university medical center, among others.
We conducted these pretests to determine whether the questions were understandable, were unduly burdensome, and measured what we intended. On the basis of the feedback from the pretests, we modified the questions as appropriate.

Content Coding of Responses

In addition to the closed-ended questions, we gave respondents an opportunity to answer an open-ended question about the key challenges facing the future health of the single-employer DB system. The responses to this question were classified and coded for content by a GAO analyst, while a second analyst verified that the original analyst had coded the responses appropriately. Two hundred seventeen respondents provided substantive comments to this item. Some comments were coded into more than one category since some respondents raised more than one topic; as a result, the number of coded items does not equal the total number of respondents who commented. These comments cannot be generalized to our population of plan sponsors. See table 3 for a tally of the comments.

Table 3. Summary of Content Analysis, by General Category of Comment
- Affordability (i.e., cost of funding, administrative cost, cash flow)
- Non-cost administrative issues (i.e., complexity, reporting requirements, accounting rules)
- Workforce issues (i.e., demographics, recruitment/retention)
- Disadvantageous compared to DC plans
- Competition (i.e., industry, international)
- Strong belief in DB system
- PBGC insurance (i.e., premiums, incentives for unhealthy plans)

Sampling Error and Estimation

To produce the estimates from this survey, answers from each responding case were weighted in the analysis to account statistically for all the members of the population, including those that were not selected or did not respond to the survey. Estimates produced from this sample are from the population of sponsors represented in PBGC’s 2004 Research Database that had at least 100 participants.
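As an illustration of this kind of weighting, the sketch below computes a stratified estimate of a proportion (for example, the share of sponsors with at least one frozen plan) together with a 95 percent confidence interval. The stratum counts are invented, and the variance formula is the textbook stratified estimator with a finite population correction and a normal approximation; GAO's actual estimation procedure may differ in detail.

```python
import math

# Hypothetical stratum data: (population size N_h, respondents n_h,
# respondents reporting at least one frozen plan y_h). Invented numbers,
# not the survey's actual data.
strata = {
    "largest sponsors": (105, 48, 30),
    "medium sponsors": (1200, 100, 52),
    "smaller sponsors": (6499, 170, 80),
}

N = sum(N_h for N_h, _, _ in strata.values())

# Stratified point estimate: population-weighted average of stratum proportions.
p_hat = sum(N_h * (y_h / n_h) for N_h, n_h, y_h in strata.values()) / N

# Variance of the stratified estimator, with a finite population correction
# (1 - n_h/N_h) applied within each stratum.
variance = sum(
    (N_h / N) ** 2
    * (1 - n_h / N_h)
    * (y_h / n_h) * (1 - y_h / n_h) / (n_h - 1)
    for N_h, n_h, y_h in strata.values()
)
half_width = 1.96 * math.sqrt(variance)  # normal-approximation 95 percent CI

print(f"Estimate: {p_hat:.1%} plus or minus {half_width:.1%}")
```

With these made-up counts the estimate is roughly 48 percent plus or minus about 6 percentage points; because the smallest stratum dominates the population, its sampling variability drives most of the width of the interval.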
Because our results are based on a sample and different samples could provide different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 11 percentage points). We are 95 percent confident that each of the confidence intervals in this report includes the true value in the study population. Unless we note otherwise, percentage estimates based on all sponsors (for example, the percentage of sponsors with at least one frozen plan) have 95 percent confidence intervals of within plus or minus 8 percentage points. All other percentage estimates in this report have 95 percent confidence intervals of within plus or minus 11 percentage points, unless otherwise noted. Confidence intervals for other estimates are presented along with the estimate where used in the report.

Nonsampling Error

In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors and to increase the response rate, including developing and pretesting the questionnaires with pension plan sponsors, conducting multiple follow-ups to encourage responses to the survey, contacting respondents to clarify unclear responses, and double keying and verifying all data during data entry. Although the overall response rate was 78 percent, we performed an additional analysis to check whether our survey respondents had characteristics that were significantly different from all sponsors in the study population.
To do this, we identified several sponsor characteristics that were available for the entire study population and estimated these population values using the survey respondents. For each estimate tested, we found no significant difference between the estimate and the actual population value. We performed computer analyses of the sample data to identify inconsistencies and other indications of error and took steps to correct inconsistencies or errors. A second, independent analyst checked all computer analyses.

Comparability of Survey Results with 2006 PBGC Results

In July 2008 discussions with PBGC staff and in its comments on this report, PBGC indicated that it has calculated estimates of the number of hard frozen plans based on the most recently available Form 5500 data. Based on Form 5500 filings received to date, PBGC currently estimates that 15.9 percent of plans were hard frozen in 2006. Our survey estimates are not directly comparable with PBGC’s estimates for a number of reasons: The GAO Survey Is Based on a Statistical Sample - GAO survey estimates, including those involving hard freezes, are based on a probability sample and are subject to sampling error. The PBGC calculations are based on Form 5500 data filings, which must be completed by plan sponsors of PBGC-insured defined benefit plans. The GAO Survey Focuses on Sponsors with Larger Plans - Our survey specifically excluded “smaller” sponsors—those with fewer than 100 total participants. Although leaving out such smaller sponsors excluded a majority of all plans on the 2004 Form 5500 file, it only excluded about 1 percent of participants, and allowed us to survey a smaller sample. However, if the rate of hard freezes was different for plans having fewer than 100 participants than for larger plans, then we would expect that our survey estimate would differ from an estimate developed from all plans.
The GAO Survey Focuses on Hard and Soft Freezes and Includes Post-2006 Freezes - Our survey questionnaire used a definition of a hard freeze that was intended to be substantively similar to the definition contained in the Form 5500 instructions. However, our questionnaire also included a broad range of plan freeze definitions as well as additional questions pertaining to a sponsor’s largest frozen plan. The mode of data collection, topical focus, format, item wording, or item interpretation of our questionnaire may influence respondents in different ways relative to the applicable hard freeze character code on the Form 5500. One critical difference that could lead to different estimates is that our survey captures freezes that occurred since 2006. The 2006 Form 5500 only includes information as of the end of the 2006 plan year. Possible Differences in Actual Survey Respondents - While we generally directed our survey to the individual we identified as being most knowledgeable about the DB plans of a given sponsor, it may be the case that the individuals responding to our survey are not the very same individuals who complete the Form 5500, possibly leading to different responses. Despite these differences in approach and methodology, some may wish to compare PBGC’s estimate that 15.9 percent of plans were hard frozen in 2006 with our study population estimate that 23.3 percent of plans were hard frozen among sponsors with 100 or more participants in all plans. Any comparisons should be made with extreme caution for all of the reasons noted above. Further, the 95 percent confidence interval for our estimate ranges from 18.3 to 28.3 percent. PBGC also calculated that, based on Form 5500 filings for plan year 2006 received as of July 2008, 0.75 million active participants out of 2.39 million total participants were in frozen plans.
As with the estimated percentage of hard frozen plans, our numbers are not completely comparable, due to differences in our methodologies. Although our survey identified the active participants affected by the sponsor’s largest frozen plan, we did not specifically ask about total participants in the largest frozen plan. Our questionnaire also asked sponsors to report the calendar year of freeze implementation, while the Form 5500 data are reported on a plan year basis, which can differ from the calendar year. Another important difference is that PBGC data are not current, and new hard frozen plans and active participants affected by such freezes may yet be identified. Some of these newly identified plans may be plans of sponsors that reported freezes in our survey in 2007 or later. When we removed hard freezes that occurred for a sponsor’s largest plan since 2006 and recalculated the number of active participants affected by hard freezes, we estimate that 1.27 million active participants are affected by a hard freeze in the sponsor’s largest frozen plan. As with all of our survey estimates, this result is subject to sampling error. The 95 percent confidence interval of active participants affected by the sponsor’s largest hard frozen plan (removing post-2006 freezes) ranges from 0.75 million to 1.78 million.

Appendix III: Comments from the Pension Benefit Guaranty Corporation

Appendix IV: GAO Contact and Staff Acknowledgments

Barbara D. Bovbjerg at (202) 512-7215 or [email protected].

Staff Acknowledgments

In addition to the contact above, Charles A. Jeszeck, Charles Ford, Isabella Johnson, Luann Moy, Mark Ramage, Joe Applebaum, Craig Winslow, Gene Kuehneman, Brian Friedman, Melissa Swearingen, Marietta Mayfield, Sue Bernstein, and Walter Vance made important contributions to this report.
Private defined benefit (DB) pension plans are an important source of retirement income for millions of Americans. However, from 1990 to 2006, plan sponsors voluntarily terminated over 61,000 sufficiently funded single-employer DB plans. An event preceding at least some of these terminations was a so-called plan "freeze"--an amendment to the plan to limit some or all future pension accruals for some or all plan participants. Available information that the government collects about frozen plans is limited in scope and may not be recent. GAO conducted a stratified probability sample survey of 471 single-employer DB plan sponsors out of a population of 7,804 (with 100 or more total plan participants) to gather more timely and detailed information about frozen plans. We have prepared this report under the Comptroller General's authority as part of our ongoing reassessment of risks associated with the Pension Benefit Guaranty Corporation's (PBGC) single-employer pension insurance program, which, in 2003, we placed on our high-risk list of programs that need broad-based transformations and warrant the attention of Congress and the executive branch. Frozen DB plans have possible implications for PBGC's long-term financial position. This report examines (1) the extent to which DB pension plans are frozen and the characteristics of frozen plans; and (2) the implications of these freezes for plan participants, plan sponsors, and the PBGC. Frozen plans are fairly common today, with about half of all sponsors in our study population having one or more frozen DB plans. Overall, about 3.3 million active participants in our study population, who represent about 21 percent of all active participants in the single-employer DB system, are affected by a freeze.
The most common type of freeze is a hard freeze--a freeze in which all future benefit accruals cease--which accounts for 23 percent of plans in our study population; however, an additional 22 percent of plans are frozen in some other way. Larger sponsors (i.e., those with 10,000 or more total participants) are significantly less likely than smaller sponsors to have implemented a hard freeze, with only 9 percent of plans under a hard freeze among larger sponsors compared with 25 percent among smaller sponsors. The vast majority of sponsors with frozen plans in our study population, 83 percent, have alternative retirement savings arrangements for the affected participants, but 11 percent of sponsors do not. (An additional 6 percent of sponsors froze plans under circumstances that preclude a replacement plan.) Plan sponsors cited many reasons for freezing their largest plans but most often noted two: the impact of annual contributions on their firm's cash flows and the unpredictability of plan funding. Sponsors of frozen plans generally expressed a degree of uncertainty about the anticipated outcome for their largest plan, but sponsors whose largest plan was hard frozen were significantly more likely to anticipate plan termination as the likely outcome. The implications of a freeze vary for sponsors, participants, and PBGC. For plan sponsors, while hard freezes appear to indicate an increased likelihood of plan termination, a rise in plan terminations has yet to materialize. For participants, a freeze generally implies a reduction in anticipated future retirement benefits, though this may be somewhat or entirely offset by increases in other benefits or a replacement retirement-savings plan. However, because the replacement plans offered to affected participants are most frequently defined contribution plans, the investment risk and responsibility for saving are shifted to employees.
Finally, plan freezes may improve PBGC's net financial position, but the degree to which they are accompanied by sponsor efforts to improve plan funding is unclear. In any event, the shrinking of the single-employer pension insurance program plan base seems likely to continue.
Background

In May 1997, DOD completed a comprehensive review of national security threats, risks, and opportunities facing the United States to 2015. This review, known as the Quadrennial Defense Review (QDR), was intended to examine America’s defense needs and provide a blueprint for a strategy- based, balanced, and affordable defense program. The QDR noted that DOD had reduced active duty personnel by 32 percent between 1989 and 1997 while reducing personnel performing infrastructure functions by only 28 percent. The report called for additional reductions in both military and civilian personnel. Our July 1998 report on the 1999-2003 FYDP noted that the services planned to reduce military and civilian personnel by 175,000 and save $3.7 billion by 2003. Our recent reviews of planned Defense personnel reductions resulting from the QDR and the 1999-2003 FYDP raised questions about DOD’s ability to achieve some of these reductions and savings. The changes in military strategy and capabilities enunciated in the QDR and other reports have often been referred to as a revolution in military affairs. However, the QDR also recognized that DOD must undergo a similar revolution in its business affairs. To that end, the Secretary of Defense chartered a study effort that resulted in the November 1997 DRI report. The report emphasized the need to reduce excess Cold War infrastructure to free up resources for modernization. The report identified numerous initiatives to reengineer business practices, consolidate organizations, eliminate unneeded infrastructure through additional base closures, and conduct public/private competitive sourcing studies for commercial activities. Most of the potential savings identified in the report were expected to result from BRACs and competitive sourcing studies.
Future BRAC actions were contingent on the Congress enacting legislation authorizing additional closures, while competitive sourcing studies were to be completed under the policy guidance of OMB. The concept of competitive sourcing is not new. Through the 1980s, DOD encouraged the services and Defense agencies to conduct competitions between the public and private sectors to determine who would be responsible for performing selected functions that were being provided by in-house staff. These competitions were to be done under OMB Circular A-76. Although DOD’s use of Circular A-76 was limited from the early to mid-1990s, in 1995 DOD reestablished the competition program in the hope of obtaining significant savings that could be used to fund modernization and other priority needs. Circular A-76 and its supplemental handbook specify a process to develop a statement that defines the work to be done and a comparison of in-house costs with contractor costs to determine who should perform the work. Circular A-76 is limited to competitions for the conversion of recurring commercial activities. The handbook identifies circumstances under which detailed cost studies may not be required, such as for the conversion from performance by military personnel to contractor performance or if the number of affected civilian positions is below a specific threshold. It also indicates instances in which Circular A-76 may not apply, such as for restructured or reengineered functions. Appendix I contains a more detailed description of the A-76 process. In addition, several laws affect competitive sourcing. Some, such as 10 U.S.C. 2461 and 2462, affect the process of transferring work currently being performed by civilian government employees to the private sector. Section 2461, as amended by the National Defense Authorization Act for Fiscal Year 1999 (P.L. 
105-261), requires an analysis of the activity and a comparison of the costs of having the activity performed by DOD civilian employees and by a contractor to determine whether changing to contractor performance will save money. It also requires that DOD notify the Congress of this analysis and provide other information before making a change in performance. Section 2462 requires the Secretary of Defense to obtain needed supplies or services from the private sector, if a private-sector source can provide the supply or service at less cost, and establishes criteria for conducting the cost comparison. FYDP Shows Partial Costs and Savings Estimates From BRAC and Competitive Sourcing DOD expects savings from individual DRIs but has not incorporated specific savings from these initiatives in the FYDP, except in the areas of potential BRAC and competitive sourcing. Both have significant up-front investment costs that can limit net savings in the short term. The 1999-2003 FYDP shows a more complete accounting of these investment costs for potential BRACs than it does for competitive sourcing, but the latter provides the majority of DRI savings incorporated in the FYDP. While personnel reductions are programmed in the 1999-2003 FYDP and are expected to represent a portion of savings from DRIs, DOD has not required the services to link specific personnel reductions to individual initiatives. Some services, however, have projected personnel reductions in conjunction with competitive sourcing studies. FYDP Shows Net Costs for Early Years of Implementing Any New BRAC Rounds We previously reported that BRAC actions can provide the basis for significant savings in infrastructure costs. However, while savings can begin to accrue even as costs are being incurred to implement BRAC decisions, it can take several years for net savings to begin accruing on an annual recurring basis. 
The 1999-2003 FYDP reflects this situation, showing a net cost for projected BRAC decisions between fiscal year 1999 and 2003. The 1999-2003 FYDP incorporated some savings from future BRAC rounds, but these savings were offset by implementation costs, resulting in net costs of $832 million for fiscal year 2002 and $1.45 billion for fiscal year 2003. DOD showed these net costs in the FYDP as a Department-level contingency account but did not allocate them to individual services. Beyond the FYDP period, DOD expects the two additional rounds of base closures to result in about $3.4 billion in annual savings after the closures are completed and implementation costs have been offset. We reported in November that DOD’s method of estimating costs and savings for future BRAC rounds was limited, principally because it assumed that savings from future base closures would closely resemble savings from the 1993 and 1995 BRAC rounds, adjusted for inflation. While DOD’s estimate may be appropriate for planning purposes, its precision is limited because the costs of future BRAC rounds might not parallel those of the prior two rounds. Previous base closures frequently involved facilities that were of low military value and were the least costly to implement. Often those closures required the shortest time for savings to offset implementation costs. Generally, DOD did not choose to close facilities that required higher implementation costs or longer periods to recover savings. More precise cost estimates will probably not be available until DOD actually studies implementation scenarios for specific BRAC actions and puts in place more reliable cost accounting systems. However, BRAC history suggests that future implementation costs could be greater than those in previous rounds and the closures could thus take longer to produce net recurring savings. 
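The timing dynamic described above, net costs during implementation followed by recurring annual savings, can be sketched in a few lines. The net-cost figures ($832 million in fiscal year 2002, $1.45 billion in fiscal year 2003) and the $3.4 billion annual-savings estimate come from the report; the assumption that the full recurring savings accrue in every year after fiscal year 2003 is ours, for illustration only:

```python
def cumulative_net_savings(net_costs, annual_recurring, years_after):
    """Cumulative net position: net costs during the implementation
    period, then flat recurring savings in each subsequent year."""
    total = -sum(net_costs)          # net costs incurred up front
    timeline = []
    for _ in range(years_after):
        total += annual_recurring
        timeline.append(total)
    return timeline

# Net BRAC costs shown in the FYDP (billions of dollars): FY2002, FY2003.
implementation_net_costs = [0.832, 1.45]
recurring_savings = 3.4              # DOD's annual-savings estimate, billions

# Hypothetical phasing: flat savings in each of the 3 years after FY2003.
timeline = cumulative_net_savings(implementation_net_costs,
                                  recurring_savings, 3)
# Under these assumptions the cumulative position turns positive in the
# first full year of recurring savings: -2.282 + 3.4 = 1.118 billion.
```

Even this simple sketch shows the report's point: higher implementation costs or slower savings than in prior rounds would push the break-even point further out.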
Projected Competitive Sourcing Savings Are Significant but Do Not Fully Account for Investment Costs DOD’s 1999-2003 FYDP projected $6.2 billion in savings from competitive sourcing between fiscal year 1997 and 2003. However, as we previously reported, the projected savings do not fully account for the up-front investment costs associated with completing the studies and implementing the results. Though recurring savings from competitive sourcing could be substantial in the long term, it will take longer to begin achieving these savings than DOD has projected, and net savings during the 1999-2003 period will be less than projected. In formulating their fiscal year 1999 budget, Defense components identified over 200,000 positions that would be subjected to competitive sourcing studies between 1997 and 2003. Table 1 shows the projected savings and the number of positions to be studied by fiscal year as summarized in documents supporting the President’s fiscal year 1999 budget submission. Our February 1999 report on competitive sourcing goals noted that, like BRACs, competitive sourcing studies and implementing actions require up-front investments that should be considered when estimating net savings. We also reported that the estimates of competition savings provided to the Congress in 1998 had limitations and that several factors were likely to reduce savings in the short term. We further noted that DOD had not fully identified the resources associated with the studies or the personnel separation costs likely to be needed for implementation. The Navy was the only component that had deducted some estimated investment costs when calculating the savings presented to the Congress in DOD’s April 1998 report on competitive sourcing. Linkage of Specific Personnel Reductions With Individual Defense Reform Initiatives Is Limited Personnel reductions are programmed in the 1999-2003 FYDP and are expected to represent a portion of savings from DRIs. 
DOD officials told us that they had not required the services to target specific personnel reductions to individual initiatives. The Army programmed a reduction of 9,600 civilian staff on the assumption that 20 percent of the positions studied would be eliminated. The Army programmed the 20-percent reduction based on the assumption that the 20-percent savings would result whether the government or a contractor won the A-76 competitions. Army officials said they made this assumption because they did not want to be seen as preselecting the winner. The Army recognizes, however, that it will likely separate more personnel based on the historical trends of contractors winning about 50 percent of the competitions. The Army did not program any reductions in military positions as a result of competitive sourcing and planned to make up for military personnel shortages elsewhere by transferring the military personnel from positions competed to other military duties. The Navy and the Marine Corps did not program any potential military or civilian personnel cuts as a result of competitive sourcing. According to Navy officials, the Navy’s overall objective is to achieve savings through competition, and personnel savings are a consequence and not a goal of the program. Further, Navy officials said they believed that establishing a goal for personnel reductions would send a negative message to staff and would affect morale. The Air Force was more aggressive in identifying personnel reductions from competitive sourcing and programmed about 26,000 military and 19,300 civilian position reductions between fiscal year 1997 and 2003, according to its final budget submission. The Air Force’s programmed reduction of all military positions to be competed was based on the belief that if a position could be competed, it did not have to be staffed by military personnel. 
Generally, the Air Force programmed reductions in civilian positions on the assumption that private contractors would win 60 percent of the competitions and that the competitions would last 2 years. Officials at the Office of the Secretary of Defense (OSD) were aware that the services used different methods to show the effects of competitive sourcing on personnel and funding and that the fiscal year 1999 FYDP reflects these different approaches. The Deputy Under Secretary for Industrial Affairs and Installations established a task force to ensure that consistent and comparable approaches are used to estimate personnel and dollar savings in future budget submissions. Subsequently, the Acting Director of the Office of the Secretary of Defense’s Program Analysis and Evaluation Office issued guidance incorporating the task force’s recommendations, which required Defense components to program both dollar savings and personnel reductions for the 2000-2005 FYDP. Further, the DOD Comptroller required Defense components to specifically identify investment and transition costs and report both gross and net savings in their fiscal year 2000 budget submissions. Competitive Sourcing Savings Were Projected Using Historic Experience But Were Not Linked to Specific Study Plans Savings from ongoing and future competitive sourcing studies as projected in the 1999 FYDP were the result of broad-based estimates drawn from previous experience. When the FYDP was being prepared, the projections were not linked to specific functions then under study or planned for future study. Consequently, it is not feasible to link projected savings with the current cost of individual functions. In previous reports, we urged caution in the use of historical savings assumptions in the absence of any efforts to adjust these assumptions for changes that can occur over time and that may reduce savings. 
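The services' flat-rate planning assumptions described above can be sketched in a few lines. The Army's 20-percent elimination rate, its 9,600-position result, and the Air Force's assumed 60-percent contractor win rate come from the report; the 48,000-position study base is implied by the Army figures (9,600 is 20 percent of 48,000), and the Air Force position count below is hypothetical:

```python
def programmed_reductions(positions_studied, reduction_rate):
    """Flat-rate planning assumption: positions eliminated equals
    positions studied times an assumed elimination (or win) rate."""
    return round(positions_studied * reduction_rate)

# Army-style assumption: 20 percent of studied positions are eliminated
# regardless of who wins the competition. The 48,000 base is implied by
# the report's figures (9,600 = 20 percent of 48,000).
army_cut = programmed_reductions(48_000, 0.20)       # 9,600 positions

# Air Force-style assumption keyed to an expected 60-percent contractor
# win rate; the 10,000-position count here is purely hypothetical.
air_force_cut = programmed_reductions(10_000, 0.60)  # 6,000 positions
```

The sketch makes the comparability problem concrete: the same number of positions studied yields very different programmed reductions depending on each service's rate assumption.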
The 1999-2003 FYDP was not based on detailed competitive sourcing plans developed by the services and other Defense components. These plans continue to evolve, and specific functions to be studied by location are mostly yet to be determined. Historic Data Used to Project Future Savings The estimated competitive sourcing savings included in the 1999-2003 FYDP were largely based on numbers of positions expected to be studied, average personnel costs per position, and average savings rates estimated by using historical data from prior competitions. Savings rates varied among the services. Our previous work has already shown that there are important limitations to using historical savings estimates because they may not provide an accurate indication of likely future savings. The services’ cost savings estimates ranged between 20 and 30 percent; the Navy projected savings of more than 30 percent where functions performed by military personnel would be competed. These estimates, as shown in table 2, represented what the services believed to be conservative achievable savings based on historical experience. While we believe that competitive sourcing competitions are likely to produce savings, we have previously urged caution when estimating the amount of savings likely to be achieved. The estimates used in the FYDP are based on savings estimates calculated at the end of competitive sourcing competitions. These estimates can change over time because of changes in the scope of the work or mandated wage increases. We previously noted that continuing budget and personnel reductions could make it difficult to sustain the levels of previously projected savings. We also recognized that larger savings are likely to occur when positions filled by military personnel are converted to civilian or contractor performance. 
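The broad-based estimating approach just described (positions to be studied, times average personnel cost per position, times an historical savings rate) reduces to a one-line calculation. The 20- to 30-percent rate range comes from the report; the position count and average cost below are hypothetical inputs, not DOD data:

```python
def projected_annual_savings(positions, avg_cost_per_position, savings_rate):
    """Broad-based savings estimate of the kind used in the 1999 FYDP:
    not tied to any specific function or location under study."""
    return positions * avg_cost_per_position * savings_rate

# Hypothetical inputs: 5,000 positions at $50,000 average personnel cost,
# using the low end (20 percent) of the historical savings-rate range.
estimate = projected_annual_savings(5_000, 50_000, 0.20)
# 5,000 x $50,000 x 0.20 = $50 million in projected annual savings
```

Because every term is an average or an assumption, the output inherits the limitations the report describes: it cannot be traced back to the current cost of any individual function.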
Finally, we previously noted limitations in DOD’s efforts and capabilities to track changes in program costs and savings after the results of competitions are implemented. Actual savings data has not been captured. Our February 1999 report noted the need for improvements in the databases used to record the results of competitive sourcing competitions. Specific Study Plans Continue to Evolve Study plans of most Defense components linking the number of positions to be studied with specific functions and locations are still evolving, and estimated savings will not be known until the studies are completed. Consequently, it is not feasible to identify the current costs of functions to be studied and their potential savings rates. In our February 1999 report, we concluded that clearer indications of actual savings will require that Defense components develop mechanisms to track actual savings over time in order to validate continuing savings from completed competitions. None of the services based fiscal year 1999 budgets or 1999-2003 FYDP submissions on a completed multiyear study plan for their competitive sourcing program, although the Air Force was furthest along. Our February 1999 report on competitive sourcing goals noted that most Defense components lacked detailed plans identifying the numbers of positions by function expected to be studied over the next few years. Detailed planning to implement the program has been largely delegated to components and field activities. These activities are responsible for determining which specific functions are suitable candidates for competitions and whether there are sufficient positions to meet overall competition goals. In addition, according to service officials, some or all of the major commands were given numbers of positions to compete and savings goals, and it is up to them to determine how best to meet the goals. 
OSD on December 9, 1998, directed each component to develop multiyear competition plans consistent with and presented at the same time as their fiscal year 2001-2005 Program Objective Memorandum. OSD directed that these plans should include, by fiscal year, the functions and numbers of positions to be competed. Perceived Efforts to Bypass Circular A-76 and Related Legislation Difficult to Identify The Committee questioned whether Defense components may have outsourced some activities, possibly even some involving inherently governmental functions, without following the procedures of OMB Circular A-76 or meeting the 10 U.S.C. 2461 requirements for congressional notification. Other than specific cases brought to our attention, procurement and commercial activities data systems do not identify the extent to which Defense components may be outsourcing functions without complying with these procedures or requirements. Circular A-76 does not apply to inherently governmental activities. Defense components are currently reviewing to what extent the functions performed by DOD personnel are inherently governmental or otherwise exempted from A-76 competitive sourcing. DOD expects to report the results to the Congress early this year, but the results were not available for our review when we completed our work. We have been asked to review two reengineering cases, one in the Army and one in the Air Force, in which affected parties expressed the belief that Circular A-76 procedures and 10 U.S.C. 2461 requirements should have been followed. We are currently studying these cases and expect to report on them in the near future. Conclusions The costs and savings associated with Defense Reform Initiatives incorporated in DOD’s 1999-2003 FYDP include partial costs and savings from competitive sourcing and additional BRAC initiatives. While savings are expected from other initiatives, DOD has not required the services to calculate the specific savings to be obtained from them. 
Likewise, while personnel reductions are included in the FYDP and some are expected to result from DRIs, DOD has not required the components to link any personnel reductions with specific DRI elements. Also, questions exist about the precision of savings expected from BRAC and competitive sourcing. DOD assumed there would be additional base closures; however, the required legislative authorization has not been given. Further, the BRAC savings estimate, should future rounds occur, has limitations in terms of projecting short-term savings that might be realized. Competitive sourcing savings incorporated in the 1999-2003 FYDP were determined using broad estimates based on prior competitive sourcing experience, but they were not linked to specific positions and functions currently under study or planned for study at specific locations. There is no systematic way to identify whether components outsource functions without following the requirements of OMB Circular A-76 or the procedures of 10 U.S.C. 2461. Such cases can be identified only when they are specifically raised. We are currently reviewing two such allegations and expect to report on them individually in the near future. We recently recommended that the Secretary of Defense require Defense components to assess whether available resources are sufficient to execute the numbers of planned competitions within the envisioned time frames and make the adjustments needed to ensure adequate program execution. We also recommended that the Secretary require Defense components to reexamine and adjust competitive sourcing study targets, milestones, expected net short-term savings, and planned operating budget reductions as necessary. Accordingly, we are not making additional recommendations in this report. Agency Comments and Our Evaluation We requested comments on a draft of this report from the Secretary of Defense. 
On February 4, 1999, Department and service representatives from their respective competitive sourcing offices provided us with the following comments on the draft. The representatives generally concurred with the information presented in the report. DOD officials reiterated their previously stated position that they have developed an aggressive competitive sourcing program by planning to compete nearly 229,000 positions by fiscal year 2005. They acknowledged that their program has met a number of challenges; however, they believe none of these challenges are insurmountable. They stated that through the program and budget review process, the Department reviews the competitive sourcing study targets, milestones, and objectives of the program to measure advancement toward its goals and that adjustments are made to the program as necessary. Further, they stated that several important improvement and oversight tools are being worked into the competitive sourcing program during calendar year 1999 that will address our concerns. More specifically, according to these officials, the DOD Competitive Sourcing Master Plan will, among other things, identify by fiscal year the functions and number of component positions to be competed by fiscal year 2005. Also, they stated each component is undertaking a series of program improvement initiatives that includes (1) identifying best practices, (2) assisting installation/activity execution, (3) developing internal communications and training, and (4) improving management information systems. DOD officials also stated that the components have not completed enough studies, thus far, to establish a baseline that would necessitate the reevaluation of their milestones and objectives and that as more studies are conducted, they will be able to better refine and adjust their study savings objectives. 
While we support DOD’s efforts to institute more comprehensive oversight tools and program improvement initiatives, we did not review any of these efforts because they have not yet been fully implemented, and we are therefore not in a position to comment on them. However, as we previously reported, we continue to believe that DOD needs to reassess the competitive sourcing study targets, milestones, expected short-term savings, and planned operating budget reductions now. The issues involve more than the number of competitions completed; they also involve the extent to which the planned announcements of competitions have been made and whether there are sufficient resources to complete them. This is of particular concern because of the large number of studies planned for announcement in fiscal years 1998 and 1999 and the delays encountered in getting the fiscal year 1998 studies under way. If similar delays are encountered in fiscal year 1999, they could seriously affect future program execution and DOD’s ability to achieve results in a timely manner. In addition, officials reiterated the Department’s disagreement with our statement that the precision of its future base closure costs was limited and that average net costs of future BRAC rounds will be higher than DOD estimated using the cost experience of previous rounds. As we previously reported, our intent was to suggest that there are reasons to expect greater costs to close bases during any future implementation period than during the previous BRAC rounds because many bases with lower implementation costs and quicker offsets of closure costs have already been realigned or closed. Thus, the higher costs likely to be incurred in the future could reduce the net savings achieved during the implementation period. Nevertheless, we believe that future BRAC rounds can still result in significant savings. DOD also provided technical comments on our report, which we have incorporated as appropriate. 
Scope and Methodology To determine the savings included in the 1999-2003 FYDP that were the result of DRIs, we reviewed budget documents and discussed the issue with representatives from the DOD Comptroller’s office and the Army, the Navy, and the Air Force. We also drew on other work that we had underway or completed relating to competitive sourcing and the DRI. To determine the basis used to project competitive sourcing savings and personnel reductions in the 1999-2003 FYDP and whether they were based on studies of specific functions, we reviewed competitive sourcing budget submissions and the assumptions underlying the savings calculations. We also held discussions with service officials responsible for budget formulation and competitive sourcing program management. Further, we drew on work we had previously performed to evaluate DOD’s competitive sourcing plans and programs, and to review the results of recently completed competitions. To determine whether Defense components outsourced inherently governmental functions without allowing civilian employees to compete or did not meet the requirements of 10 U.S.C. 2461, we reviewed pertinent laws and other directives and discussed the issue with cognizant service officials. We conducted our review from November 1998 to January 1999 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen of the Senate Committees on Armed Services and on Appropriations and the House Committee on Appropriations; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested congressional committees. Copies will be made available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II. 
The A-76 Process In general, the A-76 process consists of six key activities: (1) developing a performance work statement and quality assurance surveillance plan; (2) conducting a management study to determine the government's most efficient organization (MEO); (3) developing an in-house government cost estimate for the MEO; (4) issuing a Request for Proposals or Invitation for Bids; (5) evaluating the proposals or bids and comparing the in-house estimate with a private-sector offer or interservice support agreement and selecting the winner of the cost comparison; and (6) addressing any appeals submitted under the administrative appeals process, which is designed to ensure that all costs are fair, accurate, and calculated in the manner prescribed by the A-76 handbook. Figure I.1 shows an overview of the process. The solid lines indicate the process used when the government issues an Invitation for Bids, requesting firm bids on the cost of performing a commercial activity. This type of process is normally used for more routine commercial activities, such as grass-cutting or cafeteria operations, where the work process and requirements are well defined. The dotted lines indicate the additional steps that take place when the government wants to pursue a negotiated, "best value" procurement. While it may not be appropriate for use in all cases, this type of process is often used when the commercial activity involves high levels of complexity, expertise, and risk. The circular requires the government to develop a performance work statement. This statement, which is incorporated into either the Invitation for Bids or Request for Proposals, serves as the basis for both government estimates and private sector offers. 
If the Invitation for Bids process is used, each private sector company develops and submits a bid, giving its firm price for performing the commercial activity. While this process is taking place, the government activity performs a management study to determine the most efficient and effective way of performing the activity with in-house staff. Based on this "most efficient organization," the government develops a cost estimate and submits it to the selecting authority. The selecting authority concurrently opens the government's estimate along with the bids of all private sector firms. According to the Office of Management and Budget’s (OMB) A-76 guidance, the government's in-house estimate wins the competition unless the private sector's offer meets a threshold of savings that is at least 10 percent of direct personnel costs or $10 million over the performance period. This minimum cost differential was established by OMB to ensure that the government would not contract out for marginal estimated savings. If the Request for Proposals--best value process--is used, the Federal Procurement Regulation and the A-76 supplemental handbook require several additional steps. The private sector offerors submit proposals that often include a technical performance proposal and a price. The government prepares an in-house management plan and cost estimate based strictly on the performance work statement. On the other hand, private sector proposals can offer a higher level of performance or service. The government's selection authority reviews the private sector proposals to determine which one represents the best overall value to the government based on such considerations as (1) higher performance levels, (2) lower proposal risk, (3) better past performance, and (4) cost to do the work. After the completion of this analysis, the selection authority prepares a written justification supporting its decision. 
This includes the basis for selecting a contractor other than the one that offered the lowest price to the government. Next, the authority evaluates the government's offer and determines whether it can achieve the same level of performance and quality as the selected private sector proposal. If not, the government must then make changes to meet the performance standards accepted by the authority. This ensures that the in-house cost estimate is based upon the same scope of work and performance levels as the best value private sector offer. After determining that the offers are based on the same level of performance, the cost estimates are compared. As with the Invitation for Bids process, the work will remain in-house unless the private offer is (1) 10 percent less in direct personnel costs or (2) $10 million less over the performance period. Participants in the process--for either the Invitation for Bids or Request for Proposals process--may appeal the selection authority's decision if they believe the costs submitted by one or more of the participants were not fair, accurate, or calculated in the manner prescribed by the A-76 handbook. Major Contributors to This Report National Security and International Affairs Division, Washington, D.C. Chicago Field Office Neal H. Gottlieb, Evaluator-in-Charge 
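The minimum cost differential that governs both comparison processes can be sketched as a simple decision rule. This is an illustration only: the dollar inputs are hypothetical, and the 10-percent/$10 million test is read here as the lesser of the two amounts, an interpretation consistent with the handbook's conversion differential:

```python
def private_sector_wins(in_house_cost, private_offer, direct_personnel_costs):
    """Apply the minimum conversion differential: the private offer must
    undercut the in-house estimate by at least the lesser of 10 percent
    of direct personnel costs or $10 million over the performance period."""
    savings = in_house_cost - private_offer
    threshold = min(0.10 * direct_personnel_costs, 10_000_000)
    return savings >= threshold

# Hypothetical $50M in-house estimate with $30M in direct personnel
# costs: the differential is min($3M, $10M) = $3M.
result_a = private_sector_wins(50_000_000, 46_000_000, 30_000_000)  # True
result_b = private_sector_wins(50_000_000, 48_000_000, 30_000_000)  # False
```

In the second case the private offer saves only $2 million, short of the $3 million differential, so the work would remain in-house despite a nominally cheaper bid, which is precisely the "marginal estimated savings" outcome the differential was designed to prevent.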
Pursuant to a legislative requirement, GAO provided information on the Department of Defense's (DOD) Future Years Defense Program (FYDP), focusing on: (1) whether savings in DOD's fiscal year (FY) 1999-2003 FYDP were the result of DOD's Defense Reform Initiatives (DRI); (2) the extent to which savings and personnel reductions from competitive sourcing in the 1999-2003 FYDP were based on ongoing or planned studies of functions specifically identified under the Office of Management and Budget Circular A-76, and what percentage of the current costs of performing those functions the projected savings from these studies represented; and (3) whether DOD components outsourced activities that included inherently governmental functions, without allowing civilian employees to compete under Circular A-76 procedures, or without following the study and notification requirements of 10 U.S.C. 2461. GAO noted that: (1) DOD expects savings from individual DRIs, but has not incorporated specific savings in the 1999-2003 FYDP from these initiatives, except in the areas of competitive sourcing and estimates relating to future base realignment and closure (BRAC) decisions; (2) DOD's 1999-2003 FYDP incorporated $6.2 billion of estimated savings from competitive sourcing between FY 1997 and 2003, but these estimated savings do not fully account for up-front investment costs, which could reduce the amount of actual savings in the short term; (3) the FYDP does provide a fuller estimate of the impact of investment costs associated with BRACs; (4) while DOD has requested additional BRAC rounds, Congress has not authorized them; (5) the Office of the Secretary of Defense expects DRIs to reduce personnel requirements but has not required the services to link specific reductions with individual initiatives; (6) savings from competitive sourcing reflected in the 1999 FYDP were not linked to specific functions under study or targeted for future studies; (7) in addition, DOD does not yet 
have the systems in place that can provide reliable cost information needed to precisely identify savings; (8) consequently, it is not feasible to accurately identify the current costs of functions to be studied or the potential savings as a percentage of these costs; (9) according to DOD, savings estimates incorporated in the FYDP represented broad projections based on the numbers of positions expected to be studied and historic savings data; (10) GAO's work has shown that historic savings estimates may have important limitations and may not accurately indicate likely current and future savings; (11) study plans of most DOD components have evolved over time, but in many cases they have not linked positions to be studied to specific functions and locations; (12) firm savings estimates probably will not be possible until individual studies are completed; (13) even then, these estimates would be subject to change; (14) procurement and commercial activities data systems do not identify the extent to which DOD components may be outsourcing functions without complying with Circular A-76 procedures or 10 U.S.C. 2461 congressional reporting requirements; and (15) such cases can be identified only when they are specifically raised by affected parties.
Background In fiscal year 2003, CMS assumed responsibility for estimating the national Medicare error rate, a responsibility that had previously been held by HHS OIG. OIG began estimating the national Medicare error rate in fiscal year 1996, and continued doing so for each subsequent fiscal year through 2002. The transfer of responsibilities for estimating the national Medicare error rate to CMS coincided with the implementation of the Improper Payments Information Act of 2002 (IPIA). The IPIA requires federal agencies to estimate and report annually on the extent of erroneous payments in their programs and activities. The IPIA defines an improper payment as any payment that should not have been made or that was made in an incorrect amount, including both under- and overpayments. All agencies that identify a program as susceptible to significant improper payments, defined by guidance from the Office of Management and Budget (OMB) in 2003 as exceeding both 2.5 percent of total program payments and $10 million, are required to annually report to Congress and the President an estimate of improper payments and report on corrective actions. In addition to estimating the national Medicare error rate for purposes of compliance with the IPIA, CMS also began producing contractor-specific error rate estimates beginning in fiscal year 2003 to identify the underlying causes of errors and to adjust action plans for carriers, DMERCs, FIs, and QIOs. To produce these contractor-specific error rate estimates for fiscal year 2004, CMS sampled approximately 160,000 claims. The contractor-specific error rate information was then aggregated by the four contractor types (carrier, DMERC, FI, and QIO), which were ultimately combined to estimate the national Medicare error rate. Under the methodology previously used by OIG to estimate the national Medicare error rate, 6,000 claims were sampled.
While the sample size used by OIG was sufficient to estimate the national Medicare error rate, it was not sufficient to reliably estimate the contractor-specific error rates. Additionally, the increased sample size improved precision of the national Medicare error rate estimate. CMS Programs to Monitor the Payment Accuracy of Medicare FFS Claims The objective of the CERT Program and the HPMP is to measure the degree to which CMS, through its contractors, is accurately paying claims. Through the CERT Program, CMS monitors the accuracy of Medicare FFS claims that are paid by carriers, DMERCs, and FIs. In fiscal year 2004, the Medicare error rates by contractor type as estimated through the CERT Program were 10.7 percent for the carrier contractor type, 11.1 percent for the DMERC contractor type, and 15.8 percent for the FI contractor type. (See table 1.) Through the HPMP, CMS monitors the accuracy of paid Medicare FFS claims for acute care inpatient hospital stays—generally those that are covered under the prospective payment system (PPS). For fiscal year 2004, the Medicare error rate for the QIO contractor type, as estimated through the HPMP, was 3.6 percent. (See table 1.) CERT Program To estimate contractor-specific Medicare FFS error rates for the CERT Program, CMS reviews a sample of claims from each of the applicable contractors, which included 25 carriers, 4 DMERCs, and 31 FIs for the fiscal year 2004 error rates. These error rates are then aggregated by contractor type. (See fig. 1.) For fiscal year 2004, CMS contracted with AdvanceMed to administer the CERT Program. AdvanceMed sampled approximately 120,000 claims submitted from January 1, 2003, through December 31, 2003, to estimate the fiscal year 2004 contractor-specific and contractor-type error rates for the CERT Program. 
For each of the approximately 120,000 sampled claims, AdvanceMed requested the medical records from the provider that rendered the service or from the contractor that processed the related claim, if the contractor previously performed a medical review on the claim. If a provider did not respond to the initial request for medical records after 19 days, AdvanceMed initiated a series of follow-up procedures in an attempt to obtain the information. The follow-up procedures with nonresponding providers for fiscal year 2004 included three written letters and three contacts by telephone. Additionally, in fiscal year 2004, OIG followed up directly with nonresponders on claims over a certain dollar amount. If medical records were not received within 55 days of the initial request, the entire amount of the claim was classified by AdvanceMed as an overpayment error. When medical records were received from the provider or from the contractor, CERT medical review staff reviewed the claim (which billed for the services provided) and the supporting medical records (which detailed the diagnosis and services provided) to assess whether the claim followed Medicare’s payment rules and national and local coverage decisions. Claims that did not follow these rules were classified by AdvanceMed as being in error. Providers whose claims were reviewed were allowed to appeal these claims, and if the error determination for a claim was overturned through the appeals process, AdvanceMed adjusted the error rate accordingly. For the fiscal year 2004 error rate, AdvanceMed notified individual carriers, DMERCs, and FIs of their respective payment errors. For the HPMP, CMS analyzes a sample of claims across QIOs to estimate Medicare error rates by state, because QIOs are organizations with state-based service areas. CMS estimated the QIO contractor-type error rate by aggregating the QIO error rate estimates for each of the 50 states, the District of Columbia, and Puerto Rico. (See fig. 2.)
Through the HPMP, CMS sampled approximately 40,000 claims for acute care inpatient hospital discharges that occurred from July 1, 2002, through June 30, 2003, to estimate the fiscal year 2004 state-specific and contractor-type error rates for QIOs. For fiscal year 2004, CMS contracted with two organizations known as Clinical Data Abstraction Centers (CDAC)—AdvanceMed and DynKePRO—that were responsible for requesting medical records from providers for each of the approximately 40,000 sampled claims. Each CDAC was responsible for reviewing the sampled claims, which were assigned on the basis of the geographic location where the discharge occurred. Upon receipt of the medical records, CDAC admission necessity reviewers screened the related claims for the appropriateness of the hospitalization and, with the exception of claims from Maryland, coding specialists independently recoded diagnosis-related groups (DRG) based on the records submitted. Because Maryland does not use DRG coding, nonphysician reviewers screened claims from Maryland to determine whether the length of the acute care inpatient hospital stay was appropriate. Claims that failed the screening process, including those where the admission was determined to be unnecessary or where an inappropriate DRG code was used, were forwarded to the QIO responsible for the state where the discharge occurred for further review. Records not received by the CDACs within 30 days of the request for information were “canceled” and referred to the QIO to be processed as overpayment errors caused by nonresponse. The QIO referred these claims to the FI responsible for paying the claim for the necessary payment adjustments. At the QIO, claims forwarded from the CDACs underwent further review, primarily medical necessity admission reviews and DRG validations. Determinations of error were made by QIO physician reviewers.
Providers whose claims were reviewed were given the opportunity to provide comments or discuss the case and pursue additional review, which could result in an appeal to an administrative law judge. After the matter was resolved, resulting in a determination that a provider was either underpaid or overpaid, the QIO forwarded the claim to the FI for payment adjustment. Maryland is the only state that does not use the PPS system for acute care inpatient hospitals. Maryland instead has an alternative payment system, known as an all-payer system, in which the state decides each hospital’s level of reimbursement and requires that all payers be charged the same rate for the same service. Medicare and Medicaid pay the state-approved rates. Claims from Maryland with length of stay errors are considered medically unnecessary services. Length of stay reviews identified cases of potential delayed discharge. For example, the patient was medically stable, and continued hospitalization was unnecessary. Estimation of the National Medicare FFS Error Rate CMS estimated the national Medicare FFS error rate by combining the three contractor-type error rates (carrier, DMERC, and FI) from the CERT Program and the one contractor-type error rate (QIO) from the HPMP. (See fig. 3.) Medicare FFS claims that were paid in error as identified by the CERT Program and the HPMP for the fiscal year 2004 error rates were sorted into one of five categories of error: Insufficient documentation: Provider did not submit sufficient documentation to support that the services billed were actually provided. Nonresponse: Provider did not submit any documentation to support that the services billed were actually provided. Medically unnecessary services: Provider submitted sufficient documentation, but the services that were billed were deemed not medically necessary or the setting or level of care was deemed inappropriate. 
Incorrect coding: Provider submitted documentation that supported a different billing code that was associated with a lower or higher payment than that submitted for the services billed. Other: Provider submitted documentation, but the services billed did not comply with Medicare’s benefit or other billing requirements. See table 2 for the national Medicare FFS error rate by category of error for fiscal year 2004. As reported in CMS’s fiscal year 2004 Medicare error rate report, the agency planned to use the error rates to help determine the underlying reasons for claim errors, such as incorrect coding or nonresponse, and implement corrective action plans for carriers, DMERCs, FIs, and QIOs. Draft statements of work, dated February and April 2005, for carriers, DMERCs, and FIs set goals for contractors to achieve a paid claims error rate of less than a certain percentage, to be determined by CMS. According to the standards for minimum performance on QIO statements of work that ended in 2005 for some QIOs and 2006 for other QIOs, QIOs are evaluated on 12 tasks, one of which is the HPMP. QIOs have to meet the performance criteria standards on 10 tasks set forth by CMS to be eligible for a noncompetitive contract renewal. CMS is using the error rates in the context of the agency’s current effort to significantly reform its contracting for the payment of Medicare claims. By July 2009, CMS plans to reduce the total number of contractors responsible for paying Medicare claims to 23 contractors, which the agency refers to as Medicare administrative contractors (MAC). CMS also plans to institute performance incentives in the new contracts, which will be based on a number of different factors, including the Medicare error rates.
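The combination of the four contractor-type error rates into a single national figure can be sketched as a payment-weighted average. This is an illustrative reading, since the report does not spell out CMS's exact weighting; the payment totals below are invented, and only the fiscal year 2004 rates come from the text.

```python
# Hypothetical sketch: combining contractor-type error rates into a national
# rate. Assumes each contractor-type rate is weighted by that type's share of
# total FFS dollars paid, the natural way to combine ratio estimates.

def national_error_rate(rates_and_payments):
    """rates_and_payments: list of (error_rate, total_dollars_paid) tuples,
    one per contractor type (carrier, DMERC, FI, QIO)."""
    total_paid = sum(paid for _, paid in rates_and_payments)
    # Dollars paid in error per type = rate * dollars paid; sum, then re-divide.
    error_dollars = sum(rate * paid for rate, paid in rates_and_payments)
    return error_dollars / total_paid

# Fiscal year 2004 rates from the report; payment totals are invented.
types = [(0.107, 80e9),   # carrier
         (0.111, 8e9),    # DMERC
         (0.158, 90e9),   # FI
         (0.036, 100e9)]  # QIO (acute care inpatient PPS)
rate = national_error_rate(types)
```

Because the result is a weighted average, the national rate always falls between the lowest and highest contractor-type rates.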
According to CMS’s report to Congress on Medicare contracting reform, CMS believes that the consolidation of Medicare contractors and the integration of processing for Medicare claims will lead to a reduced Medicare error rate. CMS Methodology Adequate for Estimating the Error Rates in the CERT Program The methodology used by CMS in the CERT Program to estimate error rates by contractor type (carrier, DMERC, and FI) in fiscal year 2004 was adequate. We found that the sample size and the use of systematic sampling with a random start were adequate to reliably estimate the Medicare error rates by contractor type. The CERT Program also had adequate processes in place to collect medical records and to accurately identify and categorize payment errors. The statistical methods that CMS used to estimate the contractor-type error rates were valid. Sampling Methods The sample size that CMS used in the CERT Program, approximately 120,000 claims, was sufficiently large to produce reliable estimates of the fiscal year 2004 Medicare error rates by contractor type (carrier, DMERC, and FI). CMS selected 167 claims each month on a daily basis from each of the 60 contractors, including 25 carriers, 4 DMERCs, and 31 FIs. This sample generated error rate estimates by contractor type within acceptable statistical standards, such as relative precision of no greater than 15 percent. Specifically, the error rate for the carrier contractor type was 10.7 percent with a relative precision of 3.7 percent, the error rate for the DMERC contractor type was 11.1 percent with a relative precision of 13.5 percent, and the error rate for the FI contractor type was 15.7 percent with a relative precision of 4.5 percent. Further, we found that the sampling methods were adequate because CMS used a systematic sample with a random start. Sampling methods that employ a random start are designed to ensure that the sample selected is representative of the population from which it is drawn. 
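Systematic sampling with a random start, the selection method described above, can be sketched as follows. The frame size is invented and the 167-claims figure mirrors the monthly CERT draw, but this is a simplified illustration, not CMS's actual selection program.

```python
import random

# Minimal sketch of systematic sampling with a random start: pick every k-th
# unit from an ordered frame, beginning at a random position within the
# first interval, so every unit has an equal chance of selection.

def systematic_sample(frame, n):
    """Select n units from an ordered frame at a fixed interval,
    beginning at a random start within the first interval."""
    k = len(frame) // n          # sampling interval
    start = random.randrange(k)  # random start within the first interval
    return [frame[start + i * k] for i in range(n)]

claims = list(range(100_000))            # stand-in for a contractor's claim frame
sample = systematic_sample(claims, 167)  # 167 claims, as in the monthly CERT draw
```

A systematic sample preserves the frame's ordering, so comparing the sampled claims to the population (as CMS did by category of covered service) is a sensible representativeness check.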
We reviewed CERT Program documentation, which described the use of a systematic sample with a random start. The OIG contractor reviewed the computer program used for the CERT Program sample selection and verified that the claims were selected according to the documentation. CMS officials told us that the CERT Program conducts tests to compare the sampled claims to the population of claims. For example, CMS compared the percentage of claims sampled in each category of Medicare-covered service to the percentage of claims in the population by category of Medicare-covered service. CMS provided us with an example of this test for one contractor’s claims from January 2003 through June 2003. While the relative precision of the fiscal year 2004 error rate estimates by contractor type for the CERT Program was within acceptable statistical standards of no greater than 15 percent, the relative precision of half of the contractor-specific error rate estimates was not. (See app. II for contractor-specific error rate information, including the estimates and corresponding relative precision, for carriers, DMERCs, and FIs.) Thirty of the 60 contractor-specific error rates had relative precision that was not within acceptable statistical standards. Additionally, the relative precision of the contractor-specific error rates showed wide variation within each contractor type. Relative precision among carriers ranged from 8.9 percent to 17.0 percent; among DMERCs, relative precision ranged from 12.3 percent to 20.7 percent; and among FIs, relative precision ranged from 10.3 percent to 42.5 percent. As demonstrated by the range in relative precision among FIs, for example, the error rate estimate for one FI was nearly four times more reliable than the error rate estimate for another. The variation in relative precision among the contractor-specific error rate estimates was due, in part, to the sampling method CMS used for the CERT Program.
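Relative precision, as used throughout this discussion, is read here as the half-width of the 95 percent confidence interval divided by the point estimate, which is GAO's customary usage; the report does not restate the formula, so treat this sketch as an assumption.

```python
# Assumed definition: relative precision = 95 percent confidence-interval
# half-width divided by the point estimate. Smaller values mean a more
# reliable estimate.

def relative_precision(estimate, standard_error, z=1.96):
    return (z * standard_error) / estimate

# Working backward from the carrier figures in the report (10.7 percent
# error rate, 3.7 percent relative precision) gives an implied standard
# error of roughly 0.2 percentage points.
implied_se = 0.037 * 0.107 / 1.96
```

Under this reading, an error rate with 42.5 percent relative precision (the worst FI) carries a confidence interval nearly half as wide as the estimate itself.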
CMS took an equal sample size from each contractor despite the fact that individual contractors accounted for varied amounts of Medicare claim volumes and total payments. For example, the claim volume for carriers in 2003 ranged from a minimum of 5.3 million claims to a maximum of 206 million claims; total payments for carriers in 2003 ranged from a minimum of $168 million to about $6.7 billion. CMS officials told us that they plan to reallocate the CERT Program sample at the contractor level by increasing the sample size for those contractors that are not reaching CMS’s targeted precision and by decreasing the sample size for those contractors that are reaching targeted precision and achieving low error rates. In September 2005, CMS officials reported that this change to the methodology is expected to be implemented for the fiscal year 2007 error rate estimation, which will be based on claims processed in parts of 2006 and 2007. We support CMS’s planned changes to its sampling methodology. We believe that reallocation of the sample as planned by CMS will improve the relative precision of these estimates. If future samples were based on the volume of claims or total payments of each contractor and the relative precision of the contractor-specific error rate rather than on the current basis of an equal allocation across contractors, relative precision would likely be improved for the contractor-specific error rates of those targeted contractors that were allocated a larger sample. This is because relative precision improves with increased sample size. There would also likely be decreased variation in relative precision across all contractor-specific error rates. These results could be achieved without increasing the overall sample size for the CERT Program. Medical Record Collection Process Based on our review of oversight work conducted by OIG, we found that the process CMS used to collect medical records from providers for the CERT Program was adequate. 
Staff of AdvanceMed, the CMS contractor responsible for administering the CERT Program, were responsible for requesting medical records for each of the approximately 120,000 sampled claims used to estimate the fiscal year 2004 error rates. According to an OIG review of CMS’s corrective actions to improve nonresponse in the CERT Program for fiscal year 2004, AdvanceMed conducted a timely and systematic follow-up with providers that did not respond to initial requests for medical records. For the medical records collection process for the fiscal year 2004 error rates, CMS implemented corrective actions in the CERT Program to address the factors associated with the high rate of nonresponse experienced during the medical records collection process for the prior fiscal year. According to the CMS fiscal year 2003 error rate report, for example, the agency found that some nonresponse in fiscal year 2003 was due to providers’ lack of familiarity with AdvanceMed. In previous years when OIG had responsibility for estimating the Medicare error rate, OIG requested medical records directly from providers; providers were familiar with OIG and understood the importance of complying with the requests. However, when the responsibility for estimating the Medicare error rate was transferred to CMS, many providers were unfamiliar with AdvanceMed and may have been reluctant to submit medical records to an unknown company. Another factor that caused provider nonresponse in fiscal year 2003, according to the CMS report, was providers’ confusion about the submission of medical records within the constraints of the privacy regulations issued by HHS under the Health Insurance Portability and Accountability Act of 1996, which limit the use and release of individually identifiable health information. According to the CMS report, CMS found that providers were sometimes unaware that sending medical records to the CERT Program contractor was permissible under the regulations. 
As reported in the OIG review cited previously, CMS implemented corrective actions that increased provider compliance with medical record requests in fiscal year 2004. According to the OIG report, CMS conducted educational efforts to clarify the role of AdvanceMed. Additionally, OIG further reported that CMS took action to address providers’ concerns about compliance with the privacy regulations by revising its request letters to providers to highlight AdvanceMed’s authorization, acting on CMS’s behalf, to obtain medical records as requested. OIG told us that CMS instructed carriers, DMERCs, and FIs to refer certain claims for nonresponding providers to OIG for follow-up. These improvements in the process used to collect medical records in the CERT Program helped reduce nonresponse. According to information provided to us by CMS, the percentage of error caused by nonresponse in the CERT Program decreased from 61 percent for fiscal year 2003 to 34 percent in fiscal year 2004. According to CMS’s fiscal year 2005 error rate report, the agency continued several corrective actions to address nonresponse for sampled claims for the fiscal year 2005 error rates. Further, beginning with claims sampled to estimate the fiscal year 2006 Medicare error rates, CMS transferred the medical record collection duties to a second contractor, Lifecare Management Partners, which the agency refers to as the CERT Program documentation contractor. CMS officials told us that the CERT Program documentation contractor is automating the medical record collection process and eliminating paper copies of documentation. 
Identification and Categorization of Payment Errors Based on our review of OIG’s fiscal year 2004 CERT Program evaluation, we concluded that the processes used in the CERT Program to identify and categorize payment errors for fiscal year 2004 were adequate because the medical record reviews were performed appropriately and the CERT Program staff conducting the reviews were adequately trained and qualified. Staff of the CERT Program contractor, AdvanceMed, reviewed the medical records to verify that claims were processed according to Medicare payment rules; if not, a claim was found to be in error and assigned to one of five categories of error (insufficient documentation, nonresponse, medically unnecessary, incorrect coding, or other). We reviewed work conducted by OIG that found AdvanceMed, the CMS contractor responsible for administering the CERT Program, had appropriate controls in place to ensure that the medical record reviews were performed in accordance with established CERT Program procedures. We also reviewed work by OIG, which examined the educational and training requirements for medical record reviewers as established in the CERT Program and assessed selected training files for selected medical record reviewers. OIG officials told us that they found these selected CERT Program medical record reviewers to be adequately trained and qualified. OIG found that AdvanceMed did not complete all required quality assurance reviews within the designated time frame. CMS told OIG that it planned to reduce AdvanceMed’s workload. AdvanceMed conducts quality assurance reviews on a sample of medically reviewed claims to validate the initial reviewer’s decision on whether a claim was paid in error. OIG found that for the fiscal year 2004 CERT Program, AdvanceMed completed 984 of the required 2,587 quality assurance reviews by the required date. 
To determine whether these quality assurance reviews ensured the reliability of the CERT Program claims review process, OIG randomly sampled 45 of the 2,587 claims selected for quality assurance reviews. Of these 45 claims, AdvanceMed had completed a quality assurance review on 5 claims. OIG reported that the results of the 5 quality assurance reviews confirmed the results of the initial medical record reviews. Further, OIG reported that AdvanceMed stated that a backlog of medical reviews prevented the completion of the required quality assurance reviews within the prescribed time frame. In response to the OIG report on the fiscal year 2004 CERT Program evaluation, CMS commented that with Lifecare Management Partners assuming responsibilities for medical record collection for the fiscal year 2006 Medicare error rate estimation, AdvanceMed’s workload would be reduced. As a result, CMS commented that this will free up the necessary resources for AdvanceMed to comply with the quality assurance requirements. Further, in its response to the OIG report, CMS commented that both AdvanceMed and Lifecare Management Partners are required to report to the agency on the results of the quality assurance activities conducted. According to OIG’s evaluation of the fiscal year 2005 CERT Program, OIG found that AdvanceMed completed all of the required quality assurance reviews. Statistical Methods We found that the statistical methods used to estimate the error rates and standard errors by contractor type (carrier, DMERC, and FI) for the CERT Program were adequate. Based on our review of the computer programming code that generated the error rate estimates and standard errors by the CERT Program subcontractor responsible for calculating the contractor-type error rates, The Lewin Group, we found that the statistical methods were based on standard statistical principles and were used appropriately. 
For each contractor type, the stratified combined ratio estimation method was used to calculate the error rate by taking the difference between the overpaid dollars and the underpaid dollars divided by the total dollars paid by Medicare for FFS claims of each contractor type. The payment errors from the sample were then extrapolated to the population for each contractor type to estimate total payment errors. Further, The Lewin Group used a standard statistical method to calculate the standard errors of each of the contractor-type error rates. This method is appropriate for obtaining the standard error of an estimate when the stratified combined ratio estimation method is used and is valid for large sample sizes, such as that used for the CERT Program. CMS Methodology Adequate for Estimating the Error Rate in the HPMP We found that the methodology used by CMS was adequate to produce a reliable estimate of the fiscal year 2004 Medicare error rate for the one contractor type (QIO) in the HPMP. We found the methodology adequate because the sample size was large enough to produce a reliable error rate estimate. Additionally, the sample was representative of the population. We also found that the methodology was adequate because the HPMP contractors responsible for collecting the medical records for the sampled claims, as well as for identifying and determining errors, had appropriate controls in place to ensure that established procedures were followed. Further, the statistical method that CMS used to calculate the contractor-type error rate was valid. Sampling Methods The sample size that CMS used for the HPMP, about 40,000 claims, was sufficiently large to produce a reliable estimate of the fiscal year 2004 error rate for the QIO contractor type. Using a systematic sample, CMS selected 62 discharge claims per month for the District of Columbia, Puerto Rico, and each state except Alaska. CMS selected 42 claims per month for Alaska.
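The combined ratio estimator described above for the CERT Program can be sketched as net dollars in error over total dollars paid, with sample totals expanded to the population by stratum weights. The strata and dollar figures below are invented for illustration.

```python
# Minimal sketch of a stratified combined ratio estimator:
# (overpaid dollars - underpaid dollars) / total dollars paid,
# with each stratum's sample totals weighted up to the population.

def combined_ratio_error_rate(strata):
    """strata: list of dicts with sampled 'overpaid', 'underpaid', and 'paid'
    dollar totals plus an expansion 'weight' (population size / sample size)."""
    net_error = sum(s["weight"] * (s["overpaid"] - s["underpaid"]) for s in strata)
    total_paid = sum(s["weight"] * s["paid"] for s in strata)
    return net_error / total_paid

# Two illustrative strata with invented dollar totals and weights.
strata = [
    {"overpaid": 12_000.0, "underpaid": 1_000.0, "paid": 150_000.0, "weight": 500.0},
    {"overpaid":  4_000.0, "underpaid":   500.0, "paid":  60_000.0, "weight": 900.0},
]
rate = combined_ratio_error_rate(strata)
```

Because numerator and denominator are summed across strata before dividing, the estimate is "combined" rather than an average of per-stratum ratios, which keeps it stable when some strata have few sampled dollars.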
The QIO contractor-type error rate was 3.6 percent with a relative precision of 5.6 percent. The relative precision for the QIO contractor-type error rate estimate is within acceptable statistical standards (a relative precision of no greater than 15 percent). For the QIO contractor-type error rate to be a reliable estimate, it was necessary that the sample of discharge claims from which the error rate was estimated be representative of the population from which it was drawn. CMS’s documentation stated that the HPMP used a systematic sample selection process with a random start, which is a generally accepted method of sampling that is designed to ensure that the sample drawn is representative of the population. Our review of the computer programming code that selected the sample, however, found that a random start was not used. To determine whether the HPMP sample was compromised by the lack of a random start and whether it represented the population from which it was drawn, we examined the OIG contractor’s comparison of the June 2003 sample to a re-created version of the June 2003 population file from which the sample was drawn. Based on our review, we found that the HPMP sample was representative of the population from which it was drawn in terms of average dollar amount per claim. While relative precision of the fiscal year 2004 QIO contractor-type error rate estimate was within acceptable statistical standards, relative precision of most of the state-specific QIO error rate estimates was not. (See app. II for state-specific QIO error rate information, including the error rate estimates and corresponding relative precision.) Only three states’ error rate estimates—Kentucky, Massachusetts, and New Mexico— had relative precision of less than 15 percent. Additionally, there was wide variation in relative precision of the state-specific QIO error rate estimates. 
Relative precision of the state-specific QIO error rates ranged from 10.5 percent in Massachusetts to 83.3 percent in Mississippi. The differences in relative precision of these state-specific QIO error rate estimates indicate that the error rate estimate for the QIO that served Massachusetts was eight times more reliable than the error rate estimate for the QIO that served Mississippi. The variation in relative precision was due, in part, to the sampling methods used by CMS for the HPMP. CMS took an equal sample size for each state except Alaska, despite the fact that there was significant variation between states in the overall volume of discharge claims and total payments. The number of discharges per state varied from a low of 15,166 in Wyoming to a high of 825,845 in Florida. Similarly, total dollars paid for acute-care inpatient hospital stays varied from less than $100 million in Wyoming to a high of $7.5 billion in California. Although in February 2006 a CMS official told us the agency has no plans to reallocate the HPMP sample, CMS could adopt a similar sampling strategy as it plans to do for the CERT Program. If future state samples were based on the volume of discharge claims or total payments per state and the relative precision of the state-specific QIO error rates, rather than on the current basis of an equal allocation per state, relative precision would likely be improved for the state-specific QIO error rates in those states that were allocated a larger sample since relative precision improves as sample size increases. There would also likely be decreased variation in relative precision across all state-specific QIO error rates. These results could be achieved without increasing the overall sample size for the HPMP. 
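The reallocation idea discussed above, shifting a fixed overall sample toward higher-volume states, can be sketched as allocation proportional to total payments; relative precision improves roughly with the square root of sample size. State names and payment figures here are illustrative only.

```python
# Sketch of reallocating a fixed overall sample in proportion to each
# state's total payments instead of allocating equally. Payment figures
# are invented for illustration.

def proportional_allocation(payments, total_n):
    """Allocate total_n sample units across states in proportion to payments."""
    total = sum(payments.values())
    return {state: round(total_n * paid / total) for state, paid in payments.items()}

payments = {"WY": 0.1e9, "CA": 7.5e9, "FL": 5.0e9}  # illustrative totals
alloc = proportional_allocation(payments, total_n=2_000)
```

In practice an allocation would also factor in each state's observed error-rate variance (as CMS plans for the CERT Program), but even simple proportional allocation concentrates sample where the dollars are.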
In addition to issues with the wide variation of relative precision of the state-specific QIO error rate estimates, we also found large differences in the average dollar amount per claim between the state-specific samples for some states and the respective state populations. These differences suggest that the samples drawn for more than half of the states were not representative of each state’s population. Based on our examination of the OIG contractor’s comparison of the state samples and the state populations for June 2003, we found that the ratio of the average dollar amount per claim in a state’s sample to the average dollar amount per claim in a state’s population varied from 62 percent in Maryland to 143 percent in Kentucky. Twelve states had a ratio above 110 percent, and 16 states had a ratio below 90 percent. It is still possible for the national HPMP sample to be representative of the national HPMP population even if all of the state-specific samples are not representative of their state populations. The larger size of the HPMP sample overall mitigates the problems identified in the smaller state-specific samples. Medical Record Collection Process Based on our review of oversight work of the HPMP conducted by OIG, we found that the process CMS used for collecting medical records from providers was adequate. OIG selected 46 discharge claims that were sampled for the HPMP to determine if the CDACs, AdvanceMed and DynKePRO, followed established HPMP procedures for obtaining and reviewing medical records to identify payment errors. OIG found that the CDACs generally had appropriate controls in place to ensure that the medical records were obtained and reviewed according to established HPMP procedures. Of the 46 discharge claims reviewed, OIG found that in two instances a required follow-up letter to the provider was not sent due to an error by a substitute CDAC employee. 
However, the medical records for these two discharge claims were obtained within 30 days of the original request, which resulted in no adverse effect on the error rate estimates. Overall, nonresponse for fiscal year 2004 represented approximately 5.1 percent of the total QIO contractor-type error rate of 3.6 percent, or 0.2 percent of all discharge claims reviewed through the HPMP. Provider nonresponse to requests for medical records was not as significant an issue for the HPMP as it was for the CERT Program. According to the CMS report on the fiscal year 2005 error rate, nonresponse was less problematic in the HPMP because of several factors, including the following: (1) providers were more likely to respond to requests from the HPMP since the average claim value was higher than the average claim value in the CERT Program; (2) providers were more familiar with the HPMP than with the CERT Program; and (3) providers were paid the cost of providing medical records by the HPMP, but not by the CERT Program. Identification and Categorization of Payment Errors Based on our review of OIG’s fiscal year 2004 HPMP evaluation, we concluded that the CDACs (AdvanceMed and DynKePRO) generally had processes in place to adequately identify and categorize claims paid in error in the HPMP for fiscal year 2004. OIG officials told us that they found the medical record reviewers, both admission necessity reviewers and DRG coding specialists, at the two CDACs met CMS’s qualifications for these positions. As part of its review of the fiscal year 2004 HPMP, OIG reviewed 46 discharge claims that were part of the sample for estimating the QIO contractor-type error rate. Based on that review, OIG reported that the CDACs generally had appropriate controls in place to ensure that admission necessity and DRG validation reviews were performed in accordance with CMS-established procedures and that the results of those reviews were adequately maintained, updated, and reported.
As part of the internal HPMP quality control process, two activities were conducted regularly to ensure the reliability and accuracy of CDAC reviews both within each CDAC and across the two CDACs. Each CDAC randomly chose 30 claims per month to be reviewed by two of its medical record reviewers for intra-CDAC tests. Each CDAC compared the results of the two medical record reviews to determine the reliability of reviews within the CDACs and reported the results of the comparisons to CMS. The CDACs performed inter-CDAC tests to assess the reliability of the reviews between the two CDACs. For these tests, an additional 30 claims were chosen at random per quarter by each of the CDACs for review by a medical records reviewer at the other CDAC. As part of its evaluation of the fiscal year 2004 HPMP, OIG selected 45 claims that went through the intra-CDAC process and 42 claims that went through the inter-CDAC process to determine if these quality control activities ensured the reliability of the CDAC review process. OIG reported that the quality control reviews were generally operating effectively to ensure the reliability of the review process and the consistency of the error rate determination decisions. From the same evaluation of the fiscal year 2004 HPMP, OIG found that the CMS contractor tasked with calculating the dollar amounts paid in error, Texas Medical Foundation, used a method that produced an amount of dollars in error that in some cases differed from what OIG found to be the amount of dollars in error. For claims identified by a QIO as having errors caused by changes in DRG codes, Texas Medical Foundation used a method that produced different dollar amounts in error than would have been produced if it had used the software that FIs used to pay the original discharge claims. The Texas Medical Foundation calculated a different amount in error for about 76 percent of 200 incorrectly coded claims that OIG reviewed. 
However, OIG reported that the differences did not have a significant effect on the QIO contractor-type error rate estimate. A CMS official told us that the agency has not invested in modifying the software for use by the Texas Medical Foundation for technical and financial reasons. For example, the software requires modifications using a specific programming language for which CMS has limited personnel with the needed expertise. Statistical Methods We verified the statistical methods CMS used to estimate the QIO contractor-type error rate and standard error in the HPMP by reviewing the computer programming code that produced this information. We found that the methods CMS used were adequate because they were based on standard statistical methods and were applied appropriately. To estimate the QIO contractor-type error rate, CMS weighted each state- specific QIO error rate according to that state’s share of the total Medicare FFS payments for acute-care inpatient hospital claims nationwide. This method is referred to as a stratified mean per unit estimation. Like the CERT Program, CMS used a standard statistical method to calculate the standard error of the estimate. In our review of the computer programming code that generated the QIO contractor-type error rate estimate, we found that CMS used annual instead of monthly weights in its estimate of the annual total dollars paid in error. It would have been more appropriate for CMS to have used monthly weights because the HPMP sample was drawn on a monthly, not an annual, basis. However, when we reviewed the OIG contractor’s comparison of the estimate of annual dollars paid in error using annual weights to what the estimate would have been had CMS used monthly weights, we concluded that the use of annual weights did not significantly affect the QIO contractor-type error rate estimate. 
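The stratified mean-per-unit estimation described above can be sketched as follows. The weights, rates, and standard errors here are hypothetical, not CMS figures; the sketch shows only the standard method of weighting each stratum's rate by its payment share and, because the strata are independent, combining the variances as a weighted sum. The same formula, applied to the four contractor-type estimates weighted by payment share, yields the national error rate.

```python
import math

# Sketch of stratified mean-per-unit estimation (hypothetical figures,
# not CMS data). Each stratum's error rate is weighted by its share of
# total payments; independent strata let the variance of the weighted
# estimate be computed as the weighted sum of stratum variances.

# stratum: (share of total payments, error rate, standard error)
states = {
    "A": (0.50, 0.030, 0.004),
    "B": (0.30, 0.045, 0.009),
    "C": (0.20, 0.040, 0.015),
}

def stratified_estimate(strata):
    rate = sum(w * r for w, r, _ in strata.values())
    # Independent strata: variance of the weighted sum is the
    # weighted sum of the variances.
    se = math.sqrt(sum((w * s) ** 2 for w, _, s in strata.values()))
    return rate, se

rate, se = stratified_estimate(states)
rel_precision = 1.96 * se / rate  # 95% CI half-width / estimate
print(round(rate, 4), round(se, 4), round(rel_precision, 3))
```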
A CMS official told us, and provided documentation showing, that beginning with the HPMP’s fiscal year 2005 error rate estimation process, monthly weights are being used. CMS Methodology Adequate for Estimating the National Error Rate CMS appropriately combined the error rates under the CERT Program and the HPMP to estimate the fiscal year 2004 national Medicare error rate. CMS estimated the national Medicare error rate by averaging the error rates of the four contractor types (carrier, DMERC, FI, and QIO), weighted by each contractor type’s share of total Medicare FFS payments. Likewise, CMS calculated the standard error, or precision, of the national error rate based on the standard error of each of the four types of contractors’ error rate estimates, weighted by each contractor type’s proportion of total Medicare FFS payments. The methods CMS used to calculate the national error rate and the standard error were statistically valid, since the units of measurement of the four combined error rates, in this case Medicare claims, were mutually exclusive (independent) among contractor types. Each contractor type consisted of multiple individual contractors. These contractors were independent in that one contractor’s estimated error rate or standard error did not affect the estimates of other contractors, since the claims in the population and in the sample were not overlapping among contractors. Concluding Observations Since assuming responsibility for estimating the national Medicare error rate in fiscal year 2003, CMS has made changes to the methodology, which have provided CMS with more detailed information about errors, thereby allowing the agency to better identify the underlying causes of error and implement corrective action plans to address them. For example, CMS significantly increased the size of the sample used to estimate the Medicare FFS claims paid in error.
The increased sample size allowed the agency to estimate not only the error rate at the national level, but also more detailed error rates at the contractor-type and contractor-specific levels. Further, CMS has made changes in the way it collects medical records from providers in an effort to reduce the rate of error caused by nonresponse and insufficient documentation. These changes may affect the error rate estimates and thus the comparability of the estimates over time. Consequently, users of the error rate information should exercise caution when making year-to-year comparisons. Our work focused on the methodology CMS used to estimate the national Medicare error rate and contractor-type error rates for fiscal year 2004. For these error rates, we found the methodology adequate for that year. Under CMS’s contracting reform initiative, there will be fewer individual contractors (carriers, DMERCs, and FIs). If CMS maintains the same overall sample size, the sample sizes of the remaining individual contractors would be increased. Reliability of the contractor-specific error rate estimates is likely to improve with the larger sample sizes. Until then, the wide variation in reliability of the contractor-specific error rate estimates may preclude meaningful comparisons across individual contractors. Agency Comments We received written comments from HHS (see app. III). In responding to our draft report, HHS noted that we found the CMS methodology adequate for estimating the fiscal year 2004 national Medicare FFS error rate. HHS also noted that CMS is continually committed to refining the processes to estimate, as well as lower, the level of improper payments in the Medicare FFS program. In its comments, HHS noted improvement in the national Medicare error rate from fiscal years 2004 to 2005.
The department attributed the decline in the error rate to marked improvement in the nonresponse (which CMS now calls “no documentation”) and the insufficient documentation error rates. Commenting on the adequacy of the fiscal year 2005 methodology was beyond the scope of our work; however, as we noted in the draft report, changes in the methodology may affect the estimation of the error rates and thus the comparability of these error rates over time. For example, we discussed in the draft report that CMS has made changes in the way it collects medical records from providers in an effort to reduce the rate of error caused by nonresponse and insufficient documentation. These changes primarily affected HHS’s processes for calculating an annual error rate estimate for the Medicare FFS program. This may represent a refinement in the program’s estimation methodology rather than improved accountability over program dollars. The national Medicare error rates for fiscal years 2004 and 2005 provided by HHS in its comments are not comparable to the error rates cited in this report for fiscal years 2004 and 2005. HHS provided gross error rates, which were calculated using gross dollars paid in error. Gross dollars paid in error were calculated by adding dollars paid in error that were due to underpayments to those that were due to overpayments. As noted in the draft report, we reported net error rates. Net error rates were calculated using net dollars paid in error. Net dollars paid in error were calculated by subtracting dollars paid in error that were due to underpayments from those that were due to overpayments. HHS also provided technical comments, which we have addressed as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the HHS Inspector General, the Administrator of CMS, and appropriate congressional committees. We will also provide copies to others upon request. 
In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology We reviewed the following components of the Centers for Medicare & Medicaid Services’ (CMS) methodology for estimating the fiscal year 2004 error rate: Sampling methods, including sample size, sample selection, sample representation, and precision of the estimates. The medical records collection process. Identification and categorization of claims payment error, including the medical record review process and quality assurance reviews. Statistical methods used to estimate the error rates and precision. To conduct our analysis of CMS’s sampling methods, we reviewed work performed by the Department of Health and Human Services (HHS) Office of Inspector General (OIG) contractor that assessed these methods and CMS documentation for the fiscal year 2004 Medicare error rate. For the Comprehensive Error Rate Testing (CERT) Program, we reviewed the program manual, which described the CERT Program sampling methods, as well as CMS’s Medicare error rate reports for fiscal years 2003 and 2004. For the Hospital Payment Monitoring Program (HPMP), we reviewed the program manual and the HPMP computer programming code that generated the sample to verify that the sample was taken in accordance with the procedures outlined in the manual. Additionally, we reviewed the OIG contractor’s comparison of the June 2003 sample and a re-created version of the June 2003 sampling frame, or population, for the HPMP.
It was not possible for the OIG contractor to obtain the exact June 2003 population file because the file is continuously updated and previous versions are not retained. We did not believe it was necessary to compare every month’s sample to the population from which it was drawn because of the large size of the sample (approximately 40,000 discharge claims) and population (approximately 11.5 million discharge claims), and the fact that the sample was drawn in the same manner each month. To conduct our analysis of CMS’s medical record collection and review processes and identification and categorization of payment errors, we relied primarily on reports published by OIG. Since 2003, OIG has conducted annual reviews of the CERT Program and the HPMP as part of its review of work performed for HHS by contractors. These annual reviews examine whether the CERT Program and HPMP contractors have appropriate controls in place to ensure that the medical record reviews and quality assurance reviews were performed in accordance with established procedures. We reviewed OIG’s annual reviews of the CERT Program and the HPMP for fiscal year 2004. Our analysis of provider nonresponse within the CERT Program relied on two OIG studies of CMS’s actions to reduce nonresponse implemented for the CERT Program for fiscal year 2004. For the HPMP, we also reviewed four intra-Clinical Data Abstraction Center (CDAC) reports and two inter-CDAC reports, which were quality assurance reviews intended to assess the consistency of review decisions both within and across CDACs. To conduct our analysis of CMS’s statistical methods, we reviewed the OIG contractor’s computer programming code, which replicated CMS’s estimation of the error rates for carriers, durable medical equipment regional carriers (DMERC), and fiscal intermediaries (FI), as calculated by the CERT Program subcontractor responsible for statistical analysis of the error rates for fiscal year 2004. 
We reviewed CMS’s computer programming code, which calculated the HPMP error rate for quality improvement organizations (QIO). In conducting these reviews of the computer programming codes for both the CERT Program and the HPMP, we verified that each code implemented a methodology that employed standard statistical principles and was applied appropriately. To inform all aspects of our study, we interviewed OIG officials with oversight responsibility for the error rate estimation, OIG contractor staff who conducted the evaluation of the statistical methodology, CMS officials with programmatic responsibilities for the CERT Program and the HPMP, and staff of the CERT Program subcontractor for statistical analysis. We performed our work from April 2005 through March 2006 in accordance with generally accepted government auditing standards. Appendix II: Fiscal Year 2004 Error Rate Information by Contractor Type—Carriers, DMERCs, FIs, and QIOs [Table: for each contractor type, dollars paid in fiscal year 2004 (in dollars), the CMS estimated paid claims error rate (percentage), the CMS estimated standard error (percentage), and the relative precision (percentage); relative precision values shown include 13.5, 4.5, and 23.8 percent.] Carriers are health insurers and pay claims submitted by physicians, diagnostic laboratories and facilities, and ambulance service providers. DMERCs are health insurers and pay claims submitted by durable medical equipment suppliers.
For the fiscal year 2004 error rate, TriCenturion, a program safeguard contractor, was responsible for medical review in one of the four DMERC regions. Program safeguard contractors are Medicare contractors that conduct activities to address or prevent improper payments. As such, it was TriCenturion, not the DMERC, that was responsible for lowering the error rates in its region. FIs are almost exclusively health insurers and pay claims submitted by home health agencies, non-prospective payment system (PPS) hospitals, hospital outpatient departments, skilled nursing facilities, and hospices. PPS is a reimbursement method used by Medicare where the payment is made based on a predetermined rate and is unaffected by the provider’s actual costs. QIOs (formerly known as peer review organizations) are responsible for ascertaining the accuracy of coding and payment of paid Medicare FFS claims for acute care inpatient hospital stays—generally those that are covered by PPS—for Medicare beneficiaries in all 50 states, the District of Columbia, and Puerto Rico. Unlike carriers, DMERCs, and FIs, however, QIOs do not process and pay claims. These activities are conducted by FIs. Appendix III: Comments from the Department of Health and Human Services Appendix IV: GAO Contact and Staff Acknowledgments In addition to the contact named above, Debra Draper, Assistant Director; Lori Achman; Jennie Apter; Dae Park; and Ann Tynan made key contributions to this report.
The Centers for Medicare & Medicaid Services (CMS) estimated that the Medicare program paid approximately $20 billion (net) in error for fee-for-service (FFS) claims in fiscal year 2004. CMS established two programs--the Comprehensive Error Rate Testing (CERT) Program and the Hospital Payment Monitoring Program (HPMP)--to measure the accuracy of claims paid. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 directed GAO to study the adequacy of the methodology that CMS used to estimate the Medicare FFS claims paid in error. GAO reviewed the extent to which CMS's methodology for estimating the fiscal year 2004 error rates was adequate by contractor type for (1) the CERT Program, (2) the HPMP, and (3) the combined national error rate (including both the CERT Program and the HPMP). GAO reviewed relevant CMS documents and reports related to the CERT Program and the HPMP. In addition, GAO reviewed work performed by the Department of Health and Human Services (HHS) Office of Inspector General (OIG) and its contractor that evaluated CMS's fiscal year 2004 statistical methods and other aspects of the error rate estimation process. GAO also conducted interviews with officials from CMS, HHS's OIG, and their contractors. The methodology used by CMS for the CERT Program was adequate to estimate the fiscal year 2004 error rates by contractor type--carrier, durable medical equipment regional carrier (DMERC), and fiscal intermediary (FI). Carriers pay claims submitted by physicians, diagnostic laboratories and facilities, and ambulance service providers. DMERCs pay claims submitted by durable medical equipment suppliers. FIs pay claims submitted by hospitals, home health agencies, hospital outpatient departments, skilled nursing facilities, and hospices. The methodology was adequate because CMS used a large sample--about 120,000 claims--and an appropriate sample selection strategy. 
For these fiscal year 2004 error rate estimates, CMS made improvements in the collection of medical records that supported the sampled claims. These medical records were appropriately reviewed to determine whether there were errors in payment. CMS used valid statistical methods to estimate the fiscal year 2004 error rates for the carrier, DMERC, and FI contractor types. The methodology used by CMS for the HPMP was adequate to estimate the fiscal year 2004 error rate by quality improvement organizations (QIO), which are responsible for ascertaining the accuracy of coding and payment of Medicare FFS paid claims for acute care inpatient hospital stays. CMS's sampling methods were adequate because the agency used a large sample, approximately 40,000 claims, that was representative of the population from which it was drawn in terms of average dollar amount per claim. Also, the HPMP had adequate processes in place to ensure appropriate determinations of error. CMS used valid statistical methods to estimate the fiscal year 2004 error rate for the QIO contractor type. The fiscal year 2004 contractor-type error rate estimates for the CERT Program and the HPMP were appropriately combined to determine the national Medicare error rate through the use of a valid statistical method. CMS estimated the national Medicare error rate by averaging the carrier, DMERC, and FI contractor-type error rates in the CERT Program and the QIO contractor-type error rate in the HPMP, weighted by each contractor type's share of total Medicare FFS payments. In written comments, HHS noted that GAO found CMS's methodology adequate for estimating the fiscal year 2004 national Medicare FFS error rate. HHS also noted that CMS is continually committed to refining the processes to estimate, as well as lower, the level of improper payments in the Medicare FFS program.
Background VA provides health care services to various veteran populations—including an aging veteran population and a growing number of younger veterans returning from the military operations in Afghanistan and Iraq. VA operates approximately 150 hospitals, 130 nursing homes, and 850 outpatient clinics, as well as other facilities, to provide care to veterans. In general, veterans must enroll in VA health care to receive VA’s medical benefits package—a set of services that includes a full range of hospital and outpatient services, prescription drugs, and long-term care services provided in veterans’ own homes and in other locations in the community. VA’s health care budget estimate includes both the total cost of providing VA health care services and estimates of anticipated funding from several sources. These sources include new appropriations, which refer to the appropriations to be provided for the upcoming fiscal year and, in the case of advance appropriations, for the fiscal year after that. For example, VA estimated it needed $54.6 billion in new appropriations for fiscal year 2014 and $55.6 billion in advance appropriations for fiscal year 2015. In addition to new appropriations, sources of funding include resources expected to be available from unobligated balances, as well as the collections and reimbursements that VA anticipates it will receive in the fiscal year. VA’s collections include third-party payments from veterans’ private health care insurance for the treatment of nonservice-connected conditions and veterans’ copayments for outpatient medications. VA’s reimbursements include amounts VA receives for services provided under service agreements with the Department of Defense (DOD). In its budget justification, VA includes estimates related to the following: Ongoing health care services, which include acute care, rehabilitative care, mental health, long-term care, and other health care programs.
Initiatives, which are proposals by the Secretary of VA, the President, or Congress to provide, expand, or create new health care services. Some of the proposed initiatives can be implemented within VA’s existing authority, while other initiatives would require a change in law. Proposed savings, which are changes in the way VA manages its health care system to lower costs, such as changes to its purchasing and contracting strategies. Collections and reimbursements, which are resources VA expects to collect from health insurers of veterans who receive VA care for nonservice-connected conditions and other sources, such as veterans’ copayments, and resources VA expects to receive as reimbursement of services provided to other government agencies or private or nonprofit entities. In addition to new appropriations that VA may receive from Congress as a result of the annual appropriations process, funding may also be available from unobligated balances of multiyear appropriations, which remain available for a fixed period of time in excess of 1 fiscal year. For example, VA’s fiscal year 2013 appropriations provided that about $1.95 billion be available for 2 fiscal years. These funds may be carried over from fiscal year 2013 to fiscal year 2014 if they are not obligated by the end of fiscal year 2013. See Pub. L. No. 113-6, div. E, tit. II, § 226(b), 127 Stat. 198, 407 (2013). Each year, Congress provides funding for VA health care through three appropriations accounts: Medical Services, which funds health care services provided to eligible veterans and beneficiaries in VA’s medical centers, outpatient clinic facilities, contract hospitals, state homes, and outpatient programs on a fee basis. Medical Support and Compliance, which funds the management and administration of VA’s health care system—including financial management, human resources, and logistics.
Medical Facilities, which funds the operation and maintenance of the VA health care system’s capital infrastructure, such as costs associated with NRM, utilities, facility repair, laundry services, and groundskeeping. Advance appropriations for fiscal year 2014 for the three accounts were made in the following proportions: Medical Services at 80 percent, Medical Support and Compliance at 11 percent, and Medical Facilities at 9 percent. In our prior work reviewing the President’s budget requests for VA health care services, we have reported on a variety of problems related to the reliability, transparency, and consistency of VA’s estimates and information included in its congressional budget justifications. In June 2012 we reported that in its fiscal year 2013 budget justification VA was not transparent about the agency’s fiscal year 2013 estimates for initiatives and ongoing health care services as well as VA’s estimate for initiatives in support of the fiscal year 2014 advance appropriations request. We also raised concerns regarding the reliability of VA’s fiscal year 2013 estimate for NRM, which did not address the long-standing pattern in which VA’s NRM spending has exceeded the agency’s estimates. VA concurred with a recommendation we made to improve the transparency of its estimates for initiatives and ongoing health care services, but did not concur with a recommendation related to the transparency of the agency’s initiative estimate in support of the advance appropriations request. VA also concurred with a third recommendation we made to improve the reliability of the agency’s NRM estimates. In September 2012 we also found that VA did not label health care services consistently in its budget justifications so that it was clear what services were being referred to across appropriations accounts. VA agreed with our recommendation to improve the consistency of the labels used for health care services. 
Most recently, in February 2013 we raised concerns about the reliability of VA’s estimates for non-NRM facility-related activities, and VA concurred with our recommendation. VA Expanded the Use of the EHCPM to Develop Its Health Care Budget Estimate and Changed How It Reported Information on Certain Administrative Costs VA Expanded the Use of the EHCPM by Using a Blend of EHCPM Estimates of Care Provided and Current Spending Data for Most Long-Term Care Services VA expanded the use of the EHCPM by using estimates of the amount of care provided—which is known as workload—from the EHCPM to estimate resources needed for 14 long-term care services for fiscal years 2014 and 2015. VA included the 14 long-term care services in the EHCPM, but the agency did not use the estimates of needed resources developed for fiscal years 2014 and 2015 because, according to VA officials, the EHCPM expenditure estimates were determined to be too high to produce reliable estimates of needed resources in light of current expenditure data. As an alternative, the estimates for fiscal years 2014 and 2015 were based on the most current expenditure data available, as VA has done in prior years, and workload estimates from the EHCPM. As a result of this blended approach, VA used the EHCPM, in part or in whole, to develop estimates for 74 health care services that accounted for more than 85 percent of VA’s budget estimate supporting the President’s fiscal year 2014 budget request for VA health care. This represents an increase compared to last year, when VA used the EHCPM to estimate needed resources for 59 health care services, or approximately 78 percent of the agency’s fiscal year 2013 budget estimate. (See fig. 1.) While the EHCPM accounted for a larger proportion of VA’s health care budget that supports the President’s budget request, revisions were made to the estimates developed by the EHCPM and other methods.
As we have previously reported, these revisions resulted from the iterative and multilevel review process in VA and OMB and reflect the policy decisions and more current information, among other things. VA Was Inconsistent in Reporting Certain Administrative Costs among Its Three Appropriations Accounts Another change in VA’s fiscal year 2014 budget justification was how VA reported its estimate for certain administrative costs. In prior budget justifications, VA has reported its estimates for administrative personnel costs under “Administration” and estimates for administrative contracts under “Administrative Contract Services.” VA reported estimates for these administrative costs in each of the three appropriations accounts as well as for “Medical Care,” which reported the total costs for “Administration” and “Administrative Contract Services” across all three accounts. In its fiscal year 2014 budget justification, VA used a new budget category label—“Administrative Personnel”—when reporting estimated costs related to administrative personnel. VA also identified some of the costs reported under the new label by providing examples of the types of positions the agency considers administrative personnel, such as filing clerks, receptionists, police staff, chaplains, and other staff that are necessary for the effective operations of VA medical facilities. However, VA did not consistently use the new “Administrative Personnel” label in its fiscal year 2014 budget justification. VA used the new label when reporting its Medical Care estimate for administrative personnel costs, but not when reporting the agency’s estimates for each of the three appropriations accounts. 
Instead, VA used the label “Administration,” even though the estimates reported under this label—$2.0 billion for Medical Services, $3.5 billion for Medical Support and Compliance, and $366 million for Medical Facilities—represented the same personnel costs as the estimate of $5.9 billion reported for “Administrative Personnel.” (See table 1.) VA officials explained that the use of “Administrative Personnel” was incorrect and that the “Administration” label will be used in future budget justifications. The inconsistency in reporting estimates for these costs may imply that VA used different labels to report estimates for different administrative costs. As such, the costs reported under these labels are unclear to Congress and other users of VA’s budget justification. In addition to inconsistent labeling, VA also was not consistent in its reporting of information on the types of personnel positions included in the agency’s estimates for administrative personnel costs. VA provided information on the types of personnel positions included in the total estimate for administrative personnel costs under the budget category label “Administrative Personnel,” but did not provide similar information for the estimates for each appropriations account labeled under “Administration.” By not providing information on the types of positions included in these estimates, VA was not transparent about how the information that was provided applied to the estimates of administrative personnel costs in each of the appropriations accounts. For example, it is unclear to what extent police staff and chaplains will be funded across the three appropriations accounts.
Further, VA did not provide complete information on the costs included in its estimates for “Administrative Personnel” and “Administrative Contract Services.” Regarding its estimate for “Administrative Personnel,” VA did not disclose that this estimate reflected other costs in addition to those associated with administrative personnel. According to VA officials, these other costs include those associated with administrative training programs and summer employment programs and are relatively small compared to the total estimate for “Administrative Personnel.” Regarding its estimates for “Administrative Contract Services,” VA provided no information in its budget justification on the types of costs—which include contracts for maintenance of information technology and videoconferencing systems, management and professional services, laundry and dry-cleaning services, and janitorial services—reflected in the estimates. By not providing complete information on the costs included in its estimates for “Administrative Personnel” and “Administrative Contract Services,” VA was not transparent about all the costs reflected in these estimates. The lack of transparency regarding the costs included in its estimate of $5.9 billion for administrative personnel and its estimate of $2.3 billion for administrative contracts is inconsistent with the House Appropriations Committee’s recent request that more information on administrative costs be included in VA’s congressional budget justification and results in incomplete information for congressional deliberation.

Increase in the Request Compared to the Earlier, Advance Appropriations Request Reflected Changes VA Made to Supporting Estimates

The President’s fiscal year 2014 budget request for VA health care services was $54.6 billion, about $158 million more than the earlier, advance appropriations request for the same year.
This increase resulted from changes made to the estimate supporting the fiscal year 2014 request compared to the estimate for the advance appropriations request. Specifically, the President’s fiscal year 2014 request reflected an estimate of funding needed for initiatives that increased by $1.021 billion and an estimate for ongoing health care services that decreased by $519 million. The increase in the initiatives estimate was further offset by an estimate of $482 million in proposed savings from operational improvements and management initiatives, which resulted in a net increase in expected total obligations of $20 million. A decrease of $138 million in anticipated resources from collections and reimbursements, combined with the increase in expected total obligations, resulted in the net increase of $158 million in the President’s request. (See table 2.) The following summarizes the changes in VA’s estimates resulting in the net change of $158 million:

Increase in the estimated funding needed for initiatives. According to VA officials, as a result of the reduced estimate for ongoing health care services and the estimated savings from management initiatives and operational improvements, VA increased the estimate of funding needed for its initiatives to end homelessness among veterans, create new models of patient-centered care, and improve veteran mental health, among others. This estimate reflected funding needed for initiatives for which funding was not requested in the fiscal year 2014 advance appropriations request.

Decrease in the estimate for ongoing health care services. VA used updated assumptions and data in the EHCPM, which lowered its estimate for ongoing health care services. For example, VA updated its assumption for civilian employees’ pay in fiscal years 2013 and 2014 to account for the pay freeze, which reduced the projected base salary of VA employees for these fiscal years and into the future. VA also used updated data from the most recently completed fiscal year to help ensure that its estimates better reflect current experience.

Increase in estimates of proposed savings from new acquisition savings and other initiatives. VA identified $482 million in estimated savings as a result of new initiatives, such as capping travel for VA employees at 2013 budgeted levels, and other operational improvements. These savings further reduced expected total obligations compared to the earlier advance appropriations request for fiscal year 2014.

Reduction in estimate for collections and reimbursements. The reduction in collections and reimbursements primarily reflected a decrease in the amount VA anticipated receiving from reimbursements, which include fees for services provided under service agreements with DOD. According to VA officials, the change to VA’s estimates for reimbursements was based on the use of more current data and the fact that VA no longer assumes it will be able to achieve reimbursement “recoveries” from prior fiscal years.

Steps Taken to Improve the Transparency, Consistency, and Reliability of Information and Estimates in VA’s Budget Justification

VA took steps to address, in varying degrees, five of the six problems we previously identified related to the reliability, transparency, and consistency of information in VA’s congressional budget justification. (See table 3.) Specifically, VA took steps to address five of our six prior recommendations to improve estimates and information supporting the President’s fiscal year 2014 budget request: Transparency for VA’s estimates for initiatives in support of the advance appropriations request. VA improved the transparency of its estimate for initiatives in support of the advance appropriations request by including a statement in the agency’s budget justification that indicated the estimate for initiatives did not reflect all the funding that may be required if the initiatives are to be continued.
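The $158 million net change described above reconciles arithmetically from the individual estimate changes. The following is a minimal check of that arithmetic (figures in millions of dollars, taken from the text; the variable names are illustrative, not VA budget categories):

```python
# Changes to the fiscal year 2014 estimate versus the earlier advance
# appropriations request, in millions of dollars (figures from the text).
initiatives_increase = 1021    # estimated funding needed for initiatives rose
ongoing_care_decrease = 519    # estimate for ongoing health care services fell
proposed_savings = 482         # operational improvements and management initiatives

# Net change in expected total obligations.
obligations_change = initiatives_increase - ongoing_care_decrease - proposed_savings
assert obligations_change == 20

# A $138 million drop in anticipated collections and reimbursements must be
# made up through appropriations, so it adds to the requested amount.
collections_decrease = 138
request_change = obligations_change + collections_decrease
assert request_change == 158
print(f"Net increase in the President's request: ${request_change} million")
```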
In June 2012, we reported that VA did not make it clear that part of the increase in its fiscal year 2013 initiatives estimate occurred because VA’s earlier estimate in support of the advance appropriations request did not include funding for all the initiatives the agency intended to continue. We recommended that VA improve the transparency of its estimate for initiatives in support of the advance appropriations request by stating whether these estimates reflect all the funding that may be required if all initiatives are to be continued. Even though the agency did not concur with our recommendation, VA in its fiscal year 2014 budget justification stated that the final estimate for initiatives would be determined during the fiscal year 2015 budget process when updated data on initiatives were available. By clearly stating that its estimates for initiatives in support of the advance appropriations request will be addressed in the subsequent year’s budget process, VA provided new information relevant to understanding the estimates. Consistency of language used to label health care services. VA improved the consistency of the language used to label health care services throughout its budget justification. In a September 2012 report, we found that VA used inconsistent labels when referring to the same health care services at different places in its fiscal year 2013 budget justification. For example, VA referred to mental health services as “psychiatric care” in the detailed presentation of the Medical Support and Compliance and Medical Facilities accounts and referred to the same services as “mental health” in the detailed presentation for the Medical Services account. We recommended that VA label health care services consistently so it would be clear which services were being referred to across appropriations accounts. VA concurred with our recommendation.
In its fiscal year 2014 budget justification, VA used the same label for mental health, inpatient, and other services across all three appropriations accounts, which it had not previously done. In doing so, VA improved both the clarity and usefulness of the information included in its budget justification. Reliability of estimates for non-NRM facility-related activities. VA improved the reliability of its estimates for non-NRM facility-related activities. In a February 2013 report, we found that lower than estimated spending for non-NRM facility-related activities, such as utilities and janitorial services, allowed VA to spend significantly more on NRM than it originally estimated in recent years. We recommended that VA determine why it has overestimated spending for non-NRM and use the results to improve future non-NRM budget estimates. VA concurred with this recommendation. According to VA officials, the agency updated assumptions it used to predict growth for non-NRM facility-related activities in order to better reflect VA’s experience during the last 3 to 5 fiscal years. For example, in prior years, VA estimated spending of between $700 million and $900 million on “Administrative Contract Services,” which was at least $360 million more than VA’s actual spending for this category. VA’s fiscal year 2014 estimate of $395 million for “Administrative Contract Services” appears to be a more reliable estimate of spending based on recent experience. By improving the reliability of information presented in its congressional budget justification regarding non-NRM facility-related activities, as we previously recommended, VA improved the usefulness of such information. Reliability of estimates for NRM. According to VA officials, the agency also has taken steps to improve the reliability of its fiscal year 2014 NRM estimate.
In June 2012, we reported that VA’s NRM spending has historically exceeded NRM estimates because these estimates have not consistently accounted for additional NRM spending by VA medical facilities. According to officials, VA’s estimate for NRM was based on a policy decision. We recommended that VA improve the reliability of its estimates for NRM by accounting for resources that VA medical facilities have consistently spent for this purpose. VA concurred with our recommendation. According to VA officials, the agency revised its method for estimating NRM and reduced its overall estimate of spending for the Medical Facilities account, which includes NRM. Specifically, VA officials indicated that the agency revised its method for estimating NRM to better account for expected spending. The resulting NRM estimate, combined with the previously discussed reduction in its non-NRM estimate, resulted in a decrease in estimated spending for VA’s Medical Facilities account. In prior years, additional NRM spending was the result of VA medical facilities using funds from the Medical Facilities account on NRM that were originally expected to be spent on other activities— such as utilities, grounds maintenance, and janitorial services. Reductions in the overall amount available from the Medical Facilities account would reduce the amount available for additional spending for NRM, so a decrease in VA’s overall estimate for its Medical Facilities account could potentially reduce the availability of additional resources for NRM beyond its fiscal year 2014 estimate. Reliability of estimates for proposed savings. VA has taken steps to address some, but not all, of our prior concerns regarding the reliability of its estimates for proposed savings, which included savings from operational improvements and management initiatives. 
In a February 2012 report, we determined that some of the estimates for operational improvements included in VA’s fiscal year 2012 budget justification may not have been reliable estimates of future savings. We concluded that without a sound methodology VA ran the risk of falling short of its estimated savings, which may ultimately require VA to make difficult trade-offs to provide health care services with the available resources. We recommended that VA develop a sound methodology for estimating proposed savings from its operational improvements. VA concurred with the recommendation and officials told us during our prior review that the agency was working to address deficiencies in its methodology for estimating these savings. The information that we reviewed on VA’s methodology for estimating proposed savings for fiscal year 2014 to date confirmed that VA has taken some steps to address our prior concerns. For example, VA provided a basis for the assumptions used to calculate some of its proposed savings from acquisitions and employee travel. However, the information did not indicate that VA had fully implemented our recommendation for all operational improvements and management initiatives included in the estimates for proposed savings. In regard to the sixth problem we identified, VA did not address a lack of transparency we previously found regarding its estimates for initiatives and ongoing health care services. In June 2012, we reported that VA did not disclose that it used a new reporting approach that combined both funding for initiatives and funding for certain ongoing health care services in its initiatives estimate. We recommended that VA improve the transparency of its estimates for initiatives and ongoing health care services by stating whether the estimates for initiatives included funding for ongoing health care services. VA concurred with our recommendation. 
According to officials, VA used the same reporting approach for initiatives in its fiscal year 2014 budget justification as the agency used in its fiscal year 2013 budget justification. However, we found no statement in the budget justification indicating this or, more specifically, whether the estimates for initiatives included funding for ongoing health care services. By not stating in its budget justification whether the estimates for initiatives included funding for ongoing health care services, VA was not transparent about the total amount of funding the agency may need in fiscal year 2014 for ongoing health care services that would require funding regardless of whether funding for certain initiatives continued.

Conclusions

In its congressional budget justification, VA provides Congress and other users with information on the agency’s health care budget estimate and other information that supports the policies and spending decisions represented in the President’s budget request. In response to our prior work, VA has taken steps to improve the consistency, reliability, and transparency of its estimates and information supporting the President’s budget request for VA health care. In particular, VA has taken steps to improve (1) the transparency of its estimates for initiatives in support of the advance appropriations request, (2) the consistency of its language used to label health care services, (3) the reliability of the estimates for other facility-related activities funded through the Medical Facilities account, (4) the reliability of its estimates for NRM, and (5) the reliability of proposed savings from operational improvements and management initiatives. However, VA did not indicate whether the estimates it reports for initiatives included funding needed for ongoing health care services, as we previously recommended.
While VA has addressed to varying degrees the problems we previously identified, it is important that VA ensure that the recommendations from our prior work regarding the information and estimates in VA’s budget justification are fully implemented. Until these recommendations are fully implemented, the problems we previously identified will continue to limit the usefulness of related information to Congress and other users of VA’s budget justification. In addition, our work shows that VA made key changes to its budget methodology—namely, VA used the EHCPM, in part, to develop estimates for most long-term care services. VA also changed how the agency reported its estimates for administrative costs, although VA did not do so consistently and comprehensively throughout its fiscal year 2014 budget justification. VA introduced a new budget category label “Administrative Personnel” for reporting its total estimate for administrative personnel costs, but used the old “Administration” label when reporting estimates for the same costs in each of VA’s three health care appropriations accounts. Additionally, VA defined some of the costs included in the “Administrative Personnel” label, but did not do so for “Administration” or “Administrative Contract Services” in its budget justification. This lack of transparency as well as the inconsistent labeling of administrative personnel costs results in unclear and incomplete information that limits its usefulness to Congress and other users of VA’s budget justification. 
Recommendations for Executive Action

To improve the clarity and transparency of information in VA’s congressional budget justifications that support the President’s budget request for VA health care, we recommend the Secretary of Veterans Affairs take the following two actions: use consistent terminology to label estimates of administrative personnel costs and provide consistent and comprehensive information explaining the costs included in each budget category for administrative costs.

Agency Comments

We provided a draft of this report to VA and OMB for comment. In its written comments—reproduced in appendix I—VA generally agreed with our conclusions and concurred with our recommendations. In concurring with our first recommendation regarding terminology to label estimates of administrative personnel costs, VA stated that it will incorporate consistent terminology to label estimates for administrative and personnel costs in the fiscal year 2015 President’s budget request. In concurring with our second recommendation regarding information explaining the costs included in administrative costs, VA stated that it will provide consistent and comprehensive information explaining the costs included in each budget category for administrative costs in the fiscal year 2015 President’s budget request. OMB had no comments. We are sending copies of this report to the Secretary of Veterans Affairs and the Director of the Office of Management and Budget, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website http://www.gao.gov. If you or your staff have any questions about this report, please contact Randall B. Williamson at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Veterans Affairs

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, James C. Musselwhite and Melissa Wolf, Assistant Directors; Kye Briesath, Krister Friday, Aaron Holling, Felicia Lopez, Lisa Motley, and Brienne Tierney made key contributions to this report.

Related GAO Products

Veterans’ Health Care: Improvements Needed to Ensure That Budget Estimates Are Reliable and That Spending for Facility Maintenance Is Consistent with Priorities. GAO-13-220. Washington, D.C.: February 22, 2013.

Veterans’ Health Care Budget: Better Labeling of Services and More Detailed Information Could Improve the Congressional Budget Justification. GAO-12-908. Washington, D.C.: September 18, 2012.

Veterans’ Health Care Budget: Transparency and Reliability of Some Estimates Supporting President’s Request Could Be Improved. GAO-12-689. Washington, D.C.: June 11, 2012.

VA Health Care: Estimates of Available Budget Resources Compared with Actual Amounts. GAO-12-383R. Washington, D.C.: March 30, 2012.

VA Health Care: Methodology for Estimating and Process for Tracking Savings Need Improvement. GAO-12-305. Washington, D.C.: February 27, 2012.

Veterans Affairs: Issues Related to Real Property Realignment and Future Health Care Costs. GAO-11-877T. Washington, D.C.: July 27, 2011.

Veterans’ Health Care Budget Estimate: Changes Were Made in Developing the President’s Budget Request for Fiscal Years 2012 and 2013. GAO-11-622. Washington, D.C.: June 14, 2011.

Veterans’ Health Care: VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Inform the President’s Budget Request. GAO-11-205. Washington, D.C.: January 31, 2011.

VA Health Care: Spending for and Provision of Prosthetic Items. GAO-10-935. Washington, D.C.: September 30, 2010.

VA Health Care: Reporting of Spending and Workload for Mental Health Services Could Be Improved. GAO-10-570. Washington, D.C.: May 28, 2010.

Continuing Resolutions: Uncertainty Limited Management Options and Increased Workload in Selected Agencies. GAO-09-879. Washington, D.C.: September 24, 2009.

VA Health Care: Challenges in Budget Formulation and Issues Surrounding the Proposal for Advance Appropriations. GAO-09-664T. Washington, D.C.: April 29, 2009.

VA Health Care: Challenges in Budget Formulation and Execution. GAO-09-459T. Washington, D.C.: March 12, 2009.

VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement. GAO-09-145. Washington, D.C.: January 23, 2009.
The Veterans Health Care Budget Reform and Transparency Act of 2009 requires GAO to report on the President’s annual budget request to Congress for VA health care services. GAO’s previous work has focused on issues related to the consistency, transparency, and reliability of information in VA’s congressional budget justifications. Building on GAO’s past work and in light of the President’s most recent request for VA health care, this report examines (1) changes in how VA used the EHCPM to develop VA’s budget estimate supporting the President’s budget request for fiscal year 2014 and changes in how VA reported information related to this estimate in its budget justification; (2) key changes to the President’s fiscal year 2014 budget request compared to the advance appropriations request for the same year; and (3) the extent to which VA has addressed problems previously identified by GAO related to information in VA’s congressional budget justifications. GAO reviewed the President’s fiscal year 2014 budget request, VA’s fiscal year 2014 budget justification, and VA data. GAO interviewed VA officials and staff from the Office of Management and Budget. The Department of Veterans Affairs (VA) expanded the use of the Enrollee Health Care Projection Model (EHCPM) in developing the agency’s health care budget estimate that supported the President’s fiscal year 2014 budget request. VA expanded the use of the EHCPM by using, for the first time, the model’s estimate for the amount of care provided—workload—to develop estimates of the resources needed for 14 long-term care services. However, VA continued to use the most current expenditure data rather than EHCPM estimates for projecting needed resources for these services due to concerns about the reliability of the EHCPM expenditure data. 
With this new blended approach, VA used the EHCPM, in whole or in part, to develop estimates for 74 health care services that accounted for more than 85 percent of VA’s health care budget estimate. Additionally, VA used a new budget category label for its estimate of certain administrative personnel costs, “Administrative Personnel,” and identified the types of positions this estimate included. However, VA did not consistently use the new label across its three health care appropriations accounts. Instead, VA used “Administration” and provided no information clarifying the costs included in the estimates. Further, VA did not disclose all the costs included under “Administrative Personnel,” nor did VA identify the costs included in one other category containing administrative costs, “Administrative Contract Services.” The lack of transparency regarding administrative costs and inconsistent labeling resulted in Congress and other users of VA’s budget justification not having clear and complete information regarding the agency’s estimates for such costs. The President’s fiscal year 2014 budget request for VA health care services was about $158 million more than the earlier, advance appropriations request for the same year. The estimate for initiatives increased by $1.021 billion and the estimate for ongoing health care services decreased by $519 million. The increase in the initiatives estimate was further offset by $482 million in estimated savings from new acquisition savings and other initiatives, which resulted in a net increase of $20 million. This increase, along with a decrease of $138 million in anticipated resources from collections and reimbursements, resulted in the net increase of $158 million in the President’s fiscal year 2014 request. VA has taken steps to address, in varying degrees, five of the six problems GAO previously identified related to information in VA’s budget justification.
Specifically, VA has taken steps to improve (1) the transparency of its estimates for initiatives in support of the advance appropriations request, (2) the consistency of the language used to label health care services across its three health care appropriations accounts, (3) the reliability of its estimates for certain facility-related activities, (4) the reliability of its estimate for facility maintenance and improvement, and (5) the reliability of its estimates for proposed savings. However, VA did not address (6) the transparency of its estimates for initiatives and ongoing health care services. While VA improved aspects of the information in its fiscal year 2014 budget justification, it is important that VA ensure that the six recommendations from GAO’s prior work regarding such information are fully implemented. Until these recommendations are fully implemented, the problems GAO previously identified will continue to limit for Congress and others the usefulness of information related to the estimates that support the President’s budget request for VA health care.
Background

The primary mission of the SEC is to protect investors and maintain the integrity of the securities market. The Securities Act of 1933 requires that, prior to the offering or sale of securities, the issuer register the securities offering with the SEC by filing a registration statement. The registration statement must contain financial and other material information concerning the securities and the issuer. Following the securities’ registration, the Securities Exchange Act of 1934 requires that the issuer make periodic filings disclosing its financial status and changes in condition. For example, issuers must file annual reports containing financial statements, which must be prepared in conformity with GAAP and audited by independent public accountants. During fiscal year 2000, the SEC received over 14,000 registrants’ filings. The SEC reviews selected issuers’ filings to ensure compliance with accounting and disclosure requirements. The SEC has enforcement authority under federal securities laws to take legal action against companies that do not comply with those laws. The SEC’s critical role in protecting investors’ interests has become even more challenging with the significant changes in the global economy and capital markets over the past few years. The current business environment is characterized by a globalized, highly competitive economy; explosive growth in the development and use of technology; expansion in the number of public companies; and unprecedented growth and, in some cases, subsequent decline in the market value of those companies’ securities. Furthermore, growth in equity values has placed tremendous pressure on public companies’ management to reach earnings or other performance targets and to meet or exceed the earnings expectations of security analysts and investors.
Missing these targets may cause a significant decline in a security’s market value and reduce management’s compensation in cases where it is tied to achieving target earnings and/or stock market prices. Several major instances of misstated earnings have resulted in massive declines in the values of the affected companies. Recently, the SEC has become increasingly concerned with the inappropriate use of GAAP and the resulting effect on reported earnings and, in some cases, has required companies to restate their earnings. The SEC’s DCF oversees the disclosure to the investing public of information required by federal securities laws. DCF’s staff routinely reviews the disclosure documents filed by public companies with the SEC and consults with OCA to resolve issues arising from the review of registrants’ filings. OCA is the SEC’s principal advisor on accounting and auditing matters. OCA also reviews registrants’ specific accounting treatment of complex issues as a result of prefiling inquiries from the registrants themselves. OCA encourages registrants to consult on those financial reporting and auditing issues that involve unusual, complex, or innovative transactions for which no clear authoritative guidance exists. In that capacity, OCA undertakes rulemaking and interpretation initiatives that supplement private sector accounting standards and provide implementation guidance for financial disclosure requirements. OCA provides general interpretive and accounting advice through interpretive releases and letters, staff accounting bulletins, responses to telephone inquiries, speeches, and active participation with the standard-setting bodies. Under the Securities Exchange Act of 1934, the SEC has specific authority to establish accounting and reporting standards as part of its mandate to administer and enforce the provisions of the federal securities laws.
Soon after its creation, the SEC decided to rely on accounting standards established in the private sector as long as such standards had substantial authoritative support. Since 1973, the Financial Accounting Standards Board (FASB) has been the designated organization in the private sector that establishes standards for financial accounting and reporting. The SEC officially recognizes GAAP standards established by FASB as authoritative. As such, the SEC requires compliance with GAAP in the presentation of financial statements. FASB’s deliberations are open to the public, and its standards are subject to public exposure and comment prior to issuance. The SEC is involved in establishing accounting standards through its oversight of, and close working relationship with, FASB and other professional standard-setting bodies. The SEC is also involved in establishing accounting standards through the adoption of rules and publication of interpretive guidance. Rules and interpretive releases, such as those in the SEC’s Codification of Financial Reporting Policies and Regulation S-X, have authority similar to FASB pronouncements for SEC registrants. The SEC staff issues Staff Accounting Bulletins that represent interpretive guidance and practices followed by DCF and OCA in administering the disclosure requirements of the SEC. The SEC has relied on generally accepted auditing standards (GAAS) promulgated by the AICPA’s Auditing Standards Board (ASB) as the standard for independent audits. ASB’s deliberations are open to the public, and its standards are subject to public exposure and comment prior to issuance. The SEC monitors the structure, activity, and decisions of not only FASB, but also FASB’s Emerging Issues Task Force (EITF). EITF was formed in 1984 to provide timely financial reporting guidance on emerging issues before divergent practices became widespread and entrenched.
Task force members are drawn primarily from public accounting firms but also include representatives of industry. The Chief Accountant of the SEC or his designee attends EITF meetings regularly as an observer and participates in the discussions, but does not have a vote. If the group reaches a consensus on an issue, generally FASB takes this as an indication that no further board action is needed. If no EITF consensus is possible, it may be an indication that action by FASB is necessary. EITF proceedings are documented in EITF Abstracts. The SEC staff and FASB and EITF members work together in an ongoing effort to improve the standard-setting process and to respond to various regulatory, legal, and business changes promptly and appropriately. In carrying out its responsibilities, OCA works with the private sector accounting profession, including the AICPA SEC Practice Section and the AICPA SEC Regulations Committee. The AICPA SEC Practice Section is part of the profession’s self-regulatory system, with a goal of protecting the public interest by improving the quality of CPA firms’ practice before the SEC. The AICPA SEC Practice Section establishes requirements for member firms and has a program to monitor those requirements. Member requirements include adhering to quality control standards and submitting to a peer review of each firm’s accounting and auditing practice every 3 years. The AICPA SEC Regulations Committee is part of the AICPA SEC Practice Section that acts as the primary liaison between the profession and the SEC on technical matters relating to SEC rules and regulations. The AICPA SEC Regulations Committee provides input to the SEC on accounting and auditing matters and communicates important SEC developments to its AICPA members. The AICPA SEC Regulations Committee includes accounting firms that belong to the AICPA SEC Practice Section as well as members from academia and industry. 
Scope and Methodology To fulfill our objectives, we interviewed officials and professional staff members from the SEC’s OCA. We reviewed relevant policies and procedures, including the Protocol for Registrant Submissions to OCA (effective December 1999) and OCA’s Policies for Handling Registrants Matters (dated August 2000). We focused on the procedures and controls employed by the SEC for resolving registrants’ prefiling accounting issues and issues on filings in which DCF consults with OCA. To gain an understanding of OCA’s procedures and the controls employed by the SEC throughout the process, we reviewed OCA’s case files of written submissions from registrants and their auditors. Although we reviewed cases to gain an understanding of the SEC’s process and the related issues, we did not perform testing to evaluate whether the SEC properly implemented its procedures throughout its caseload, nor did we evaluate the SEC’s final accounting positions on the cases that we reviewed. We interviewed representatives from the AICPA’s SEC Practice Section and SEC Regulations Committee, FASB’s EITF, and several CPA firms. We also interviewed representatives from Financial Executives International (FEI) and its Committee on Corporate Reporting. FEI is a professional association of senior financial executives, with many members from SEC registrant companies, which communicates its members’ views on emerging issues to standard-setting bodies and legislators. We also interviewed representatives from SEC registrant companies to obtain their views on the SEC’s process for handling accounting issues. We conducted our work from December 2000 through May 2001, in accordance with generally accepted government auditing standards. We requested comments from the SEC, the AICPA SEC Practice Section, the AICPA SEC Regulations Committee, and FEI. We received written comments and technical comments from the SEC and the AICPA. 
FEI advised us that it did not have official comments on this report. The SEC’s and the AICPA’s written comments are discussed in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendixes I and II. We incorporated the technical comments provided by the SEC and the AICPA throughout this report as appropriate. Accounting-Related Inquiries to OCA OCA receives both prefiling and active filing accounting issues for review through oral inquiries and written submissions from registrants and their auditors and from DCF. Oral inquiries received by OCA involve broad issues that are often not registrant specific. Registrants or their auditors can call OCA to ask prefiling accounting questions. Oral inquiries and OCA’s responses are considered informal and therefore not binding for a subsequent filing. Oral inquiries are sometimes made on a “no-name” basis, whereby the registrants or their auditors telephone OCA to ask questions without giving their names. However, OCA encourages registrants or their auditors to put accounting inquiries in writing to ensure a clear understanding of the facts, especially those involving complex, unusual, or innovative transactions for which no clear authoritative guidance exists. Prefiling written submissions from the registrants and their auditors are registrant specific, and OCA considers its position to be binding for purposes of deciding whether a registrant has complied with the SEC’s accounting and disclosure requirements. OCA also receives inquiries through consultations with DCF on issues related to filings from registrants. Some of DCF’s inquiries are oral; others are considered written inquiries when the issues are substantive and involve extensive review by OCA.
Inquiries from DCF include questions relating to (1) accounting issues and auditing matters that involve basic policies of the SEC, (2) an auditor’s independence or qualifications, or (3) new, unusual, or controversial accounting issues relating to a registrant’s financial statement presentation. OCA receives various types of accounting-related inquiries through the processes described above. OCA tracks written submissions but does not track oral inquiries. OCA provided the following caseload information on written submissions and DCF consultations for calendar year 2000: OCA received 113 new submissions during calendar year 2000 and carried over 21 submissions from 1999, for a total caseload of 134 written submissions for calendar year 2000. Of the 134 cases, OCA reported that it closed 116 cases, leaving 18 cases that were carried over to calendar year 2001. Approximately 38 percent of the 113 written submissions received by OCA came from DCF; the registrants and their auditors submitted the remaining 62 percent of the cases. According to OCA, the written submissions it receives involve issues that are complex and require significant judgment. Examples of the types of accounting issues frequently reviewed include business combination issues, such as the application of the pooling versus purchase methods of accounting, and complex issues surrounding revenue recognition and financial instruments. OCA’s position on these accounting issues can have a significant impact on a company’s reported earnings and financial condition and a correspondingly large impact on the stock value of a company.
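The caseload figures above can be cross-checked with simple arithmetic. The following is an illustrative sketch only, using the numbers reported in the text; the percentage split is rounded, as the text itself says "approximately 38 percent."

```python
# Cross-check of OCA's calendar year 2000 written-submission caseload,
# using the figures reported in the text (illustrative only).
new_submissions_2000 = 113       # submissions received during calendar year 2000
carried_over_from_1999 = 21
total_caseload = new_submissions_2000 + carried_over_from_1999
closed_in_2000 = 116
carried_to_2001 = total_caseload - closed_in_2000

from_dcf = round(0.38 * new_submissions_2000)       # "approximately 38 percent" from DCF
from_registrants = new_submissions_2000 - from_dcf  # remaining 62 percent from registrants

print(total_caseload)              # 134
print(carried_to_2001)             # 18
print(from_dcf, from_registrants)  # 43 70
```

The carryover figure (134 total minus 116 closed equals 18) matches the number reported as carried into calendar year 2001.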
The following represents the breakdown by type of the 113 submissions received by OCA during calendar year 2000: business combinations (29), revenue recognition (25), financial instruments (19), capital accounts (11), consolidations and equity method (9), stock compensation (4), auditor’s independence (2), deferred income taxes (3), foreign reporting issues (2), financial statement presentation (3), and asset impairment, accounting changes, leasing, earnings per share, contingencies, and interest capitalization (1 each). OCA’s Procedures and Controls for Reviewing Accounting Matters Because of concerns regarding the communications between auditors and the SEC, the AICPA issued a “best practices” guide in 1996 for member firms’ communications with SEC staff in order to promote effective, efficient communications among SEC staff, registrants, and their auditors. The AICPA provided this document to the SEC, which in turn issued its Protocols for Registrant Submissions to the Office of Chief Accountant in December 1999. The SEC protocols are available to the public on the SEC’s Web site and the AICPA’s Web site; they set out the formal procedures for registrants’ inquiries to the SEC on accounting matters. The protocols cover the following: oral and no-name inquiries, written submissions from registrants on prefiling accounting issues, meetings with SEC staff, and correspondence with SEC staff regarding registrants’ understanding of the staff’s position on an accounting issue. The SEC’s protocols cover the process for submitting accounting issues to OCA for review, conducting meetings with the SEC, and closing out issues with the SEC. The protocols do not include information about the internal process that SEC uses for its review and decision-making on registrant accounting matters. The protocols also do not provide information on the SEC’s procedures for dealing with issues on filings in which DCF consults with OCA on accounting issues. 
In December 1999, OCA began to document its internal procedures for its review of registrants’ accounting matters and its procedures for dealing with issues on filings in which DCF consults with OCA on accounting issues. Completed in August 2000, these written internal procedures include key steps and controls in the SEC’s process for dealing with registrants’ accounting issues. These internal procedures have not been made available to registrants or the accounting profession. The following are OCA’s key steps and controls as described in the protocols and the SEC’s current internal procedures. Key Steps From the SEC’s Published Protocols OCA requires registrants to submit standard, comprehensive information for written submissions so that OCA can fully understand the issues. The required information is listed in the protocols and includes the following: a clear description of the accounting, financial reporting, or auditing issue; all facts that may influence a decision as to the proper accounting treatment for the transaction; the accounting treatment proposed by the registrant and the basis for that conclusion, including an analysis of all the relevant accounting literature, as well as all alternatives considered and rejected; and a statement regarding the conclusion of the registrant’s auditor on the proposed accounting treatment. Upon resolution of an issue, the SEC protocols state that the registrant should prepare and send a letter to the SEC describing the registrant’s understanding of the SEC staff’s position. Key Steps From the SEC’s Internal Procedures Document OCA maintains teams of experts specializing in specific accounting issues, and individual issues are referred to the appropriate team of experts.
OCA’s teams of experts are to follow various prescribed steps for resolving each issue, which can include researching the accounting literature, researching the disposition of prior cases, and consulting with internal and external subject matter experts, including representatives from FASB and representatives from the “Big 5” accounting firms. If a majority of the team dealing with an issue disagrees with the registrant’s proposed accounting treatment, the decision is to be discussed with the team leader and a Deputy Chief Accountant and/or the Chief Accountant before communicating with the registrant. All issues that are not clearly answered by the accounting literature or staff precedents, or that are unusual, novel, or controversial, are to be discussed with a Deputy Chief Accountant. A Deputy Chief Accountant discusses issues with the Chief Accountant where deemed appropriate. The Chief Accountant is to be notified if previous SEC staff positions are being reversed or if a registrant would be required to restate its financial statements. In such situations, the SEC staff must first obtain the approval of a Deputy Chief Accountant and then must discuss the case with the Chief Accountant before notifying the registrant of its decision. The Chief Accountant may overturn the decision of the SEC staff if he becomes convinced that doing so is the best course of action. Whenever a team leader discusses the resolution of a matter with a registrant or the registrant’s auditor, at least one other team member is required to be present. Once a decision is reached, the SEC is to document the decision in a memorandum for its files. The memorandum is to include relevant background information and facts of the case, the question raised, and alternate accounting treatment(s) that were considered but not accepted. A registrant may appeal the staff’s position to a Deputy Chief Accountant, the Chief Accountant, or the Commission.
Views of Registrants, Their Auditors, and the SEC The representatives from SEC registrants and the accounting profession with whom we spoke said they had a range of experiences, both positive and negative, with the SEC’s handling of accounting issues. Some of the representatives expressed common concerns regarding the SEC’s process for deciding on accounting issues. Specifically, we were told that the SEC’s process for handling accounting issues and the basis for the SEC’s positions are not always apparent to the registrants and their auditors, and representatives cited the need for additional transparency in the SEC’s internal processes. In addition, the representatives we spoke with expressed concern about the difficulty of tracking the variety of sources used by the SEC in determining acceptable accounting and financial reporting. According to these representatives, many of the sources used by the SEC are in addition to, and outside of, the private sector standard-setting process. Transparency of OCA Policies and Basis for Decisions Representatives of SEC registrants and the accounting profession we spoke with did not identify any specific problems with the SEC protocols issued in 1999. The protocols deal with the process to be followed when submitting accounting issues or questions to the SEC. However, the representatives expressed concern that OCA’s process for handling the issues is not clear, and registrants and their auditors are often unsure of how, and on what basis, the SEC reached its decisions. Representatives suggested that additional transparency regarding the SEC’s process would help them to understand how issues are being handled and resolved by OCA.
Representatives of registrants and the accounting profession expressed a need for additional information regarding the following: general status information, including time estimates for resolving issues and status of the review; how accounting issues are assigned to SEC staff members; how OCA consults with standard-setting bodies and other large accounting firms, including how OCA ensures that information presented in these consultations is unbiased and how the results of consultations are used in resolving issues; the SEC’s approval processes for determining whether registrant restatements are necessary; how OCA coordinates with DCF, including how OCA and DCF minimize duplication of information requested from the registrants and auditors; and OCA’s final position on accounting issues. The representatives we spoke with also stated that registrants and auditors who have only occasional dealings with OCA would especially benefit from additional transparency regarding SEC’s procedures for deciding on registrant-specific accounting issues. The representatives said that large corporations and their auditors who are involved in frequent SEC registration filings have over time established effective working relationships with SEC staff and can often obtain information on SEC procedures through their frequent dealings with the SEC. The representatives also said that other registrants and auditors who have not developed ongoing working relationships with the SEC have greater difficulties working through the process, and additional information about OCA’s process would be beneficial. Also, many registrants said they believe that because they do not understand OCA’s process, they must rely too heavily on their external auditors, even though the application of accounting methods is ultimately the registrants’ responsibility. 
Registrants perceive that only the major accounting firms are familiar with OCA’s process and that it is almost mandatory to have these firms lead the effort on the registrants’ behalf. While registrants want to consult with and have the support of external auditors, registrants said that if they better understood OCA’s processes, they might be able to take the lead in the process without having to rely so heavily on their external auditors. In addition, many of the representatives from the AICPA, who have considerable experience in dealing with OCA, expressed uncertainty about OCA’s process and said they saw a need for additional information. Representatives said that they are reluctant to appeal OCA staff decisions to the Chief Accountant, Deputy Chief Accountant, or the Commission for three reasons. First, registrants have the impression that the SEC staff’s supervisors have reviewed the matters prior to communicating with registrants and their auditors and support the staff’s positions. Second, registrants have the perception that, in the appeal process, the SEC may open other accounting issues. Finally, the appeal process adds to the registrants’ time and cost. Representatives estimated that it can cost from $25,000 to $100,000 in legal and accounting fees to bring an issue to OCA, and appeals would add to this cost. Representatives told us that only a few decisions have been appealed and that the SEC’s initial decisions were not changed through the appeal process. The SEC’s Response OCA representatives referred to OCA’s written internal procedures in response to the concerns of registrants and their auditors. OCA officials stated that they would consider making public some information regarding their internal procedures for handling registrants’ matters, as well as explanations of the key steps and communications that should occur between the SEC and the registrants throughout the process.
At the same time, SEC officials also stated that certain information relating to the staff’s internal policies requested by the registrants and their auditors would not be provided. OCA officials provided the following responses to the specific issues raised by the representatives of the registrants and the accounting profession whom we spoke with in preparing this report. OCA officials stated it would be difficult to provide registrants with an estimate of how long it will take to review and rule on issues because this process is not completely within the SEC’s control. The SEC staff often will request additional information from the registrant after receiving an initial written submission. However, registrants do not always respond to these requests for additional information promptly or the registrants’ circumstances may change, thereby changing the scope of the issues. Because the SEC’s process depends to a certain extent on the nature and timing of responses from the registrants, SEC officials stated that they would be unable to provide definitive time estimates for handling written submissions. The SEC did, however, state that often a sense of urgency or a specific deadline exists with regard to resolving an accounting issue, due to a pending transaction. In those cases, SEC officials said the SEC staff and the registrants work very closely and interactively to resolve the issue based on the timing needs of the registrants. Accounting issues are assigned to OCA professional staff members who work in teams in specialized areas. Under OCA procedures, assigned OCA staff generally calls the registrant within 3 days of receiving the issue with follow-up questions or to schedule a conference call involving the registrant and its auditors. Through this communication, the registrant also becomes aware of the specific SEC staff members assigned to its case, and OCA is able to determine whether the registrant has certain timing needs for resolving the issue. 
Also, the SEC provides a list of staff names by specialized work area at the annual conference sponsored by the AICPA SEC Regulations Committee. In researching a specific accounting issue, OCA staff members sometimes consult with standard-setting bodies and other accounting firms. OCA staff members may prepare a “white paper” detailing the facts of the case. The paper generally summarizes the issue and basic facts that are specific to the registrant and poses the one key accounting question relevant to the case. The paper does not identify the registrant. An OCA official acknowledged that registrants and their auditors might be concerned that staff members could present the facts to the standard-setting bodies in a biased way. However, OCA representatives emphasized that it is the responsibility of the Chief Accountant and a Deputy Chief Accountant to ensure that the issues and facts are fairly presented and that OCA does not advocate a certain position. The SEC’s internal procedures require that the Chief Accountant be notified if a registrant will be required to restate its financial statements. In such situations, the SEC staff must first obtain the approval of a Deputy Chief Accountant and then must discuss the case with the Chief Accountant before notifying the registrant of its decision. The Chief Accountant may overturn the decision of the SEC staff if he becomes convinced that doing so is the best course of action. In reviewing registrants’ filings, DCF sometimes requests assistance or consultation services from OCA to resolve difficult accounting issues. Some of DCF’s inquiries of OCA are oral and, if the questions are easily resolved, do not involve further interaction with the registrant. According to SEC officials, in cases in which additional information is needed from the registrant, both the DCF staff member reviewing the filing and an OCA staff member are present when the registrant is called for additional information.
This internal procedure helps to ensure continuity and prevents or minimizes any duplication of information requests between DCF and OCA. Depending on the issues, OCA staff may also have further follow-up questions on previously submitted information from the registrant if it was unclear. After OCA staff members complete their review, OCA provides an oral response to the registrant along with an explanation of the basis for OCA’s position and then documents its decision in a memorandum for its files. The SEC asks the registrant to provide a letter documenting the registrant’s understanding of OCA’s position. This procedure is set forth in the SEC’s protocols and is intended to ensure that the registrant clearly understands OCA’s position and the basis for its decisions. However, registrants do not always respond to OCA’s request, especially when they disagree with SEC decisions. Although the SEC does not provide written responses to registrants’ issues, it issues its staff accounting bulletins as a way to communicate broad issues to the registrant community. Sources Used by the SEC to Make Decisions on Accounting Issues Representatives of the registrants and the accounting profession expressed concerns that the SEC is using a variety of sources in addition to the authoritative standards and interpretations issued by the private sector standard-setting bodies as criteria for making decisions on accounting issues. Representatives expressed concern about the variety of SEC interpretive guidance, which they believe is being used by the SEC in its decisions on accounting issues. Many of the representatives we spoke with stated that it is becoming increasingly difficult to keep track of the variety of guidance being issued and used by the SEC, especially for the smaller accounting firms with limited resources. 
The representatives we spoke with cited the following guidance being used by the SEC as criteria: SEC Financial Reporting Releases; SEC Accounting and Auditing Enforcement Releases; SEC Staff Accounting Bulletins; SEC Frequently Asked Questions (FAQ) documents; SEC announcements at EITF meetings (such SEC announcements become part of public record, and some believe that this is setting new rules through the announcement process); DCF Outline of Current Issues and Rulemaking Projects, which contain pending rulemaking, recent rule adoptions, current disclosure issues on mergers and acquisitions, significant no-action and interpretive letters, and accounting issues; speeches by SEC staff members and commissioners; letters to the AICPA, EITF, and others to express SEC staff positions, including interpretations of other SEC formal interpretive guidance; highlights of joint meetings of SEC staff and AICPA SEC Regulations Committee and International Practices Task Force; and comment letters—for example, SEC staff positions are sometimes identified only as comments arise, and the SEC staff position is applied for the first time in a registrant review environment. Registrants told us that rulemaking is coming from various places—the SEC, FASB, and EITF. The registrants want to know what is expected for fair presentation and disclosure so that they can comply. However, they said that the criteria being used by SEC are sometimes unclear, even to their auditors. The members of the accounting profession we spoke with said that they assist their clients in determining what is acceptable reporting under GAAP, but they too are often uncertain as to what the SEC’s position will be in the matter. Consequently, they often bring such issues to the SEC, not for the purpose of inquiring what is acceptable under GAAP, but for the purpose of determining whether their application of the accounting standards will be acceptable to the SEC. 
Representatives of the registrants and the accounting profession expressed concerns that the SEC staff is using sources other than standards and guidance that have been through due process for determining what is acceptable financial reporting. They believe that the SEC staff defines acceptable accounting and reporting requirements through its interpretive guidance, without going through a formal due process under rulemaking. Due process provides a public forum for affected parties to comment on the impact of new standards or rules on particular industries and businesses. Registrants and external auditor representatives expressed concern that this process has resulted in the SEC staff setting GAAP as criteria for determining what is acceptable accounting and financial reporting for purposes of registrants’ filings. As stated in the background section of this report, the SEC has specific authority to establish rules governing the financial reports of public companies and to ensure fair financial reporting. The SEC’s Response OCA officials provided the following responses to the specific issues raised by the representatives of the registrants and the accounting profession whom we spoke with in preparing this report. OCA officials acknowledged that, in its review of accounting and disclosure issues, the SEC staff uses a variety of sources, including SEC Financial Reporting Releases, SEC Accounting and Auditing Enforcement Releases, SEC Regulation S-X, Staff Accounting Bulletins, answers to FAQs, speeches, and letters. As stated in the AICPA’s Statement on Auditing Standards No. 69, The Meaning of “Present Fairly in Conformity with Generally Accepted Accounting Principles” in the Independent Auditor’s Report, the SEC’s rules and interpretive releases have an authority similar to pronouncements of FASB for SEC registrants. SEC rules, communicated through the issuance of SEC Financial Reporting Releases, are approved by the Commission.
Staff Accounting Bulletins, answers to FAQs, speeches, and letters are staff positions that act as interpretations of existing GAAP. Most registrants and their auditors have found them to be useful sources in preparing their filings to the SEC. OCA officials stated that the SEC has made these materials readily available. Commercial publishers, such as Commerce Clearing House, Inc., publish loose-leaf documents covering federal securities laws that contain the Codification of Financial Reporting Policies, Regulation S-X, and the Staff Accounting Bulletins. The SEC’s rules and releases are included in the Code of Federal Regulations by topic index and are published weekly in the SEC Docket. In addition, the Staff Accounting Bulletins, answers to FAQs, speeches, and letters are posted on the SEC’s Web site. The SEC officials also stated that the SEC began posting speeches and letters on its Web site after members of the accounting profession requested that they be published to aid the registrants and their auditors in understanding the SEC’s positions for administering SEC disclosure requirements. With regard to the registrants’ concern that the SEC is using interpretive guidance, such as Staff Accounting Bulletins, to set GAAP without due process, the SEC officials stated that Staff Accounting Bulletins are interpretive guidance and do not represent new GAAP. SEC staff, through speeches, describes new fact patterns appearing in industry and provides guidance for handling these new types of cases under existing GAAP. Also, the SEC published answers to FAQs as a guide to registrants and their auditors in submitting filings to the SEC. The Staff Accounting Bulletins and speeches can be tied back to existing accounting literature and are meant to be communicated to everyone. If an issue is unclear, OCA will send the issue to EITF for resolution. The SEC officials believe that, since the interpretive guidance is not new GAAP, it is not subject to due process.
Current Relations Between the SEC and the Accounting Profession and Registrants “While one would expect occasional tensions, the current relationship between the profession and the SEC seems under unusual stress. The Panel views this situation as counterproductive to continued improvement in financial reporting, which is a shared goal of both the profession and the SEC. The Panel believes that this important relationship must be restored to its historic level of candor, trust, and respect.” Many of the comments we heard from the registrants’ representatives, representatives from the accounting profession, and SEC officials over the course of our work are consistent with the conclusions of the Panel on Audit Effectiveness regarding the stressed relationships between the registrants, their auditors, and the SEC. In fact, representatives from the accounting profession and registrants stated that they believe that tensions between registrants, the accounting profession, and the SEC have been higher during the past few years than during any recent period. An OCA official stated that the relationship between the industry and the SEC has ebbed and flowed throughout the years depending on economic and business events and the related issues with the Commission. He stated that tension should exist between the SEC and the companies it regulates, but it is a “constructive tension,” which has evolved and has made the U.S. markets work well. He also stated that FEI has been conducting a study on the quality of financial reporting. According to this study, there have been a large number of restatements in recent years; some resulted from the SEC’s actions, but most resulted from the actions of registrants and their auditors. He stated that the impact of financial reporting is greater today than ever before.
Conclusions and Recommendations An effective working relationship between the registrants, the accounting profession, and the SEC is important for ensuring that investors are protected and that the integrity of the securities market is maintained. This working relationship would benefit from increased transparency of OCA’s procedures for resolving accounting matters, especially for those registrants and auditors who have infrequent dealings with OCA. Given the common concerns expressed by representatives of registrants and the accounting profession and the SEC’s recognition that additional information would be beneficial, we recommend that the Chairman of the SEC direct the Chief Accountant to implement procedures to improve the availability of information to registrants regarding OCA’s process for deciding on accounting issues. Such procedures would include expanding the protocols or issuing additional public information to explain the SEC’s current policies and procedures for handling registrants’ matters, including general communications to registrants and auditors about the status of the assignment of accounting issues to SEC staff members; how the SEC conducts its consultations with other accounting firms and FASB, and how the results of such consultations are considered in its decisions; the SEC’s approval process for determining when registrant restatements are necessary; coordination between DCF and OCA; and when decisions are considered to be final. We found differences in views between SEC officials and representatives of the registrants and accounting profession regarding the accessibility of the variety of SEC rules and interpretive guidance and the methods of communicating OCA’s positions on accounting issues.
Therefore, we recommend that the Chairman of the SEC direct the Chief Accountant to meet with representatives from the accounting profession and registrants to determine how best to disseminate information on rules and interpretive guidance, and to meet with representatives from the accounting profession and registrants to determine how the SEC could provide additional written information on the reasons for its decisions, especially when they involve complex and unusual accounting issues. Agency Comments and Our Evaluation We requested comments from the SEC, the AICPA, and FEI. We received written comments from the SEC’s Chief Accountant on behalf of the SEC’s OCA and from the Chair of the AICPA SEC Practice Section on behalf of the AICPA’s SEC Practice Section and the AICPA’s SEC Regulations Committee. FEI advised us that it did not have official comments on this report. The SEC’s and the AICPA’s written comments are discussed below and reprinted in appendixes I and II, respectively. We also received technical comments from both the SEC and the AICPA that we incorporated throughout this report as appropriate. SEC Comments The SEC’s Chief Accountant, commenting on behalf of the SEC’s OCA, expressed appreciation for the constructive nature of our recommendations and stated that the SEC is planning actions to implement them. Regarding our recommendation to improve the availability of information to registrants about OCA’s processes for decisions on accounting issues, OCA plans to publish its internal procedures, with minor modifications. In addition, OCA plans to publish an article that will describe how accounting issues typically flow through the SEC’s OCA.
Regarding our recommendations that OCA meet with representatives of the registrants and accounting profession to determine (1) how best to disseminate information on rules and interpretive guidance and (2) how the SEC could provide additional written information on the reasons for its decisions, the SEC agreed that discussions would be helpful and appropriate. OCA anticipates either adding these issues to the periodic meetings with the AICPA’s SEC Regulations Committee and other appropriate committees or convening a special meeting to discuss them. The SEC’s OCA also provided additional details on planned modifications to its internal procedures for decisions on accounting matters and on its outreach programs that inform the public of OCA’s decisions and positions on accounting issues. In its comments, OCA provided information on the size of its staff and the scope of its workload as well. These additional details can be found in OCA’s written comments, which have been reprinted in appendix I.

Comments From the AICPA SEC Practice Section and the AICPA SEC Regulations Committee

In its written comments, the AICPA noted the critical role the SEC plays for individual investors who place their trust in the capital markets. The AICPA also recognized that the SEC staff executes its critical mission under difficult and challenging circumstances, including pressures that result from market timing and limited resources. The AICPA stated that if our recommendations were properly implemented, they could provide an opportunity to promote improved transparency of SEC processes and communications among registrants, the accounting profession, and the SEC. Related to our recommendations, the AICPA provided additional suggestions for specific discussion topics regarding the SEC’s communications with registrants about its procedures and its process.
The AICPA’s suggestions deal with the following areas: the SEC’s “white papers” used in its consultation process; timing of SEC responses; the SEC’s referrals of matters to the standard-setting bodies; the SEC’s approving official for restatements; and codification of SEC staff positions. We believe that discussions between the SEC and the accounting profession on the above issues would be constructive as part of the meetings between the SEC, registrants, and the accounting profession. Additional details can be found in the AICPA’s written comments, which have been reprinted in appendix II. We are also sending copies of this report to the Acting Chairman of the Securities and Exchange Commission, the Director of the American Institute of Certified Public Accountants’ Professional Standards and Services, and the President and Chief Executive Officer of Financial Executives International. If you have any questions, please call me at (202) 512-2600 or Jeanette Franzel, Acting Director, at (202) 512-9471 or contact her via e-mail at [email protected]. Key contributors to this report were Darryl Chang, Charles Ego, Peggy Smith, and Meg Mills.

Appendix I: Comments From the Securities and Exchange Commission

Appendix II: Comments From the American Institute of Certified Public Accountants
This report reviews the Securities and Exchange Commission’s (SEC) resolution of accounting issues submitted by companies that have publicly traded securities or are contemplating issuing them. Companies are required by law to register their securities with SEC by filing a registration statement. This statement must contain financial and other information on the securities and the issuer. SEC’s Office of the Chief Accountant (OCA) is responsible for providing guidance to companies to ensure that they comply with the reporting requirements of the law. Generally, registrants submit issues to OCA for which there is no authoritative guidance. These issues tend to involve unusual, complex, or innovative transactions. Some of the accounting issues frequently reviewed include business mergers and issues surrounding revenue recognition and financial instruments. Representatives of registrants and the accounting profession have had both positive and negative experiences with SEC’s handling of accounting issues. Several representatives expressed concerns over the transparency of SEC’s decision-making process and SEC’s use of accounting sources outside of generally accepted accounting principles.
Background

Historically, governments worldwide—including that of the United States—have provided areas stricken by major disasters with aid in the recovery process. Unlike many other countries, the U.S. government had previously neither asked for nor accepted disaster assistance directly from foreign countries, choosing instead to direct offers of assistance to nongovernmental organizations such as the Red Cross. However, on August 29, 2005, Hurricane Katrina struck the coasts of Louisiana, Mississippi, and Alabama, causing billions of dollars in damage, and 3 days afterward the federal government, through the Department of State (DOS), announced that worldwide assistance would be accepted. As of December 31, 2005, 76 countries and international organizations, such as UNICEF, donated $126 million in cash to the U.S. government; various types of in-kind donations, such as food, clothing, and blankets; and foreign military goods and services, such as the use of ships and diving teams. Seven countries donated both cash and in-kind items. Several federal legislative and executive provisions support preparation for and response to emergency situations. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act) primarily establishes the programs and processes for the federal government to provide major disaster and emergency assistance to states, local governments, tribal nations, individuals, and qualified private nonprofit organizations. FEMA has responsibility for administering the provisions of the Stafford Act. When natural disasters or terrorist attacks occur within the United States, the National Response Plan (NRP), released in December 2004 by DHS, provides federal agencies with a framework to coordinate federal support to states and localities in need.
The NRP works on a tiered request system, according to which requests for assistance flow from localities to states to the federal government when local and state resources are exhausted. However, under the NRP, the federal government—DHS—can in certain cases declare a catastrophic incident and provide assistance without waiting for requests for assistance. According to the NRP, as events occur and shortcomings are identified, FEMA is responsible for updating the plan. In the NRP section that discusses the principal organizational elements, issues that require policy adjudication or that fall outside the Secretary of Homeland Security’s areas of authority—as defined by the Homeland Security Act, the Stafford Act, and other relevant statutes, Executive Orders, and directives—are elevated for resolution through the Homeland Security Council and the National Security Council system. In the case of Hurricane Katrina, the NSC led an interagency working group to decide how the donated funds would be used. The Stafford Act also provides the President or his delegate with the authority to accept and use gifts or donations in furtherance of the purposes of the Stafford Act. FEMA, as the President’s delegate, used this provision to accept international in-kind donations for the Hurricane Katrina recovery efforts. FEMA, through its mission assignments to other agencies, directed the use of the assistance in response efforts. Various other agencies have gift authorities, including DOS and DOD. However, an agency’s gift authority typically restricts the acceptance of gifts or donations to those activities that are within the accepting agency’s mission. For the purposes of Hurricane Katrina, FEMA—through the Stafford Act—accepted donations for the response and recovery efforts.
Pursuant to Homeland Security Presidential Directive 5, the Secretary of Homeland Security is the principal federal official for domestic incident management, and the Secretary of State is charged with the responsibility to coordinate international activities related to the prevention of, preparation for, response to, and recovery from a domestic incident within the United States. Further, the Secretary of State and the Secretary of Homeland Security are to establish appropriate relationships and mechanisms for cooperation and coordination between their two departments. In the case of Hurricane Katrina, DOS, through a Hurricane Katrina Task Force, coordinated the acceptance of foreign monetary and nonmonetary assistance on behalf of the U.S. government and DHS (FEMA), respectively. The Task Force consisted of representatives from DOS and USAID/OFDA. The NRP designates 15 Emergency Support Functions that identify specific disaster responses and the organizations that have significant roles in responding to the disasters. Two key annexes apply to international disaster assistance. The International Coordination Support annex provides guidance on carrying out responsibilities for international coordination in support of the government’s response to an Incident of National Significance. Under this annex, DOS is charged with coordinating requests for foreign assistance based on needs conveyed by DHS or other federal agencies. DOS facilitates communication, on behalf of the United States, with foreign governments that can assist or support response, mitigation, and recovery efforts and acts as an intermediary for requests and foreign offers of assistance to the U.S. government. The NRP also includes a Financial Management Annex. This annex requires federal agencies to use proper federal financial principles, policies, regulations, and management controls to ensure proper accountability of funds.
To safeguard assets, agencies can use the Comptroller General’s Standards for Internal Control in the Federal Government. These standards provide federal agencies with the framework necessary to establish internal controls and thus safeguard and monitor assets and inventory to prevent waste, loss, or unauthorized use. USDA and FDA were also involved in the Hurricane Katrina relief effort. USDA is responsible for regulating the importation of animals and animal-derived materials to ensure that exotic animal and poultry diseases are not introduced into the United States, as well as to ensure that imported meat, poultry, or egg products are fit for human consumption. FDA regulates the importation of foods (except for certain meats and poultry products), drugs (human, animal, and biological), cosmetics, medical devices, and radiation-emitting devices, as defined in the Federal Food, Drug, and Cosmetic Act. For conventional operations, USDA and FDA are notified by U.S. Customs and Border Protection (CBP) prior to the import of food items or medical supplies in order to ensure that the items are acceptable for receipt in the United States. The ad hoc processes to accept, receive, and distribute international assistance varied depending on the type of assistance being offered. Whether the assistance was in the form of cash or in-kind donations, including foreign military donations, offers were supposed to be coordinated initially through the DOS Hurricane Katrina Task Force. We noted, however, that not all foreign assistance was coordinated through DOS. For example, an unknown quantity of in-kind assistance came to the United States directly from foreign militaries. On September 15, 2005, the President ordered a comprehensive review of the federal government’s response to Hurricane Katrina. The administration released its report, The Federal Response to Hurricane Katrina: Lessons Learned, on February 23, 2006.
The report contained 125 recommendations, one of which requires DOS and DHS to lead an interagency effort to develop procedures for reviewing, accepting, or rejecting any offers of international assistance for a domestic catastrophic incident, including a mechanism to receive, disburse, and audit any cash assistance. Officials from DOS, FEMA, and DOD told us that by June 1, 2006, they would provide policies and procedures for managing international assistance to the Homeland Security Council. Our report complements the findings in the administration’s report.

Foreign Countries Donated Millions in Cash, But Policies, Procedures, and Plans Were Not in Place

In the absence of policies, procedures, and plans, DOS developed an ad hoc process to manage the cash donations flowing to the U.S. government from other countries for Hurricane Katrina relief efforts. Through this process, $126 million was donated to the U.S. government, which DOS recorded in a designated account at the U.S. Treasury. An NSC-led interagency working group was established to make policy decisions about the use of the funds. FEMA officials told us they had identified and presented to the working group a number of items on which the donated funds could be spent. Once accepted by FEMA under the Stafford Act, donated funds would be limited to use on activities in furtherance of the Act. The working group wanted greater flexibility in the use of the donated funds and thus held the funds pending the group’s determination as to which agency or agencies should ultimately accept and use the monies. By September 21, 2005, about $115 million had been received, and in October 2005, $66 million of the donated funds was accepted by FEMA and spent on a case management grant. As of March 16, 2006, $60 million remained undistributed in the DOS-designated account at the Treasury, which does not pay interest.
As discussed previously, undistributed funds accepted by FEMA under the Stafford Act and recorded at Treasury can receive interest. Because DOS expects additional cash donations to be received, it is important that cash management policies and spending plans are in place to deal with the forthcoming donations so that the purchasing power of the donated cash is maintained.

Ad Hoc Procedures Allowed Reasonable Accountability for Cash Donations

For offers of cash assistance, the DOS Hurricane Katrina Task Force developed ad hoc procedures to track and account for amounts offered and received as events evolved. Ad hoc procedures were necessary because specific policies and procedures for handling international cash donations to the federal government had not been developed. The DOS Hurricane Katrina Task Force evaluated each monetary offer by working with foreign donors to determine whether there were any specific restrictions or conditions associated with the offers. In making their donations, foreign donors generally did not place restrictions or conditions on amounts pledged. DOS also encouraged governments and private foreign donors to direct their cash contributions to the Red Cross and other organizations. Additionally, DOS coordinated with other federal agencies to determine whether any U.S. government sanctions imposed on a donating country prevented the acceptance of its offer. Once an offer was accepted on behalf of the U.S. government, DOS provided the donor with instructions on how to wire transfer the funds to a designated Department of the Treasury account maintained at the Federal Reserve Bank of New York specifically for DOS. In other cases, countries and private citizens wrote checks to the U.S. government that were deposited and routed to the same account following normal operating procedures. Figure 1 below shows the process developed to receive cash donations.
As of December 31, 2005, DOS reported that $126 million had been donated by 36 countries and international organizations. Our review noted that although DOS’s procedures were ad hoc, they did ensure the proper recording of the international cash donations received to date, and we were able to reconcile the funds received with those held in the designated DOS account at Treasury. DOS expects that additional donations could come in from several countries, including $400 million in pledged oil products from a foreign country. DOS officials told us that the foreign country’s governing body must approve the donation before this pledge can be executed and that the country intends to monetize—convert to cash—the oil products if and when its governing body approves the donation.

Cash Donation Management Policies, Procedures, and Plans Not in Place

In the absence of international cash donation management policies, procedures, and plans, an NSC-led interagency working group was established to determine uses for the international cash donations. In October 2005, $66 million of the donated funds had been accepted by FEMA under the Stafford Act and used for a Hurricane Katrina relief grant. As of March 16, 2006, the remaining $60 million from international donations was undistributed. We were told that the NSC-led interagency working group did not transfer the funds to FEMA because it wanted to retain the flexibility to spend the donated funds on a wider range of assistance than is permitted under the Stafford Act. During this period, while deliberations were ongoing, the funds were kept in a DOS account that did not pay interest, thereby diminishing the purchasing power of the donated funds and losing an opportunity to maximize the resources available for relief. Under the Stafford Act, FEMA could have held the funds in an account that can pay interest, but Treasury lacks the statutory authority to credit the DOS-held funds with interest.
If there are dual goals of flexibility and maintaining purchasing power, there are a number of options that could be considered. Table 1 below shows the dates of key events in the receipt and distribution of the international cash donations, according to documentation received and interviews with DOS and FEMA officials. In early September 2005, FEMA officials had identified an account at the U.S. Treasury for recording international cash donations and had developed a number of potential uses for the donations that would help meet the relief needs of the disaster. By September 21, 2005, about $115 million in foreign cash donations had been received. In its input to the NSC-led interagency working group, dated September 22, 2005, DOS recognized that every effort should be made to disburse the funds to provide swift and meaningful relief to Hurricane Katrina victims without compromising the internal controls needed to ensure transparency and the proper management and effective use of the cash donations. FEMA officials told us that on September 23, 2005, they had identified and proposed to the NSC-led interagency working group that the international cash donations could be spent on the following items for individuals and families affected by Hurricane Katrina: social services assistance, medical transportation, adapting homes for medical and handicap needs, job training and education, living expenses, building materials, furniture, and transportation. In responding to our draft report, a DHS official said that at the next meeting of the interagency working group, on October 7, 2005, FEMA, at NSC’s request, presented a more detailed description of certain potential activities, including a proposal to finance case management services for households affected by Hurricane Katrina.
On October 20, 2005, with the NSC-led interagency working group’s consensus, DOS transferred to FEMA $66 million of the international donations for the purpose of financing case management services for up to 100,000 such households. These services provide case workers to help individual households define their needs and obtain available assistance. On October 28, 2005, FEMA awarded a $66 million, 2-year case management grant to the United Methodist Committee on Relief. With these funds, the United Methodist Committee on Relief will lead and manage a national consortium of 10 primary organizations that will provide case management services to victims of Hurricane Katrina. As of February 2006, the remaining $60 million had not been released, pending the NSC-led interagency working group’s determination as to which agency or agencies should ultimately accept and use the remaining funds. The NSC-led interagency working group set various parameters for using the funds, including that the funds should be used for “bricks and mortar” projects, such as buildings, that provide tangible evidence of how contributions were used. We were told that the NSC-led interagency working group determined that the use of those funds, once accepted by FEMA under the Stafford Act, would be more limited than the wider range of possible uses available if the funds were held until their ultimate use was determined and then accepted under the gift authorities of other agencies. DOS and FEMA officials told us that for the remaining $60 million in donated funds, the NSC-led interagency working group was considering a series of proposals received from various entities, both public and private.
At the time of our review, a member of the NSC-led interagency working group told us the group had agreed that the vital needs of schools in the area would be an appropriate place to apply the donations and that it was working with the Department of Education to finalize arrangements to provide funding to meet those needs. FEMA officials told us that under the Stafford Act, they could use donated funds for projects such as rebuilding schools, but projects for new school buildings are not consistent with Stafford Act purposes unless they replace damaged ones. Also, according to a DHS official, the Act would have required receiving entities to match FEMA funds for these purposes. However, because of the devastation, the entities would have difficulty matching FEMA funds, which in essence prevented FEMA from undertaking these types of projects. According to DHS, FEMA considered whether it would be useful for donated funds to contribute to the non-federal share for applicants having trouble meeting it, but FEMA would need legislative authority to use donated funds to match federal funds. We contacted NSC to further discuss these matters; however, NSC did not respond to our requests for a meeting. On March 16, 2006, DOS and the Department of Education signed a Memorandum of Agreement regarding the use of $60 million of the international cash donations. We did not review the details of this agreement. Advance planning is very important, given the outstanding pledges of $400 million or more that DOS officials indicated will likely be received. While acknowledging that the U.S. government has never previously had occasion to accept such large amounts of international donations for disaster relief, going forward, advance planning is a useful tool to identify potential programs and projects prior to the occurrence of an event of such magnitude.
The administration’s report, The Federal Response to Hurricane Katrina: Lessons Learned, released on February 23, 2006, recognized that there was no pre-established plan for handling international donations and that implementation of the procedures developed was a slow and often frustrating process. Among other recommendations, the report states that DOS should establish, before June 1, 2006, an interagency process to determine appropriate uses of international cash donations and ensure timely use of these funds in a transparent and accountable manner. DOS officials recognized that the ad hoc process needed to be formalized and planned to develop such procedures by June 1, 2006. While the NSC-led interagency working group was reviewing various proposals on the further use of the funds beyond the initial $66 million, the remaining $60 million was being held in a DOS account at the U.S. Treasury that does not pay interest. Treasury lacks the statutory authority to credit these DOS-held funds with interest. Since these funds have not yet been used, their purchasing power has diminished due to inflation. If these funds had been placed in an account that could be credited with interest to offset the erosion of purchasing power, the amount of funds available for relief and recovery efforts would have increased while decision makers determined how to use them. The U.S. government would be responsible for paying the interest if these funds were held in an account at the Treasury that can earn interest. Although the Stafford Act does not apply to the donated funds maintained in the DOS account at Treasury, the Stafford Act does provide that excess donated funds may be placed in Treasury securities, and the related interest paid on such investments would be credited to the account. This Stafford Act provision applies only to donated funds that have been accepted by FEMA.
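The order of magnitude of such forgone interest can be checked with a simple-interest calculation. The sketch below is illustrative only and is not GAO's methodology: the 4 percent yield and the roughly 155-day holding period (September 21, 2005, through February 23, 2006) are assumptions chosen for the example.

```python
# Illustrative estimate (not GAO's methodology) of interest forgone while
# donated funds sat in a non-interest-bearing account. The yield and the
# day count below are assumptions for the example.

def forgone_interest(principal: float, annual_rate: float, days: int) -> float:
    """Simple interest on `principal` over `days` at `annual_rate`, 365-day year."""
    return principal * annual_rate * days / 365

# Assume the remaining $60 million was held for about 155 days
# (September 21, 2005, through February 23, 2006) at an assumed
# short-term Treasury yield of 4 percent.
estimate = forgone_interest(60_000_000, 0.04, 155)
print(f"Estimated forgone interest: ${estimate:,.0f}")
```

Under these assumptions the estimate comes out on the order of $1 million, consistent in magnitude with the report's figure; actual Treasury security yields and holding periods would shift the number somewhat.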
Had the foreign monetary donations been placed in Treasury securities, we estimate that by February 23, 2006, the remaining funds for relief efforts would have increased by nearly $1 million. Although Treasury lacks the authority to invest the foreign monetary donations received by DOS, the FEMA account does permit the government to protect the purchasing power of foreign monetary donations. As noted previously, outstanding pledges totaling over $400 million could be received in the near future. Advance planning and procedures for the decision-making process in the disbursement of funds are important so that this money can be used for disaster relief in a timely manner or be placed in an account to earn interest for the benefit of relief and reconstruction efforts while decisions are being made on how to spend the funds. When developing policies, procedures, and plans to provide the flexibility afforded by leaving the international donations in the DOS account, it is important that consideration also be given to strategies that can help maintain the purchasing power of the international donations. If the goal is to maintain both purchasing power and flexibility, then among the options to consider are seeking statutory authority for DOS to record the funds in a Treasury account that can pay interest, similar to donations accepted under the Stafford Act, or allowing DOS to deposit the funds in an existing Treasury account of another agency that can pay interest pending decisions on how the funds would be used.

Policies and Procedures Were Lacking in the Acceptance and Distribution of In-Kind Donations, Including Foreign Military Donations

The agencies with responsibilities regarding international assistance did not have policies and procedures in place to guide the acceptance and distribution of in-kind donations, including foreign military donations, received from 43 countries and international organizations.
With little guidance, DOS, FEMA, OFDA, and DOD created ad hoc policies and procedures in an effort to provide the assistance to victims as quickly as possible. However, we noted areas in which the ad hoc procedures lacked internal controls to ensure sufficient agency oversight of the assistance and to ensure that the assistance was used as intended. For example, the lack of guidance, inadequate information up front about the nature and content of foreign offers of in-kind assistance, and insufficient advance coordination before acceptance resulted in the arrival of food and medical items, such as Meals Ready to Eat (MREs) and medical supplies, that did not meet USDA or FDA standards and thus could not be distributed in the United States. Also, the ad hoc procedures allowed for confusion about which agency—FEMA or DOD—accepted and was responsible for oversight of foreign military donations.

Process Developed for Accepting, Receiving, and Distributing In-Kind Donations

For offers of in-kind assistance, FEMA worked in close coordination with the DOS Task Force to determine whether it should accept the offers. Specifically, FEMA provided the Task Force with a list of supplies the agency could use to assist in recovery efforts. The Task Force compared the offers of assistance against the list of needed supplies provided by FEMA. As matches were identified by the Task Force, DOS relayed a message to the donor that the offer would be accepted on behalf of the United States for use in Hurricane Katrina relief efforts. Once a message of acceptance was relayed to the foreign country or international organization donating the in-kind assistance, FEMA tasked the Office of Foreign Disaster Assistance with providing logistical support for physical receipt of the donation. USAID/OFDA coordinated with DOD–Northern Command to establish a location that could be used to receive international donations.
The location had to be both accessible to numerous flights delivering supplies and in close proximity to the areas devastated by Hurricane Katrina, and USAID/OFDA and DOD–Northern Command determined that Little Rock Air Force Base best met these criteria. Accordingly, USAID/OFDA coordinated with foreign donors for in-kind donations to arrive in Little Rock, Arkansas, where agency personnel would unload donations and, upon request from FEMA, forward the donations to a distribution point. Figure 2 below shows the process developed for accepting, receiving, and distributing in-kind donations.

Lack of Guidance Regarding the Tracking and Confirmation of Receipt for International Assistance

In the absence of guidance, USAID/OFDA created a database to track the assistance as it arrived. We found that, under the circumstances, USAID/OFDA reasonably accounted for the assistance, especially given the lack of manifest information and the amount of assistance arriving within a short time. Compounding the difficulty of recording the assistance as it arrived were planes from the North Atlantic Treaty Organization for which the organization would not provide reliable manifest information. Internal controls, such as a system to confirm that shipments are received at intended destinations, provide an agency with oversight; for FEMA in this case, such controls would help ensure that international donations were received at FEMA destination sites. On September 14, 2005, FEMA and USAID/OFDA agreed that USAID/OFDA would track the assistance from receipt through final disposition. However, the system USAID/OFDA created did not include confirming that the assistance was received at the FEMA distribution sites. USAID/OFDA did not set up these procedures on its own in part because it had never distributed assistance within the United States; its mission is to deliver assistance in foreign countries.
FEMA officials told us that they assumed USAID/OFDA had these controls in place. FEMA and USAID/OFDA officials could not provide us with evidence confirming that the assistance sent to distribution sites was received. Without these controls in place to ensure accountability for the assistance, FEMA does not know whether all or part of these donations were received at FEMA distribution sites. Had USAID/OFDA created a system to track the items transported through receipt at distribution sites, and had FEMA overseen the USAID/OFDA process, FEMA would be able to determine the extent to which all or part of the foreign assistance was received at the FEMA distribution sites.

Lack of Guidance Resulted in the Arrival of Food and Medical Items That Could Not Be Used

The lack of guidance, inadequate information up front about foreign offers of in-kind assistance, and insufficient advance coordination before agreeing to receive it resulted in food and medical items, such as MREs and medical supplies, coming into the United States that did not meet USDA or FDA standards and thus could not be distributed in the United States. The food items included MREs from five countries. Because of the magnitude of the disaster, some normal operating procedures governing the import of goods were waived. According to USDA and FDA officials, under normal procedures, entry documents containing specific information, which are filed with CBP, are transmitted to USDA and FDA for those agencies’ use in determining whether the commodities are admissible into the United States. Based on U.S. laws and regulations, the agencies then determine whether the items to be imported meet U.S. standards. CBP authorized suspension of some normal operating procedures for the import of regulated items like food and medical supplies without consultation or prior notification to USDA or FDA.
Thus, USDA and FDA had no involvement in the decision-making process for regulated product donations, including MREs and medical supplies, before the United States agreed to receive them. FEMA notified USDA and FDA on approximately September 4, 2005, that food and medical supplies had been received by the U.S. government and approved by CBP for entry into the United States. However, FEMA officials told us that they did not accept MREs from one country even though these MREs were shipped and stored in the warehouse along with the MREs from the other countries. On approximately September 4, the items were either en route or had already arrived at the staging area in Little Rock, Arkansas. USDA and FDA then sent personnel to Little Rock to inspect the donations. Simultaneously, USAID/OFDA personnel, unaware that some of these donations would not be eligible for distribution and trying to expedite the provision of relief supplies, forwarded approximately 220,000 MREs to distribution points. When USAID/OFDA officials became aware that the MREs they distributed had not been approved by USDA, they recalled the items to Little Rock, Arkansas, pending USDA inspection. USDA inspectors determined that a number of the MREs donated to the United States contained meat and poultry products from countries that, under U.S. regulations, were excluded from exporting meat to the United States. According to USDA, the MREs were banned either because of concerns regarding Bovine Spongiform Encephalopathy (BSE) meat contamination or because they originated in countries lacking food inspection systems equivalent to those of the United States. In addition, FDA found that many of the medical supplies received in Little Rock were not approved for use in the United States because they were either labeled with instructions in a language other than English or stored under conditions deemed unsanitary. 
Both USDA and FDA, based on regulations intended to protect public health, prevented distribution of some international donations. Under FEMA guidance, USAID/OFDA received 359,600 rations of MREs that could not be distributed within the United States. USAID/OFDA, on behalf of FEMA, has been storing the MREs and medical supplies at a private warehouse in Little Rock, Arkansas, until DOS and FEMA determine what to do with them. As of February 1, 2006, FEMA and DOS had paid the warehouse $62,000, with an additional $17,600 contract pending for the month of February. In addition to the storage cost, there is an unquantifiable diplomatic cost in rejecting foreign donations after they have been received. DOS has arranged for some of the MREs to be shipped to foreign countries in need, and DOS officials told us that the receiving countries will pay the shipping costs. As of February 3, 2006, approximately 40 percent of the 359,600 rations of MREs had been forwarded to two other countries. DOS plans to forward an additional 21 percent to other countries by February 28, 2006. While the disposition of the remaining 40 percent of the MREs and the medical supplies still stored in the private warehouse is uncertain, DOS will continue to pay storage fees. The following picture displays the numerous pallets of MREs stored in Little Rock, Arkansas, as of November 9, 2005. The costs for FEMA's receipt and later storage of the MREs and medical supplies that were not distributed for the disaster were attributable in part to the lack of policies and procedures needed to guide FEMA through the process of coordinating and accepting international in-kind assistance. First, the list of items that could be used for disaster relief that FEMA provided to DOS was very general and did not note any exceptions, for example regarding the contents of MREs. 
Also, in commenting on our report, DHS stated that FEMA repeatedly requested additional information from DOS about the foreign items being offered, in order to determine whether they should be accepted, but DOS did not respond. Had FEMA supplied DOS officials early on with more detailed information about what could be accepted and what could not ultimately be distributed, and had DOS requested and received additional details from potential donors on the nature and contents of the assistance, such as MREs, they might have prevented the unusable products from coming into the United States. FEMA officials told us that in the event of another disaster of this size, they would coordinate with USDA, FDA, and other agencies as required to avoid similar problems. Policies and Procedures Were Lacking in the Oversight of Foreign Military Donations In the absence of policies and procedures, DOS, FEMA, and DOD created ad hoc policies and procedures to manage the receipt and distribution of foreign military goods and services; however, this guidance allowed confusion about which agency had oversight of these donations. Also, there were no controls or procedures to ensure that all foreign military donations were vetted through the DOS process. The offers of foreign military assistance included, for example, the use of amphibious ships and diver salvage teams. For foreign military donations, the DOS Hurricane Katrina Task Force coordinated with FEMA and DOD, through Northern Command, to determine whether an offer of assistance could be utilized. Northern Command reviewed the offers of assistance and compared them against the mission assignments it received from FEMA, which included such tasks as clearing ports and waterways of debris. If Northern Command believed a foreign military's offer of assistance could be used to accomplish a mission assignment, the command coordinated the receipt of the assistance with the foreign donor. 
Figure 4 below shows the process developed for acceptance and receipt of foreign military assistance. The ad hoc procedures, however, allowed confusion about which agency—DOD or FEMA—was to formally accept the foreign military assistance; each agency apparently assumed the other had done so under its respective gift authority. It is unclear whether FEMA or DOD accepted or maintained oversight of the foreign military donations that were vetted through the DOS task force. A FEMA official told us that the agency was unable to explain how the foreign military donations were used because FEMA could not match the use of the donations with the mission assignments it gave Northern Command. Establishing accountability is an important internal control activity that helps ensure an organization is responsible for accounting for goods and services and that they are used as intended. While we found no evidence to suggest that any of the foreign military goods or services were not used as intended, establishing and maintaining oversight provides greater assurance that such donations are used as intended. Moreover, FEMA and Northern Command officials told us of instances in which foreign military donations arrived in the United States without being vetted through the DOS task force. For example, we were told of foreign military MREs that were shipped to a military base and distributed directly to hurricane victims. Having policies and procedures in place would have instructed federal officials to coordinate all foreign military offers of assistance through the DOS task force, which would work with FEMA and DOD to determine the best use for the items. DOD officials acknowledged the need for such guidance and are working to establish policies and procedures to manage international assistance. 
When we asked about shipments that were not vetted through the task force, neither DOS, FEMA, nor DOD officials could provide us information on the type, amount, or use of these donations. As a result, we cannot determine whether these items of assistance were safeguarded and used as intended. Conclusions We recognize that because the United States government had never before received such substantial amounts of international disaster assistance, DOS, FEMA, OFDA, and DOD needed to create ad hoc procedures to manage the acceptance and distribution of the assistance as best they could. Going forward, it will be important to have in place clear policies, procedures, and plans for how international cash donations are to be managed and used, which would enhance the accountability and transparency of these funds. In addition, there is a need to consider whether international donations should be treated on the same basis as Stafford Act donations for the purpose of Treasury crediting interest to such donations. Because this was the first time international donations were accepted, this situation was not contemplated. If the goal is to maintain both purchasing power and flexibility, then among the options to consider are seeking statutory authority for DOS to record the funds in a Treasury account that can pay interest, or allowing DOS to deposit the funds in an existing Treasury account of another agency that can pay interest pending decisions on how the funds would be used. In addition, focusing immediate attention on the potentially forthcoming donations of $400 million, as well as the $60 million in presently available funds, would be prudent. 
With respect to the donations of food and medical supplies, we agree that normal procedures should be waived to expedite recovery efforts when necessary; however, food and medical supplies are essential in any disaster, and the health and safety of the public should be considered when accepting food and medical assistance from the international community. Moreover, the failure to track in-kind donations after they were loaded onto trucks resulted in a lack of assurance that all of the international assistance FEMA accepted was safeguarded and used as intended. The need for proper knowledge, acceptance, and oversight of foreign military donations is equally important. Recommendations for Executive Action As mentioned previously, in February 2006, the administration issued its report on the federal response to Hurricane Katrina and the lessons learned from that response. In the report, the administration made 125 recommendations, including several to improve the management of international donations. Specifically, DOS and DHS are required to lead an interagency effort to improve the management of international donations, which includes developing procedures for reviewing, accepting, or rejecting any offers, as well as developing a mechanism to receive, disburse, and audit any cash donations. To help ensure that the cognizant agencies fulfill their responsibility to account for and effectively manage foreign donations and maintain adequate internal controls over government resources, we recommend that the Secretary of Homeland Security, in consultation with the Secretary of State, establish within the National Response Plan—or other appropriate plans—clearly delineated policies and procedures for the acceptance, receipt, and distribution of international assistance. As the agencies develop and implement the administration's recommendations, we believe they should also incorporate the following actions and procedures into their guidance. 
Develop policies, procedures, and plans to help ensure that international cash donations for disaster relief and assistance are accepted and used appropriately as needed.

Consider the cash management options discussed in the conclusions section above and place international cash donations in an account that would pay interest while decisions are pending on their use, to maintain the purchasing power of those donations.

Maintain oversight of foreign-donated in-kind assets by tracking them from receipt to disbursement, to reasonably ensure that assistance is delivered where it is intended.

Establish plans for the acceptance of foreign-donated items that include coordinating in advance with regulatory agencies, such as USDA and FDA, in order to prevent the acceptance of items that are prohibited from distribution in the United States, regardless of waivers that might be established to expedite the importing of foreign assistance; these plans should also include DOS obtaining information on acceptable or unacceptable items in order to communicate to the international community what is needed or what cannot be accepted.

We also recommend that the Secretary of Defense, in consultation with the Secretaries of State and Homeland Security, take the following two actions:

Establish within the National Response Plan—or other appropriate plans—clearly delineated policies and procedures to ensure that foreign military offers of assistance for domestic disasters are coordinated through DOS so that they are properly accepted, safeguarded, and used as intended.

Develop and issue internal DOD guidance to commanders on the agreed-upon process for coordinating assistance through DOS.

Agency Comments and Our Evaluation We asked the Secretaries of Defense, Homeland Security, State, and Treasury to comment on our draft. We also asked USDA, FDA, and USAID/OFDA to provide comments. 
DOD and DHS generally agreed with our recommendations and provided written comments on a draft of this report, included in appendixes II and III, respectively. We received technical comments from DOS, DOD, USAID/OFDA, FEMA, FDA, and USDA, which we incorporated as appropriate. DOD agreed with the recommendations pertaining to it and suggested that we adjust the wording of the recommendation that procedures be developed to ensure that foreign military donations are routed through DOS. We adjusted the recommendation based on DOD's suggestion. In its technical comments, DOD also suggested specific information on the process to coordinate international offers of assistance through DOS, including ensuring that the offers match U.S. requirements, meet U.S. standards, and are received at the right locations. These specifics may be considered as the agencies develop policies, procedures, and plans for the management of future international assistance. DHS generally agreed with our recommendations and noted that, in some cases, actions were already under way to address them. Regarding our recommendation to develop policies, procedures, and plans to ensure that international cash donations for disaster relief are accepted and used appropriately as needed, DHS noted that, in coordination with Treasury and the Office of Management and Budget, it is already developing a system to manage such donations. DHS also agreed with our recommendation regarding cash management options that would maintain the purchasing power of the cash donations while decisions are pending on their use. DHS added that this recommendation was consistent with what FEMA did during Hurricane Katrina, pointing out that on September 6, 2005, FEMA established an interest-bearing account to hold international funds and began identifying programs and needs that would not be eligible for FEMA assistance but could benefit from monetary donations. 
DHS also agreed that FEMA should maintain oversight of foreign-donated in-kind assets to the distribution points. DHS noted that FEMA and USAID/OFDA agreed that it was USAID/OFDA's responsibility to track incoming international donations. We acknowledge this agreement but note in our report that the in-kind donations were not tracked to the final distribution points with confirmation that they arrived, and that USAID/OFDA and FEMA could not provide evidence that this had been accomplished. We clarified the report in this regard. DHS agreed with our recommendation regarding the need to coordinate in advance with regulatory agencies such as USDA and FDA to prevent the receipt of items that could not be distributed in the United States. DHS noted that FEMA coordinated with these agencies during Hurricane Katrina and made constant requests to DOS to obtain more information from the donors about the donations to determine whether they could be properly accepted. We agree that more specificity is needed, through DOS channels, about the nature and content of the items foreign nations are offering and the United States can accept, such as MREs, and we reflected DHS's comment in the final report. Without such information, it may not be possible to undertake appropriate coordination with regulatory agencies such as USDA and FDA and make a sound determination, before the items arrive, as to whether they should be accepted and could be used in the United States. DHS also agreed that all foreign military offers of assistance for domestic disasters should be coordinated through DOS for official acceptance or denial. However, we continue to believe that clear procedures are needed to establish in advance which agency—FEMA or DOD—accepts and maintains oversight of such donations. We adjusted our draft report to reflect the apparent confusion over the acceptance of foreign military donations. 
We are sending copies of this report to the Secretaries of Homeland Security, Defense, and State and to interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Davi M. D'Agostino at (202) 512-5431 or [email protected] or McCoy Williams at (202) 512-9095 or [email protected] if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Scope and Methodology To meet our objectives for this report, we relied on information gathered through our visits and interviews with key personnel within the Department of State's (DOS) Hurricane Katrina Task Force; Office of the General Counsel for DOS; Department of Homeland Security (DHS) Inspector General; Office of the General Counsel for the Federal Emergency Management Agency (FEMA); FEMA Response Division; FEMA Recovery Division; Office of the Chief Financial Officer for FEMA; FEMA/Financial and Acquisitions Management Division; FEMA/Grants Management Division; United States Agency for International Development (USAID)/Office of Foreign Disaster Assistance (OFDA); OFDA/Response Alternatives for Technical Services; Office of the Assistant Deputy Under Secretary of Defense; Office of the Assistant Secretary of Defense; Northern Command (NORTHCOM)/Joint Staff Logistics; NORTHCOM/Joint Staff Civil Affairs; NORTHCOM/Political Advisor; Department of the Treasury (Treasury)/Cash Accounting; Treasury/Chief Systems Integrity Division; Food and Drug Administration (FDA); and United States Department of Agriculture (USDA)/Food Safety and Inspection Service. 
We conducted our work in Washington, D.C.; Little Rock, Arkansas; Colorado Springs, Colorado; and New York, New York, from October 2005 through February 2006, in accordance with generally accepted government auditing standards. DOS and FEMA officials told us the National Security Council (NSC) had established an interagency working group that had a role in determining how the international cash donations were to be used. We contacted NSC to discuss its role in managing the international cash donations; however, NSC did not respond to our request for a meeting. To determine the amount of cash that was donated by foreign countries and the extent to which it has been used to assist hurricane victims, we gathered information from interviews with DOS, FEMA, and Treasury. To assess the reliability of data on foreign cash donations received by the U.S. government from the date Hurricane Katrina hit the United States through December 31, 2005, we talked with DOS, FEMA, and Treasury officials to gain an understanding of the procedures followed in recording the funds. We also validated $123,611,143, which is 97.8 percent of the Hurricane Katrina collections reflected in Treasury records, by comparing the amounts to supporting documentation such as Treasury wire transfers and DOS check receipt documents. We also traced the transfer of $66 million in funds from DOS to FEMA. We determined the data were sufficiently reliable for the purposes of this report. To obtain an understanding of the oversight controls over FEMA's 2-year case management grant, we interviewed officials from the FEMA/Grants Management Division, the United Methodist Committee on Relief, and the DHS Office of Inspector General, and reviewed pertinent documents such as the grant proposal and agreement. We also contacted the NSC to discuss why an interagency working group, rather than FEMA, was used to manage the donated cash and the process by which the group established the parameters governing how the cash was to be used. 
NSC did not respond to our requests for a meeting. To determine the extent to which the federal agencies with responsibilities regarding the international assistance offered to the United States had policies and procedures in place to ensure appropriate accountability for the acceptance and distribution of in-kind donations, including foreign military donations, we relied on information gathered during interviews with officials from DOS, DHS, DOD, USDA, and FDA. We reviewed the National Response Plan International Coordination Support Annex and Financial Management Annex; the Robert T. Stafford Act; Homeland Security Presidential Directive 5; the Federal Food, Drug, and Cosmetic Act; and 9 CFR 94.18 to determine the responsibilities of federal agencies. We also obtained, reviewed, and analyzed the Memorandum of Agreement between the Department of State and the Department of Homeland Security; a FEMA-created international assistance flow chart and processes document; a NORTHCOM-created international donations flow chart; USAID Commodity Dispatch Procedures; and FDA import procedures to assist in understanding the roles of federal agencies. We reviewed and analyzed summaries of international assistance received; instructional and acceptance cables from the Department of State; instructions provided to FEMA accountants for recording in-kind donations; and USAID Commodity Dispatch Procedures for FEMA to call forward international donations from the arrival site in Little Rock, Arkansas. To assess the reliability of the data provided, we talked with knowledgeable agency officials about the data and reviewed relevant documentation. We visited the Smart Choice Delivery warehouse in Little Rock, Arkansas, to discuss and observe the international Meals-Ready-to-Eat stored in the facility. We obtained and reviewed the contract between the Office of Foreign Disaster Assistance and Smart Choice Delivery. 
In addition, we interviewed representatives from the American Red Cross and the United Nations International Children's Fund in order to understand the processes and procedures of leading nongovernmental agencies that are experienced in accepting non-monetary donations. Comments from the Department of Defense Comments from the Department of Homeland Security GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to those named above, Kay Daly, Kate Lenane, Charles Perdue, Jay Spaan, Lorelei St James, Pamela Valentine, Cheryl Weissman, and Leonard Zapata made key contributions to this report. Related GAO Products Hurricane Katrina: Status of the Health Care System in New Orleans and Difficult Decisions Related to Efforts to Rebuild It Approximately 6 Months After Hurricane Katrina. GAO-06-576R. Washington, D.C.: March 28, 2006. Agency Management of Contractors Responding to Hurricanes Katrina and Rita. GAO-06-461R. Washington, D.C.: March 15, 2006. Hurricane Katrina: GAO's Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. Disaster Preparedness: Preliminary Observations on the Evacuation of Hospitals and Nursing Homes Due to Hurricanes. GAO-06-443R. Washington, D.C.: February 16, 2006. Investigation: Military Meals, Ready-To-Eat Sold on eBay. GAO-06-410R. Washington, D.C.: February 13, 2006. Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA's Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-403T. Washington, D.C.: February 13, 2006. Statement by Comptroller General David M. Walker on GAO's Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006. 
Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006. Hurricane Protection: Statutory and Regulatory Framework for Levee Maintenance and Emergency Response for the Lake Pontchartrain Project. GAO-06-322T. Washington, D.C.: December 15, 2005. Hurricanes Katrina and Rita: Provision of Charitable Assistance. GAO-06-297T. Washington, D.C.: December 13, 2005. Army Corps of Engineers: History of the Lake Pontchartrain and Vicinity Hurricane Protection Project. GAO-06-244T. Washington, D.C.: November 9, 2005. Hurricanes Katrina and Rita: Preliminary Observations on Contracting for Response and Recovery Efforts. GAO-06-246T. Washington, D.C.: November 8, 2005. Hurricanes Katrina and Rita: Contracting for Response and Recovery Efforts. GAO-06-235T. Washington, D.C.: November 2, 2005. Federal Emergency Management Agency: Oversight and Management of the National Flood Insurance Program. GAO-06-183T. Washington, D.C.: October 20, 2005. Federal Emergency Management Agency: Challenges Facing the National Flood Insurance Program. GAO-06-174T. Washington, D.C.: October 18, 2005. Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005. Army Corps of Engineers: Lake Pontchartrain and Vicinity Hurricane Protection Project. GAO-05-1050T. Washington, D.C.: September 28, 2005. Hurricane Katrina: Providing Oversight of the Nation's Preparedness, Response, and Recovery Activities. GAO-05-1053T. Washington, D.C.: September 28, 2005.
In response to Hurricane Katrina, countries and organizations provided the United States government with cash and in-kind donations, including foreign military assistance. The National Response Plan establishes the Department of State (DOS) as the coordinator of all offers of international assistance. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security (DHS) is responsible for accepting the assistance and coordinating its distribution. In light of widespread congressional and public interest in U.S. agencies' accountability in receiving and distributing assistance to hurricane victims, this report is one of several initiated under the authority of the Comptroller General to review the federal government's response to Hurricane Katrina. It examines (1) the amount and use of internationally donated cash and (2) the extent to which federal agencies have adequate policies and procedures to ensure proper accountability for the acceptance and distribution of that assistance. Because the U.S. government had not previously received such substantial amounts of international disaster assistance, ad hoc procedures were developed to accept, receive, and distribute the cash and in-kind assistance. Understandably, not all procedures would be in place at the outset to provide a higher level of accountability. The administration recognized the need for improvement in its recent report on lessons learned from Hurricane Katrina. GAO was able to trace the cash donations received to designated U.S. Treasury accounts or to their disbursement. In the absence of policies, procedures, and plans, DOS developed an ad hoc process to manage $126 million in foreign cash donations to the U.S. government for Hurricane Katrina relief efforts. As cash donations arrived, a National Security Council (NSC)-led interagency working group was convened to make policy decisions about the use of the funds. 
FEMA officials told GAO they had identified and presented to the working group a number of items on which the donated funds could be spent. The NSC-led interagency working group determined that the use of those donated funds, once accepted by FEMA under the Stafford Act, would be more limited than the wider range of possible uses available if the funds were held and then accepted under the gift authorities of other agencies. In October 2005, $66 million of the donated funds was spent on a FEMA case management grant, and as of March 16, 2006, $60 million remained undistributed in the DOS-designated account at the Treasury, which did not pay interest. Treasury may pay interest on funds accepted by FEMA under the Stafford Act. According to DOS, an additional $400 million in international cash donations could arrive. It is important that cash management policies and spending plan options be considered and in place to deal with the forthcoming donations so that the purchasing power of the donated cash is maintained for relief and reconstruction. FEMA and other agencies did not have policies and procedures in place to ensure the proper acceptance and distribution of in-kind assistance donated by foreign countries and militaries. In-kind donations included food and clothing. FEMA and other agencies established ad hoc procedures. However, in the distribution of the assistance to FEMA sites, GAO found that no agency tracked and confirmed that the assistance arrived at its destinations. Also, a lack of procedures, inadequate up-front information about the donations, and insufficient coordination resulted in the U.S. government agreeing to receive food and medical items that were unsuitable for use in the United States, with storage costs of about $80,000. The procedures also allowed confusion about which agency was to accept and provide oversight of foreign military donations. 
DOD's lack of internal guidance regarding the DOS coordination process resulted in some foreign military donations arriving without DOS, FEMA, or DOD oversight.
Background EPA determined that 23 states needed enhanced I&M programs in order to meet national air quality standards. Figure 1 shows the 23 states that are required to implement enhanced I&M programs. Because the ozone levels in many areas exceeded the national ozone standard, the Congress recognized that reducing ozone levels would be a long-term effort for some states and established interim goals and milestones in title I of the Clean Air Act Amendments of 1990. Areas that exceeded the national ozone standard were classified as “nonattainment areas,” and according to the severity of their ozone problems, states were given future dates ranging from 3 to 20 years to attain the ozone standard. Title I required most ozone nonattainment areas to develop plans for EPA’s approval that showed which control measures they would need to achieve a 15-percent reduction in VOC emissions by November 1996. Furthermore, the states with serious to extreme nonattainment areas were required to prepare plans showing how they would achieve additional VOC reductions beyond 1996. Enhanced I&M programs are designed to measure the pollution that vehicles release when they are operated under simulated driving conditions. EPA issued an enhanced I&M regulation in November 1992 that required the states to meet or exceed a stated performance standard based on a model program that included IM-240 testing equipment. Although the amendments required the states to implement their enhanced I&M programs by November 1992, EPA’s regulation postponed the required start date to January 1995 and required full implementation of the program by January 1996. Appendix II describes the statutory and regulatory requirements for the enhanced I&M program. In August 1996, EPA recognized that the states’ delays in implementing their enhanced I&M programs would prevent many of them from achieving the 15-percent reduction in VOC emissions. 
Subsequently, in February 1997, EPA issued guidance to allow the states that revised their enhanced I&M programs under the September 1995 revised enhanced I&M regulation or the National Highway System Designation Act of 1995 (P.L. 104-59, Nov. 28, 1995) to have more flexibility in developing and implementing their programs. In order for the states to operate under the relaxed requirement, they had to demonstrate that their 15-percent reduction in VOC emissions would be achieved as soon as possible after November 1996, but no later than November 1999. The guidance allowed states to resubmit their VOC reduction plans to show that they would achieve the required reductions from the implementation of their enhanced I&M programs by November 1999. According to EPA, the states that had not implemented their enhanced I&M programs as of November 1997 may be unable to demonstrate how they will achieve required VOC reductions. Many States Have Not Implemented Enhanced I&M Programs None of the 23 states met the November 1992 statutory date for implementing their enhanced I&M programs, and only 2 had begun testing vehicles by EPA’s January 1995 deadline for starting their programs. In total, 12 states had begun testing vehicles under enhanced I&M programs by April 1998. A number of factors account for the delays in implementing enhanced I&M programs, including opposition to the stringent requirements of EPA’s enhanced I&M regulation, the reluctance of some state legislatures to provide authority and funding for the programs, and difficulties in obtaining test equipment and software support. The 12 states that are testing vehicles account for 43 percent of the 52 million vehicles subject to the enhanced I&M testing. Furthermore, several of the other 11 states are scheduled to start testing vehicles within the next few months. 
For example, California and Georgia, which have 9.4 million vehicles that will be subject to enhanced I&M testing, are scheduled to start testing in June 1998 and July 1998, respectively. Appendix III shows the implementation and approval status and the number of vehicles subject to enhanced I&M testing for each of the 23 states. States Have Encountered Difficulties in Implementing Programs According to EPA, states opposed the agency’s enhanced I&M regulation because it did not allow them enough flexibility in designing and implementing their programs. The 1992 regulation required all enhanced I&M programs to meet or exceed a performance standard based on a model program that used computer-controlled test equipment and centralized “test-only” inspection centers. Some states believed that centralized programs resulted in fewer inspection centers, often making testing less convenient for vehicle owners and potentially resulting in longer waits than under previous I&M programs. Furthermore, the states believed that consumers would be inconvenienced by the 1992 enhanced I&M regulation because of the test-only feature of the model program, which required the owner of any vehicle that failed the inspection to go elsewhere to have repairs made and to return to the same inspection center for retesting. While the 1992 enhanced I&M regulation permitted the states to implement decentralized programs that allowed inspection centers to test and then repair vehicles, EPA determined that these programs were less effective in identifying and repairing vehicles with excessive emissions. Because of the opposition to the stringency of the 1992 regulation, EPA issued a revised enhanced I&M regulation in September 1995, and the Congress enacted the National Highway System Designation Act of 1995, which gave the states more flexibility to develop and implement their programs. 
For example, the revised regulation allowed the states to implement less stringent enhanced I&M programs if they could demonstrate emission reductions from other sources. The regulation also allowed the states more leeway in inspecting and repairing failed vehicles. Eight of the 23 states took advantage of the flexibility allowed by the revised regulation by implementing less stringent enhanced programs. Additionally, the National Highway System Designation Act of 1995—which prohibited EPA from requiring the states to have centralized IM-240 enhanced I&M programs—allowed the states to revise their programs to include decentralized testing and provided an 18-month interim approval period for them to demonstrate that their revised programs could achieve the needed emissions reductions. Eight of the 23 states have implemented or plan to implement the more flexible enhanced I&M programs under the act. Even though the revised enhanced I&M regulation and the National Highway System Designation Act of 1995 allowed more flexibility, nine states indicated in response to our survey that difficulties in obtaining legislative authority delayed the implementation of their enhanced I&M programs. For example, Massachusetts had planned to start inspecting vehicles under an enhanced I&M program in July 1997. However, as of November 1997, the date to which Massachusetts had committed to begin program operations, the state legislature had not enacted the needed legal authority for an enhanced I&M program, and vehicle testing had not begun. In December 1997, EPA notified Massachusetts that its enhanced I&M program was disapproved. Currently, Massachusetts is planning to begin testing vehicles in May 1999. Similarly, the Maryland legislature attempted to make the enhanced I&M program voluntary instead of mandatory, as required by the Clean Air Act Amendments of 1990, and this attempt delayed the implementation of the state’s program. 
However, the governor’s veto of this legislation paved the way for Maryland to start testing vehicles under its enhanced I&M program in the fall of 1997. In response to our survey, 13 states indicated that they have experienced problems with obtaining needed testing equipment or software support from vendors, which have delayed the implementation of their programs. These problems were especially apparent in late 1997 and early 1998, when several states were scheduled to start testing vehicles. According to EPA officials, only a limited number of vendors supply the testing equipment and the computer software needed for enhanced I&M inspection centers. With the high demand for the equipment in recent months, vendors have been unable to fill all orders. For example, Georgia had planned to have 300 inspection centers operating under an enhanced I&M program by July 1997. However, because of the vendor’s problem with delivering the equipment and providing software support, Georgia now plans to start testing vehicles in July 1998—a year later than originally planned. Overall, our survey of the 23 states identified a number of factors that delayed the states’ efforts to implement enhanced I&M programs. These included opposition to the stringent requirements of EPA’s initial program, difficulties in obtaining testing equipment, delays by EPA in issuing the initial regulation, difficulties in obtaining authority from state legislatures, and difficulties in certifying inspection centers and technicians. Figure 2 shows the factors cited by states as reasons for their delays. Public Acceptance of Enhanced I&M Programs Is Important The states recognize the importance of informing the public about the reasons for enhanced I&M programs. In fact, 14 states said that it was very or extremely important to educate the public about their enhanced I&M programs. 
Furthermore, seven said that they tried to educate the general public to a great or very great extent about the frequency of testing, the costs of tests, testing locations, and other pertinent information about the program. Seven states also said that they tried to educate the general public to a great or very great extent about the reasons for implementing enhanced I&M programs. For example, in implementing an enhanced I&M program, Georgia contracted with an advertising agency to develop and disseminate information through television and radio spots and distributed printed materials through community groups and organizations. A recent survey of the effectiveness of Georgia’s public information campaign for its I&M program showed that consumers believe that cars are the largest contributing factor to air pollution. The study also showed that 88 percent of Georgia’s consumers were aware of the current I&M program, and 76 percent believed that the program was doing a good job. In contrast, Maine initially tried to implement an enhanced I&M program in 1994 with little or no public relations efforts. After very strong public opposition to the program, the governor cancelled it. According to EPA, the opposition to the program was caused, in part, by the perception that the enhanced I&M program was being implemented as an alternative to imposing control measures on certain stationary sources. As of April 1998, Maine’s enhanced I&M program had been disapproved because the state’s revised plan for it did not meet all of EPA’s requirements. Even though some states have been more successful than others in overcoming public opposition and other obstacles to implementing their enhanced I&M programs, EPA has made only a limited effort to identify the practices these successful states have used and to share them with other states that are in the early stages of developing and implementing their programs. 
Delays in Implementing Enhanced I&M Programs Have Slowed Efforts to Reduce Ozone Levels Because of delays in implementing enhanced I&M programs, 19 of the 23 states are in jeopardy of not meeting deadlines for attaining the national ozone standard. The 19 states are relying on the enhanced I&M programs to reduce VOC emissions. In August 1996, EPA recognized that the states could not achieve a significant portion of their 15-percent VOC reductions by November 1996 because of delays in implementing enhanced I&M programs. It therefore examined other available control measures for reducing VOC emissions. EPA required the states to demonstrate in their VOC reduction plans that enhanced I&M programs were the most practical way for them to achieve the 15-percent reduction in VOC emissions. EPA then allowed the states to revise their enhanced I&M programs to claim credit for the emissions reductions that are based on the future implementation of their programs, provided they demonstrated that the required VOC reductions would be achieved as soon as possible after November 1996 but no later than November 1999. EPA also allowed the states to resubmit their VOC reduction plans to show that they would achieve the required VOC reductions from implementing their enhanced I&M programs by November 1999. EPA encouraged the states to customize their revised VOC reduction plans to include other control measures that would be the most practical for their areas to implement in achieving the required reduction in VOC emissions. Even with the relaxed requirement, 11 of the 19 states are at risk of not meeting the required VOC reductions specified under title I of the Clean Air Act Amendments of 1990 because they had not started testing vehicles as of April 1998. 
According to EPA, the states that had not implemented their enhanced I&M programs as of November 1997 may be unable to demonstrate how they will achieve required VOC reductions, and are at risk of having their VOC reduction plans disapproved because of the anticipated shortfall in VOC reductions. For example: EPA’s conditional interim approval of New Jersey’s enhanced I&M program, which accounts for 26 percent of the state’s planned reductions in VOC emissions, required the program to begin by November 15, 1997, in order for all vehicles to be tested by November 1999 and for the state to receive full credit for the VOC reductions from the program. New Jersey officials advised EPA that they would not select a contractor to operate the program until April 1998. In December 1997, EPA notified New Jersey that its 15-percent reduction plan was disapproved because the state failed to meet the required November 1997 start date for its enhanced I&M program. According to a New Jersey official, it is unclear how the state will make up the shortfall in VOC reductions caused by its failure to implement an enhanced I&M program. The District of Columbia is required to reduce VOC emissions by 133 tons per day to attain the ozone standard by November 1999. Even though the District is relying heavily upon its enhanced I&M program to provide 48 percent of the overall VOC reductions, it does not plan to start inspecting vehicles under an enhanced I&M program until April 1999. While control measures are available to the District for reducing VOC emissions from other mobile and stationary sources, many of these measures have already been implemented, and, according to EPA officials, imposing further controls on these sources will not produce the reductions that the District is expecting to achieve with an enhanced I&M program. 
Many of the states that are required to implement enhanced I&M programs must achieve the required VOC reductions by November 1999 but still do not have final approval for their VOC reduction plans. Table 1 shows the approval status of the states’ VOC reduction plans as of April 1998. Even though most of the states are planning to have their enhanced I&M programs account for a significant amount of the required reductions in VOC emissions, EPA and the states will not know how much of the needed VOC reductions will be met by enhanced I&M programs until each program is fully approved and operational. Thus, further delays by the states in implementing enhanced I&M programs jeopardize their efforts to achieve the required VOC reductions. While the states can obtain reductions from other mobile and stationary sources in conjunction with the mandated enhanced I&M programs to attain the ozone standard, these sources, especially stationary sources, have already made significant reductions in their VOC emissions, and, according to EPA, further reductions from them will be costly and take some time to achieve. In 1992, EPA estimated that the cost to reduce VOC emissions with an enhanced I&M program was $879 per ton, compared with $5,000 per ton from stationary sources. According to EPA officials, with the less stringent requirements of many of the current programs, the cost per ton of VOC reductions from the enhanced I&M programs is probably higher, but not as high as the cost of further reductions from other mobile or stationary sources. However, EPA is not aware of any data that show current costs. Conclusions While enhanced I&M programs are an integral part of the effort to significantly reduce emissions from motor vehicles, states’ efforts to implement their programs have been slow and troubled by numerous delays. 
Recognizing that states have encountered a variety of challenges in implementing enhanced I&M programs, we believe that EPA could expand its efforts to help the states experiencing the most significant problems by sharing the best practices, such as public relations campaigns, adopted by the states with approved or operating programs. Furthermore, because of delays in implementing enhanced I&M programs, states have not realized the reductions in VOC emissions that they were statutorily required to achieve by 1996, nor are they likely to achieve additional reductions that EPA is now requiring by November 1999 to enable them to attain the national ozone standard. Therefore, states will have to look to other mobile sources as well as stationary sources to meet their goals for reducing VOC emissions. However, obtaining the required reductions from other sources will be difficult because many of them, especially stationary sources, have already made major reductions in their VOC emissions, and any further reductions may be costly and take some time to achieve. Recommendation In view of the pivotal role that enhanced I&M programs play in reducing VOC emissions and the delays experienced to date in implementing these programs, as well as the possibility of future delays, we recommend that the Administrator of EPA compile information on the more successful practices, such as public relations campaigns, used by the states that have implemented their enhanced I&M programs and share the information with those states that are in the early stages of developing and implementing their programs. Agency Comments We provided copies of a draft of this report to EPA for review and comment. In commenting for the agency, the Director of the Office of Mobile Sources agreed with the information presented and suggested a few editorial changes to clarify points but did not comment on the recommendation. We included EPA’s comments as appropriate. 
Scope and Methodology We gathered data on the enhanced I&M programs in the 23 states required to implement the programs under the Clean Air Act Amendments of 1990. Data were obtained through the use of a survey mailed to the environmental offices in each of the 23 states. The survey was pretested by officials from the states of Georgia, Maryland, and Washington, and subsequently mailed in late January 1998. Completed surveys were returned by all 23 states. A copy of the survey is in appendix I. In addition to our analyses of the data gathered from the survey, we asked EPA to update the data for some questions. We also reviewed notices in the Federal Register that provided information on the status of the states’ enhanced I&M programs as well as other pertinent documentation. Additionally, we visited EPA’s regional offices in Boston, Massachusetts; Philadelphia, Pennsylvania; and Atlanta, Georgia to obtain background information on issues concerning the enhanced I&M programs. We also visited EPA’s Office of Mobile Sources in Ann Arbor, Michigan, and the Office of Air Quality Planning and Standards in Durham, North Carolina, and interviewed officials about the enhanced I&M program as well as issues concerning attaining the ozone standard. We met with officials in Massachusetts and Georgia to discuss the implementation of their enhanced I&M programs. We measured progress in terms of the states with operating programs that were testing vehicles as of April 1998. We did not use EPA’s approval status to measure progress because a state’s approval status is subject to change. We performed our work from July 1997 through May 1998 in accordance with generally accepted government auditing standards. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. 
At that time, we will send copies to the appropriate congressional committees; the Administrator of the Environmental Protection Agency; and the Director of the Office of Management and Budget. We will also make copies available to others on request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. Survey Changes in Requirements for the Enhanced Inspection and Maintenance Program This appendix describes the statutory and regulatory changes leading to the Environmental Protection Agency’s (EPA) current requirements for enhanced inspection and maintenance (I&M) programs. The Clean Air Act Amendments of 1990 Title I of the Clean Air Act Amendments of 1990 (P.L. 101-549, Nov. 15, 1990) required the 23 states with the most serious ozone and carbon monoxide problems to implement enhanced I&M programs. Specifically, enhanced I&M programs were required in the states with serious, severe, or extreme ozone nonattainment areas with 1980 urban populations of 200,000 or more; in serious and certain moderate carbon monoxide nonattainment areas with urban populations of 200,000 or more; and in areas with a population of 100,000 or more in the Ozone Transport Region, regardless of their attainment status. The enhanced I&M programs were required to have centralized inspection centers and perform annual inspections unless the state demonstrated to EPA that a decentralized or biennial program would be equally effective. Title I also required EPA to issue regulations for the enhanced I&M program by November 15, 1991, and the states to implement their enhanced I&M programs by November 15, 1992. Title I divided all of the ozone nonattainment areas into five categories—marginal, moderate, serious, severe, and extreme—and set time frames for each category to reach attainment. The attainment dates ranged from 3 years (marginal) to 20 years (extreme) after the act was enacted. 
Title I also required the states to demonstrate how they would reduce volatile organic compound (VOC) emissions—one of the major pollutants that contribute to the formation of ozone. The states with moderate to extreme ozone nonattainment areas were required to prepare implementation plans by November 1993 that showed how they would reduce VOC emissions by 15 percent within 6 years after enactment. The states with serious to extreme nonattainment areas also had to prepare plans showing how they would achieve additional VOC reductions. The plans to reduce VOC emissions after 1996 were due by November 1994 and were to show how the states planned to achieve 3-percent VOC reductions annually until the nonattainment areas reached attainment. Enhanced Inspection and Maintenance Program Regulation EPA issued its regulation for the enhanced I&M program on November 5, 1992. The regulation required the states with areas switching from test-and-repair to test-only requirements to implement programs that would begin testing 30 percent of the vehicles subject to enhanced I&M in the nonattainment areas on January 1, 1995, and required all areas to begin testing all vehicles by January 1, 1996. The regulation also required the states to meet or exceed a performance standard that was based on a model program for an annual, centralized enhanced I&M program that included IM-240 test equipment, or an equivalent test protocol approved by EPA, and covered all 1968 and later model cars and light-duty trucks. The states that elected to implement decentralized programs or a program consisting of centralized and decentralized inspection facilities were to have their emission reduction credits discounted by approximately 50 percent for the decentralized portion of their programs, unless they could demonstrate that their programs were as effective as a centralized program. 
The regulation also incorporated the Clean Air Act Amendments of 1990 requirement that vehicle owners spend a minimum of $450 on emission-related repairs before a vehicle could qualify for a waiver of further repairs. According to EPA, a typical urban area adopting the model program established by the regulation would, by 2000, reduce air pollutant levels more than it would without an enhanced I&M program: the additional reduction would be 31 percent for carbon monoxide, 28 percent for VOCs, and 9 percent for nitrogen oxides. Enhanced Inspection and Maintenance Flexibility Regulation In response to strong public opposition to its initial enhanced I&M regulation, EPA issued a regulation known as the Inspection/Maintenance Flexibility Amendments on September 18, 1995. This regulation created a less stringent enhanced I&M program by allowing certain states more flexibility in implementing their programs. Specifically, the revised regulation allowed states that could meet the VOC reduction and attainment requirements of the Clean Air Act Amendments of 1990 without a program as effective as the one adopted in EPA’s 1992 regulation to meet a less stringent performance standard. The new standard, referred to as the low enhanced standard, did not include the IM-240 test as part of its model program. The regulation also modified other requirements of the 1992 regulation, such as postponing the $450 minimum expenditure requirement until January 1998. National Highway System Designation Act of 1995 The National Highway System Designation Act of 1995 (P.L. 104-59, Nov. 28, 1995) also responded to public opposition to the 1992 enhanced I&M regulations. Specifically, the act prohibited EPA from requiring a centralized, IM-240 enhanced I&M program and stopped EPA’s use of the 50-percent discount rate for decentralized or hybrid programs. 
Additionally, the act allowed states to submit, within 120 days after enactment, revisions to their enhanced I&M programs by proposing interim enhanced I&M programs. The act required EPA to approve enhanced I&M programs on an interim basis if the proposed credits for each element of the program reflected good-faith estimates and the revised programs complied with the Clean Air Act Amendments of 1990. The act further provided an 18-month period for the states to demonstrate that the credits they had proposed were appropriate, with no opportunity to extend the 18-month period. Enhanced Inspection and Maintenance Ozone Transport Region Flexibility Amendments Regulation On July 25, 1996, EPA issued the Inspection and Maintenance Ozone Transport Region Flexibility Amendments regulation. The regulation created a special low-enhanced standard for areas within the Ozone Transport Region that would be exempt from I&M requirements if they were not located in the region. These areas included attainment areas, marginal ozone nonattainment areas, and certain moderate nonattainment areas with populations under 200,000 within the 12-state Ozone Transport Region. Emission reduction goals in these areas were lower than those required for low enhanced I&M and basic I&M programs. The regulation provided flexibility to certain Ozone Transport Region states to implement a broader range of I&M programs than allowed under earlier regulations. Elements of the program include performing annual tests of 1968 and newer vehicles, checking on-board computer equipment for 1996 and newer vehicles, conducting remote sensing tests of 1968 through 1995 model year vehicles, and visually inspecting various control components on 1968 and newer vehicles. States’ Progress in Performing Mandatory Enhanced Inspection and Maintenance Testing, as of April 1998 [Appendix III table, showing for each state the number of vehicles (in millions) subject to enhanced I&M testing.] These states had begun testing vehicles under an enhanced I&M program. 
While some of these states are testing vehicles under an I&M program, their testing does not meet all of the requirements to qualify as testing under an enhanced I&M program. Major Contributors to This Report Resources, Community, and Economic Development Division, Washington, D.C.: William F. McGee, Assistant Director; Harry C. Everett, Evaluator-in-Charge; Joseph L. Turlington, Senior Evaluator; Lynn M. Musser, Technical Advisor; Kellie O. Schachle, Evaluator.
Pursuant to a congressional request, GAO reviewed the status of states' motor vehicle inspection programs, focusing on the: (1) progress made by the 23 states that are required to implement enhanced inspection and maintenance (I&M) programs, including the difficulties that the states have encountered; and (2) impact that delays in implementing enhanced I&M programs may have on the states' ability to comply with the national air quality standard for ozone. GAO noted that: (1) two of the 23 states had begun testing vehicles by the January 1995 deadline that the Environmental Protection Agency (EPA) set for implementing enhanced I&M programs, and 12 had begun testing vehicles as of April 1998; (2) a number of factors have contributed to delays in implementing programs; (3) opposition to EPA's enhanced I&M regulation--including the reluctance of some state legislatures to provide the legislative authority and funding needed to implement these programs--caused most of the 23 states to delay implementation; (4) in addition, the states had difficulty in obtaining new testing equipment and software support from vendors; (5) the delays in implementing enhanced I&M programs have jeopardized the states' ability to meet the deadlines for attaining the national ozone standard; (6) EPA has allowed the states to claim credit for future reductions in emissions of volatile organic compounds (VOC) from their enhanced I&M programs, provided they demonstrate that they will achieve the required reductions as soon as practical after November 1996; (7) if states cannot demonstrate that reductions in VOC can be obtained from the mandatory enhanced I&M programs, they may have to look to other mobile sources as well as stationary sources to meet their goals for reducing these emissions; and (8) however, achieving further reductions from other sources will be costly and take longer than achieving the reductions from enhanced I&M programs.
Background Since the September 11, 2001, terrorist attacks on the United States, DOD has launched two major overseas military operations related to the Global War on Terrorism: Operation Enduring Freedom, which includes ongoing military operations in Afghanistan and certain other countries, and Operation Iraqi Freedom, which includes ongoing military operations in Iraq. In both cases, operations quickly evolved from major combat operations into ongoing counterinsurgency and stability operations, which have continued to require large numbers of forces, ranging from about 138,000 personnel to about 160,000 personnel from 2004 to the present. These operations have required large numbers of forces with support skills, such as military police and civil affairs. While some of these skills have been in high demand across the Army, some skills, such as civil affairs, reside heavily in the Army’s reserve components and sometimes in small numbers of critical personnel. Reserve forces may be called to active duty under a number of authorities. As shown in table 1, two authorities enable the President to involuntarily mobilize forces, but with size and time limitations. Full mobilization, which would enable the mobilization of forces for as long as they are needed, requires a declaration by Congress. On September 14, 2001, President Bush declared that a national emergency existed as a result of the attacks on the World Trade Center in New York and the Pentagon in Washington, D.C., and he invoked the partial mobilization authority. As table 1 shows, this authority restricts the duration of reservists’ active duty to 24 consecutive months. OSD implements the activation of reservists for Iraq and Afghanistan under this partial mobilization authority. 
The Assistant Secretary of Defense for Reserve Affairs, who reports to the Under Secretary of Defense for Personnel and Readiness, is responsible for providing policy, programs, and guidance for the mobilization and demobilization of the reserve components. On September 20, 2001, OSD issued mobilization guidance that among other things directed the services as a matter of policy to specify in initial orders to reserve members that the period of active duty service would not exceed 12 months. However, the guidance allowed the service secretaries to extend orders for an additional 12 months or to remobilize reserve component members as long as an individual member’s cumulative service did not exceed 24 months. The services implement the authority and guidance according to their policies and practices. To meet the continuing demand for ground forces, in 2004 the Army extended the time that reservists must be deployed for missions related to Operation Iraqi Freedom or Operation Enduring Freedom. DOD’s and the Army’s current guidance states the goal that soldiers should serve 12 months with their “boots-on-the-ground” in the theater of operations, not including the time spent in mobilization and demobilization activities, which could add several more months to the time a reserve member spends on active duty. Further, senior DOD officials state that under DOD policy, a reservist may not be involuntarily deployed to either Iraq or Afghanistan more than once. Since September 11, 2001, there have been several rotations of troops to support Operation Enduring Freedom and Operation Iraqi Freedom. Currently, DOD refers to troop rotations based on troop deployment dates, although deployments overlap calendar years. For example, the rotation of troops that deployed or are scheduled to serve from calendar years 2004 through 2006 is known as the 04-06 rotation. The 05-07 rotation is composed of troops expected to deploy and serve from 2005 through 2007. 
DOD recently identified troops to deploy to either theater from 2006 through 2008 in the 06-08 rotation and has started planning for the 07-09 rotation to identify forces for deployments from calendar years 2007 through 2009.

Identifying Forces for Ongoing Operations

In response to the new security environment, in May 2005 the Secretary of Defense approved a new integrated force assignment, apportionment, and allocation process, known as Global Force Management. The new process is designed to identify capabilities or forces to conduct operational missions. The Secretary tasked the Joint Forces Command with responsibility for developing global, joint sourcing solutions for conventional forces in support of combatant commander requirements. A Global Force Management Board, composed of general officer/flag officer-level representatives from the combatant commands, the services, the Joint Staff, and OSD, guides the process by reviewing emerging force management issues and making risk management recommendations to the Secretary of Defense. Under the Global Force Management process, combatant commanders determine the capabilities they will need to support ongoing operations, including identifying the numbers of personnel and specific skills required to generate the capabilities. In generating their operational plans, the combatant commanders consider whether private contractors or civilians rather than military forces could provide any of the desired capabilities. For missions that require military forces, the combatant commanders request the forces needed to provide the military capabilities from the Chairman, Joint Chiefs of Staff, who reviews and validates the requirements. When the requirements are validated, the Chairman sends the requirements for conventional forces to the Commander, Joint Forces Command, and the requirements for special operations forces, such as civil affairs and psychological operations, to the Commander, Special Operations Command.
The commanders, Joint Forces Command and Special Operations Command, are responsible for identifying the forces that can be deployed to meet the requirement, considering global risks. The Army Forces Command, which reports to the Joint Forces Command, is charged with identifying the Army units and personnel that can be deployed to meet the requirements of the combatant commanders. The Army Special Operations Command, which reports to the Special Operations Command, is charged with identifying Army units and personnel to be deployed to support combatant commanders’ requirements. The Secretary of Defense reviews the commanders’ force sourcing recommendations and approves or disapproves them.

Army Combat Support and Combat Service Support Skills Are in Increasingly Short Supply, and Data on Skilled Individuals Available for Future Deployments Are Not Integrated into the Sourcing Process

Ongoing operations in Iraq and Afghanistan have created continuing high demand for certain combat support and combat service support skills, including military police, engineering, and civil affairs, and officials charged with sourcing future rotations have a limited view of what personnel remain available. While dynamic operational requirements complicate force-planning efforts, the department will be increasingly challenged to identify forces for future rotations from a diminishing supply of readily available personnel under current deployment policies. The supply of personnel already trained in high-demand skills and eligible to deploy has decreased as operations have continued because many personnel with these skills are reservists whose deployments and duration of involuntary active duty service under the partial mobilization authority are limited by DOD and Army policy. A primary strategy used to meet requirements has been to identify personnel from other Army skills or from other services who can be reassigned or retrained with high-demand skills.
However, DOD officials charged with identifying forces for future rotations have not had a source of readily available, comprehensive personnel data on deployment histories and skills across the services. Lacking such information, DOD officials developed a labor-intensive process involving a series of conferences with service representatives, the Joint Staff, and the Joint Forces Command where officials identify actions the services can take to meet the combatant commander’s requirements. DOD is taking steps to consolidate personnel, deployment, and skill data to support force management decisions through a new defense readiness reporting system. Until DOD systematically integrates such data into its process for identifying forces, it will continue to use an inefficient process and make important decisions about how to meet the combatant commander’s requirements based on limited information. Further, without complete, reliable, and accessible data that provide greater visibility over its available forces, DOD will lack analytical bases for requesting changes in or exceptions to current deployment policies when needed.

As the Supply of Available, Trained Personnel for Some High-Demand Combat Support and Combat Service Support Skills Has Decreased, DOD Has Relied Increasingly on Reassigning and Retraining Personnel to Meet Requirements

As operations have evolved from combat to counterinsurgency operations, requirements for forces with some high-demand skills—especially combat support and combat service support skills—have initially exceeded the number of Army personnel trained and available to deploy. As a result, DOD has relied increasingly on reassigning and retraining personnel to meet combatant commander requirements. The skills for which requirements have initially exceeded the number of trained personnel include transportation, engineering, military police, quartermaster, military intelligence, civil affairs, signal corps, medical, and psychological operations.
Many of these high-demand skills reside primarily in the Army’s reserve component. Reservists serving in Afghanistan and Iraq have been activated under a partial mobilization authority that enables the secretary of a military department, in a time of national emergency declared by the President or when otherwise authorized by law, to involuntarily mobilize reservists for up to 24 consecutive months. DOD policy implementing the mobilization authority states that any soldier who has served 24 cumulative months during current operations is ineligible for any further activation unless the reservist volunteers for additional duty. Further, DOD’s policy is that no reservist should be involuntarily deployed to either Iraq or Afghanistan more than once, according to senior DOD officials. Consequently, as operations continue and the number of reservists who have already deployed increases, it is likely to become increasingly difficult for DOD to identify reserve personnel skilled in high-demand areas who are eligible to deploy. One of the primary strategies DOD has used to meet requirements for some high-demand skills has been to reassign and retrain Army or other service personnel. The percentage of requirements for some high-demand skills that have been filled by reassigned or retrained Army personnel has increased as operations have continued. In addition, the combatant commander’s requirements for Army skills increasingly have been met by retraining personnel from the other services under Army doctrine. The strategy of reassigning and retraining available personnel from other services to fill combat support and combat service support requirements supports the department’s goal of deploying all reservists at least once before any are involuntarily activated for a second time. This will likely continue to be a primary strategy for providing high-demand forces as operations continue and the pool of skilled personnel who have not deployed continues to diminish.
However, DOD officials charged with identifying the personnel who could be reassigned or retrained to meet requirements were challenged because they did not have information that linked data on personnel who remained eligible to deploy with their skills across the services.

DOD’s Process for Identifying Forces Is Labor Intensive, and Officials Charged with Identifying Forces Have Not Integrated Comprehensive Data into DOD’s Sourcing Process

Officials charged with identifying forces for future rotations did not integrate comprehensive data that would allow them to efficiently identify what skilled personnel are available to be deployed because such data were not readily available when the department began a rotational force deployment schedule. Until the need to sustain large numbers of forces for operations in Iraq and Afghanistan over a long period emerged, DOD officials did not anticipate the need for detailed information on individuals to support a rotational force schedule on a long-term basis. While officials ultimately identified forces to meet the combatant commander’s operational requirements, our review of the force identification process showed that the data used were not comprehensive and did not give officials charged with identifying forces a complete picture of what forces remained available across the services to meet the requirements. DOD officials involved with the process of identifying forces stated that supporting the rotational force schedule has not permitted them the time or resources to consolidate the services’ personnel data. In the absence of such data in the early stages of the ongoing operations, DOD officials developed a labor-intensive process that involves conferences at the service and joint, interservice levels, where officials discuss various strategies for assigning forces because they lack data that would provide visibility over available forces.
For example, while the Army Reserve and National Guard had data that identified available units, the data did not provide complete information on how many individuals remained deployable or had the required skills. Through a series of conferences, officials discussed what personnel remained available for future deployments based on data they gathered from various sources. While DOD is taking steps to link information about personnel and deployment history in its new defense readiness reporting system that could be helpful in making decisions about forces for future rotations, these data have not yet been integrated with DOD’s sourcing process. In 2004, when previous deployments had made it more difficult to identify skilled personnel available for deployment, the Joint Staff and the services participated in conferences to identify forces for the 04-06 rotation, and the Army recognized the need to identify forces as early as possible so that personnel could be retrained in high-demand skills. The process, managed by the Joint Forces Command, has evolved over time as operations have continued and now involves months of conferences held at the service level and across the department where representatives of the services, the Joint Forces Command, the combatant commander, and others discuss strategies for meeting requirements. To meet the requirements for which the Army could not initially identify available and trained forces, the Joint Forces Command formed working groups composed of representatives from the services and Joint Forces Command, among others, to identify personnel from any of the other services who could be reassigned and retrained according to Army doctrine. The work of the joint functional working groups culminated in another conference, called the Final Progress Review, hosted by the Joint Staff at the Pentagon.
During the executive sessions of the Final Progress Review, senior military leaders made decisions as to how the services, including the Army, would fill the remaining requirements. The process has enabled the department to fill requirements, but efficiency was lost because these officials did not have data that linked personnel skills and deployment availability so that trained forces remaining available under current policies could be readily identified. As a result, conference participants had to defer decisions until they could obtain more complete data. Moreover, the process does not provide assurance that the forces identified are the most appropriate match considering both current requirements and future readiness. Nor does it provide an ability to project whether DOD will be able to meet future requirements or will need to consider other alternatives. DOD has begun compiling data through its new readiness reporting system that links information about personnel by deployment history and skill set to provide better visibility of available forces, and such data were available beginning in August 2005; however, this information has not been integrated into the existing sourcing process. The Office of the Under Secretary of Defense (Personnel and Readiness) has taken steps to develop a new defense readiness reporting system, the Defense Readiness Reporting System, that will link data on personnel availability and skills, according to a senior agency official. The system, which consolidates data from multiple sources, such as the services and the department’s manpower data center, is in the early stages of implementation and validation. When fully implemented and validated, the Defense Readiness Reporting System could provide the integrated data that sourcing officials need.
However, the information has not yet been integrated into the sourcing process to identify the most appropriate forces from all the services to meet current requirements, considering their other missions. In its written comments on a draft of this report, DOD said that although integrated personnel data were not available during the entire 06-08 sourcing process, this system could now provide data and analytical support for identifying forces for future rotations. DOD said that Joint Forces Command and Special Operations Command officials responsible for identifying forces should use the system to assist in identifying available personnel in the future. Until DOD systematically integrates such data into its process for identifying forces, it will continue to use an inefficient process and make important decisions about how to meet the combatant commander’s requirements based on limited information. Further, without complete, reliable, and accessible data that provide greater visibility over its available forces, DOD will lack analytical bases for requesting changes in or exceptions to current deployment policies when needed.

DOD Has Not Conducted a Comprehensive, Data-Driven Analysis of Options to Enhance the Availability of Personnel with High-Demand Skills for Future Rotations

Although DOD found ways to meet the combatant commander’s requirements for high-demand skills through the 06-08 rotation, it has not undertaken a comprehensive analysis of options to support future rotations in Iraq and Afghanistan should they continue for a number of years. DOD has not undertaken a comprehensive analysis because its process for identifying forces was created to meet the specific combatant commander’s requirements for the next rotation cycle.
Our previous work has shown that in the face of a changing environment, such as that of evolving military operations, valid and reliable data on the number of employees required are critical to prevent shortfalls that threaten the ability of an organization to efficiently and effectively perform its mission. However, without a comprehensive assessment of the most efficient and effective way to prepare for future rotations, including comprehensive analyses of various options, DOD will not be able to demonstrate a convincing business case for maintaining or changing its strategies, such as retraining personnel and seeking volunteers, for meeting a combatant commander’s requirements.

The Joint Staff’s Limited Analyses of Options for the 07-09 Rotation Quantified Shortfalls by Units in Some High-Demand Skills

In summer 2005, the Secretary of Defense asked the Director, Joint Staff, for a briefing on future force structure challenges for the next 2 to 3 years, although the Secretary did not specify how the review was to be conducted. In response to the Secretary’s request, in fall 2005, the Joint Staff conducted a study, known as Elaborate Crossbow V, with the objectives of predicting shortfalls of skilled personnel for the 07-09 rotation, recommending options to make personnel available for rotations, and identifying risks that demonstrated the difficulties officials face in identifying forces for future rotations, among other objectives. However, the study was limited to units within selected high-demand combat support and combat service support skills for operations in Iraq and Afghanistan. In the 2005 assessment, Joint Staff and DOD officials assumed that the combatant commander’s requirements for support skills for the 07-09 rotation would be the same as the requirements for the 06-08 rotation, and they compared these requirements to estimates of available units.
Joint Staff officials were charged with developing models that would assess the number of units that could be made available by using several options, including requesting a new partial mobilization authority and allowing redeployment of reserve personnel with residual time under current mobilization authority. Joint Staff officials requested detailed information from the Joint Forces Command and Special Operations Command on (1) the total inventory of units in the force structure, (2) the units’ arrival and departure dates from theater, (3) the number of days in theater for the last rotation for individuals in the units, (4) the amount of time individuals spent at home stations, and (5) the remaining time available under the partial mobilization authority for reservists. The Joint Staff officials planned to use the data in the models to determine if changing the underlying assumptions associated with an option would make more units available. When detailed data were available, Joint Staff officials were able to use their models to test how changing policies would affect the availability of units; however, detailed data were only available for civil affairs units. The fact that an official from the Special Operations Command had accurate and specific information on the civil affairs specialists’ dates of deployments and time remaining under the mobilization authority enabled the Joint Staff officials to test how changing policies would change the availability of units to meet the estimated requirement. For example, the analysis showed that if DOD allowed the redeployment of reserve personnel with remaining time under partial mobilization authority, more Army reserve civil affairs companies would become available. However, according to a Joint Staff official who assisted in developing the models, the Joint Staff could not conduct a thorough analysis of other units with skills in high demand because it did not have key data. 
While the Joint Staff’s limited review is a first step, it does not represent systematic analyses of options for continuing to support operations in Iraq and Afghanistan beyond the 06-08 rotation.

Human Capital Best Practices Rely on Data-Driven Analyses to Guide Decision Making

Our prior work has shown that valid and reliable data about the number of employees an agency requires are critical if the agency is to spotlight areas for attention before crises develop, such as human capital shortfalls that threaten an agency’s ability to economically, efficiently, and effectively perform its missions. We have designated human capital management as a governmentwide high-risk area in which acquiring and developing a staff whose size and skills meet agency needs is a particular challenge. To meet this challenge, federal managers need to direct considerable time, energy, and targeted investments toward managing human capital strategically, focusing on developing long-term strategies for acquiring, developing, and retaining a workforce that is clearly linked to achieving the agency’s mission and goals. The processes that an agency uses to manage its workforce can vary, but our prior work has shown that data-driven decision making is one of the critical factors in successful strategic workforce management. High-performing organizations routinely use current, valid, and reliable data to inform decisions about current and future workforce needs, including data on the appropriate number of employees, key competencies, and skills mix needed for mission accomplishment and appropriate deployment of staff across the organizations. In addition, high-performing organizations also stay alert to emerging mission demands and remain open to reevaluating their human capital practices.
The change in the Army’s missions from combat to counterinsurgency operations in Iraq and Afghanistan represented a new environment, which provided DOD with the opportunity to reevaluate the mix of personnel and skills and its deployment policies to determine whether they are consistent with strategic objectives.

Several Options Exist to Increase the Army’s and Other Services’ Supply of Combat Support and Combat Service Support Skills

The United States is in its fifth year of fighting the Global War on Terrorism, and the operations associated with the war, particularly in Iraq and Afghanistan, may continue. DOD planners are beginning to identify forces for the 07-09 rotation. Based on our review of DOD’s deployment policies and our prior work, we identified several options that DOD could assess to increase the supply of high-demand skills to support future rotations. Each of the proposed options involves both advantages and disadvantages, and some options could be implemented in conjunction with others. Moreover, some options might be more appropriate for certain skill sets than others. However, without key data and analyses, such as the amount of time remaining under the partial mobilization authority for each reservist, decision makers will have difficulty weighing which option(s) would best achieve DOD’s overall goals of supplying trained and available forces to meet the combatant commander’s requirements while considering risks, future readiness, and recruiting and retention. Based on its challenges in providing personnel with high-demand skills in previous rotations, DOD will be faced with difficult choices on how to make personnel in high-demand skills available for future rotations. Options that could increase the supply of combat support and combat service support skills for future rotations include the following:

Retraining personnel within the Army and other services in high-demand skills.
DOD could consider requiring the Army to reassign and retrain more of its personnel as well as relying on the Air Force, the Navy, and the Marine Corps to reassign and retrain available personnel for high-demand Army skills. As discussed previously, the Joint Forces Command has identified significant numbers of Army and other service personnel that the Army could retrain for some high-demand skills. As of February 2006, the Joint Staff estimated that over 200,000 reservists from all the services’ reserve components could be potentially available for deployment under current policies and might be retrained for high-demand skills, and the services are attempting to verify the actual availability of reservists. However, it is unclear how many reservists can be reassigned and retrained to meet Army requirements for skills and rank. OSD officials said the department would consider waiving deployment policies for targeted high-demand skill personnel only when the services can provide a strong business case for the waiver. Instead, the department intends to rely on retraining personnel and seeking volunteers to meet future requirements. Joint Staff officials are currently seeking from the services more detailed data on potentially available personnel, such as their skills and whether they can be assigned and trained for deployment. A key advantage of this option is that Air Force, Navy, and Marine Corps personnel who have not deployed already have some military skills and experience, such as an understanding of the roles and responsibilities of their senior leaders and knowledge of military roles and missions, that could be useful in supporting ongoing operations. In some cases, experienced personnel from the other services may have specialized skills that are similar to the Army skills in high demand; therefore, they would need less training than newly recruited Army personnel.
A disadvantage to this option would be that the other service personnel would not be available to perform missions in their respective services. Further, members of the Air Force, Navy, and Marine Corps could potentially miss training and other opportunities to enhance their careers in their parent services. Moreover, recruiting and retention could be hindered because potential recruits or experienced personnel may not want to retrain for missions and skills other than those they originally planned to perform.

Adjusting force structure by increasing the number of Army positions in combat support and combat service support through further transfers of positions from low-demand skills to high-demand areas.

Another option focuses on shifting positions in low-demand skills to high-demand skills, either temporarily or permanently. The Army plans to transfer some low-demand positions to high-demand skills, such as military police. In addition, DOD plans to expand psychological operations and civil affairs units by 3,700 personnel as a result of Operation Iraqi Freedom and Operation Enduring Freedom, according to the 2006 Quadrennial Defense Review Report. However, according to a senior Army official, the Army is facing challenges in meeting its current planned time frames for reassigning positions because providing forces to meet the rotational requirements in Iraq and Afghanistan has created delays in planned transfers of skills, and modular force transformations may require permanent changes in the numbers and types of skills needed. The advantage of creating more units with high-demand skills is that continuing operational requirements could be met with more available, trained personnel. Further, if more units with the combat support and combat service support skills that are in high demand were in the active component, DOD would not face the restrictions that apply to reserve personnel.
A major disadvantage to using this option is that the Army could encounter further delays in providing personnel with high-demand skills because, according to some service officials, limitations in the availability of training facilities, courses, and instructors may reduce the numbers of personnel who can be retrained in the short term. Many of the Army’s skills in high demand reside primarily in the Army’s reserve component. Therefore, if DOD’s deployment policies remain unchanged, the Army will continue to face limitations on its use of reservists.

Changing the number of days that active duty and reserve Army personnel may be in theater for a deployment.

OSD could consider changing the duration of deployment for Army reservists or active duty personnel in theater, known as “boots-on-the-ground,” from the current 12 months. Current departmental guidance states that Army personnel can serve no more than 12 months within the U.S. Central Command’s theater of operations, not including the time spent in mobilization and demobilization activities. However, because mobilization and demobilization activities require about 3 months prior to deployment and 3 months after deployment, reservists deployed to Iraq or Afghanistan typically serve about 18 months on active duty. Under DOD’s policy, the Army may use reserve members for a total of 24 cumulative months. Therefore, by the time reservists are deactivated after 18 months of mobilization, they have only 6 months of deployment eligibility remaining under DOD’s policy—not enough to remobilize and redeploy for another yearlong overseas assignment. If the amount of “boots-on-the-ground” time were lengthened from the current 12 months to 18 months, the Army could more fully use reserve personnel under the partial mobilization authority.
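The eligibility arithmetic described above is simple enough to sketch. The figures are the report’s (a 24-cumulative-month cap under the partial mobilization authority and roughly 3 months each of mobilization and demobilization); the function and variable names below are illustrative only, not DOD terminology:

```python
# Sketch of the deployment-eligibility arithmetic described above.
# Figures are from the report; names are illustrative assumptions.

CUMULATIVE_CAP_MONTHS = 24  # DOD policy cap under partial mobilization
MOB_MONTHS = 3              # pre-deployment mobilization (approximate)
DEMOB_MONTHS = 3            # post-deployment demobilization (approximate)

def months_remaining(bog_months: int) -> int:
    """Eligibility left after one tour with the given boots-on-the-ground length."""
    used = MOB_MONTHS + bog_months + DEMOB_MONTHS
    return CUMULATIVE_CAP_MONTHS - used

# Current 12-month tour: 24 - (3 + 12 + 3) leaves 6 months --
# too little for another yearlong tour plus its mobilization overhead.
print(months_remaining(12))  # 6

# An 18-month tour would consume the full 24-month cap.
print(months_remaining(18))  # 0
```

As the report notes, this is why a reservist who completes one standard tour cannot be remobilized for a second yearlong deployment, while an 18-month boots-on-the-ground tour would use the mobilization authority fully.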
A key advantage of this option would be that a longer deployment period would permit forces to be in theater longer, providing more force stability and continuity. In addition, a slower rotational pace would provide force planners, such as the Army Forces Command, more time to identify available personnel and decide which personnel will best meet requirements for the next rotation. However, lengthening “boots-on-the-ground” time could have negative consequences for individuals. If reservists were away from their civilian careers and families for longer time frames, individual morale could erode, and DOD could face challenges in recruiting and retaining skilled personnel. Alternatively, the Army could shorten the “boots-on-the-ground” time and involuntarily activate reservists to deploy to Iraq and Afghanistan more than once. If deployments were shortened, Army reservists would not be separated from their civilian careers for long periods, and recruiting and retention challenges could lessen. However, a major disadvantage of shortening the Army’s deployment lengths to, for example, 6 months is that the Army would have to mobilize and demobilize more personnel in a given period. According to Army and Army Forces Command officials, if reservists’ deployments were shortened without change to the “one deployment only” policy, the Army would face critical personnel shortages in many skill areas. Any shortages of available reserve personnel would likely have to be filled with active duty personnel, increasing stress on the active force. Further, less time at home for active forces could disrupt training and lower readiness for future missions.

Allowing redeployment of reserve personnel with time remaining under DOD’s 24-cumulative-month deployment policy.
DOD’s policy is that personnel should not be deployed for more than 24 cumulative months under the partial mobilization authority or involuntarily deployed overseas a second time, irrespective of the number of months served. However, if OSD allowed the redeployment of reserve personnel, the services could more fully use reservists’ 24 months of involuntary active duty. The major advantage to this option is that the Army would have access to reservists trained in high-demand skills. Further, changing the redeployment policy could enable the Army to decrease its reliance on retraining its personnel or other service personnel to meet the combatant commander’s requirements. If the Army collected detailed data about the number of days a reservist served in theater and the remaining time available under the partial mobilization authority, it could compile a comprehensive list of reservists who could possibly deploy again and identify the time frames in which they would be available. However, as discussed in previous sections of this report, DOD and the Army do not have detailed data about personnel across the services readily available. A major disadvantage of this option would be that DOD would involuntarily activate large numbers of reserve personnel for multiple deployments. Multiple deployments could disrupt a reservist’s civilian career and decrease his or her willingness to remain in the military. Another disadvantage of redeploying reservists would be that some reservists could be deployed more than once in 6 years, which differs from the Army’s plan under its force rotation model. The Army’s force rotation planning model is designed to provide reservists more predictability in deployment eligibility.

Increasing the Army’s active duty end strength.

Congress annually authorizes the number of personnel that each service may have at the end of a given fiscal year. This number is known as authorized end strength.
In the National Defense Authorization Act for Fiscal Year 2006, Congress increased the fiscal year 2006 end strength of the Army by 10,000—from 502,400 to 512,400. Congress also authorized additional authority for increases of up to 20,000 active Army personnel for fiscal years 2007 through 2009 to support ongoing missions and to achieve transformation. However, current Army plans project a decrease in personnel to 482,400 active duty forces by fiscal year 2011. The primary advantage of increasing the Army’s end strength and funding associated positions would be that the Army could provide more active duty personnel to meet operational requirements for Iraq and Afghanistan, to accommodate the requirements for the modular force, and to help meet the Army’s rotational force planning goal of having active personnel deployed for no more than 1 out of every 3 years. Budgetary concerns could be a major drawback to this option. Decision makers would have to weigh the increased cost of permanently increasing the Army’s end strength. According to Army personnel and budget officials, in fiscal year 2005, the estimated cost to compensate, retain, and train each Army servicemember was over $100,000 annually. Further, recruiting personnel to meet the higher end strength levels may be difficult because of the uncertainty of how long operations in Iraq and Afghanistan may continue and whether new recruits could be targeted to high-demand skills. Additionally, the Army would require time to organize, train, and equip additional units to be ready to deploy for overseas operations. Using more personnel from the Individual Ready Reserve. 
The Army’s Individual Ready Reserve, which is composed of about 112,700 members, includes individuals who were previously trained during periods of active service but who have not completed their service obligations, individuals who have completed their service obligations and voluntarily retain their reserve status, and personnel who have not completed basic training. Most of these members are not assigned to an organized unit, do not attend weekend or annual training, and do not receive pay unless they are called to active duty. Members assigned to the Individual Ready Reserve are subject to recall, if needed, and serve a maximum of 24 months. As of September 2005, of the total Army Individual Ready Reserve population of 112,700, about 5,200 personnel had been mobilized. An advantage of this option is that it could provide the Army with access to personnel who already have some military experience. These reservists could be retrained in their active duty skills or retrained in different skills. A significant drawback to this option would be the time needed to identify, locate, and contact members of the Individual Ready Reserve because, as we have reported previously, the services lack vital contact information. Further, when the Army recently recalled these reservists, it encountered exemptions and delays that could limit the services’ ability to use these personnel in significant numbers. Identifying forces for future rotations is likely to become more difficult for DOD without comprehensive analyses of options for meeting potential future requirements. Without complete and accurate data that link deployment information and skill areas for military personnel to assist in developing and assessing the options, the department will continue to have limited information with which to make decisions about how to fill the combatant commander’s requirements. 
Further, without a systematic evaluation of options, the current difficulties in providing personnel with the needed skills could worsen and requirements could go unfilled. As the Joint Staff’s limited analyses of options showed, having complete and accurate data enables planners to clearly identify how alternative options would affect their ability to efficiently identify forces. Additionally, without linking data to options, the services may have difficulty deploying all reservists at least once before other reservists are required to deploy for a second time, which is a key goal of officials in OSD. If DOD had data-driven analyses of options to increase available skilled personnel, DOD leaders would have a better basis for considering policy changes, and congressional decision makers would have more complete information with which to carry out their oversight responsibilities with regard to the size and composition of the force, mobilization policies, and other issues. Conclusions Although DOD has accommodated the continuing high demands for combat support and combat service support skills, primarily through retraining and reassigning personnel, the pool of available, trained, and deployable reservists is diminishing rapidly and could leave the department with significant challenges in identifying personnel for future rotations. Until DOD’s planners and senior decision makers integrate into the sourcing process comprehensive, reliable data that link personnel by skills and deployment histories, they will have to continue to use an inefficient and time-consuming process to determine which personnel to deploy. Moreover, DOD will be limited in its ability to assess whether it can meet future requirements and to consider a range of alternatives for meeting requirements for skills that are in high demand. 
If DOD had better visibility over the personnel who are available to deploy and their skills, officials could reduce the amount of time they spend in identifying personnel for rotations, provide assurance that the personnel identified are matched appropriately, considering both current requirements and future readiness, and better manage the risks associated with moving personnel from other skills and missions to support future operations. In addition, without an integrated assessment that uses data to examine alternative courses of action, DOD planners and senior leaders will not be well positioned to make informed decisions on how to meet the requirements of future rotations, particularly if rotations continue at roughly the same level for the next few years. To meet requirements for future rotations, the department intends to continue its strategy of reassigning any eligible personnel the services can identify until all reservists from all services have been deployed at least once. However, there are additional options that DOD could consider that might increase the supply of personnel for high-demand skills for future rotations, although each option could have negative effects as well as positive ones. Data-driven analysis of options could help DOD senior leaders make difficult decisions to balance the advantages and disadvantages of each option and to apply the best-suited option to meet the varying requirements for the range of high-demand skills. Until DOD comprehensively assesses these options using detailed data linked to individual skills and deployment histories, DOD officials cannot weigh which options would be most advantageous to the combatant commander and whether potential negative effects on readiness for future operations would be minimized. 
Recommendations for Executive Action To facilitate DOD’s decision making to meet the demands associated with the Global War on Terrorism and to increase the availability of skilled personnel, we recommend that the Secretary of Defense take the following two actions: Integrate comprehensive data that identify active and reserve personnel according to deployment history and skill set, including personnel who are available to deploy, with DOD’s sourcing process before identifying combat support and combat service support personnel for the next rotation to Iraq and Afghanistan. Conduct comprehensive, data-driven analyses of options for meeting potential requirements for future missions to Iraq and Afghanistan. Such analyses should include an assessment of options, such as using more personnel with support skills from the Army and other services; transferring more positions to high-demand areas; changing deployment lengths; and increasing Army end strength, which would increase the availability of personnel in high-demand skills. Agency Comments and Our Evaluation The Deputy Under Secretary of Defense (Readiness) provided written comments on a draft of the classified version of this report. The department agreed with our recommendations and cited actions it is taking to implement them. The department’s comments are reprinted in appendix II. In addition, the department provided technical comments, which we incorporated as appropriate. In its comments, DOD expressed concerns that our report (1) does not fully reflect the complicated task of providing forces for dynamic operational requirements and (2) subtly suggests that DOD’s flexibility in meeting operational requirements is a sign of failed force management practices. It also stated that its use of the total force, not just the Army, enabled it to meet all combatant commanders’ requirements to date. 
In addition, the department stated that our recommendations should more explicitly recognize and support the use of the newly developed Defense Readiness Reporting System. It stated that the total force visibility our recommendations call for exists in that system and that the Joint Forces Command and the Special Operations Command should use the detailed, individual-level information in that system to support their sourcing processes. We agree that the process developed to identify forces is very complex. Our report described the process for identifying forces for Army combat support and combat service support requirements. Moreover, our report discussed how DOD has met the demands and how officials used multiple strategies and relied on the total force to meet requirements for high-demand skills. The report does not make a judgment about the appropriateness of the outcomes of the sourcing process. Rather, the report demonstrates that the lack of data complicated the force identification process, and that force planners did not have visibility over detailed information on personnel or how current sourcing decisions would affect the readiness of the force. However, we have modified our report to reflect that DOD’s effort to integrate personnel deployment and skill data and readiness information in the new Defense Readiness Reporting System represents a positive step toward providing the visibility over personnel and deployment histories that would be useful to force planners. Although this system has not yet been used to support the sourcing process, when it reaches full operational capability at the end of fiscal year 2007 and DOD has completed data validation, it could be a means to provide visibility over detailed information on personnel to improve the sourcing process, thereby fulfilling our recommendation. 
We have not modified our recommendation to require that DOD use the Defense Readiness Reporting System in its sourcing process because it is still in development. With respect to our second recommendation that DOD conduct comprehensive, data-driven analyses of options for meeting continuing operational requirements, DOD agreed that all options should be considered and said it is conducting a variety of data-driven analyses to develop clearer options aimed at better positioning forces to meet current and future operational requirements. We believe that the department’s approach will satisfy the intent of our recommendation if the department bases its assessments on data that provide decision makers complete information on the options and related risks. We are sending copies to other appropriate congressional committees and the Secretary of Defense. We will also make copies available to other interested parties upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4402 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. Appendix I: Scope and Methodology To assess the combat support and combat service support skills that are in high demand for operations in Iraq and Afghanistan, we collected U.S. Joint Forces Command and U.S. Special Operations Command data showing how U.S. Central Command’s requirements were met for two rotations—calendar years 05-07 and calendar years 06-08. Using the data, we compared the number of requirements from U.S. Central Command to the number of requirements that the Army could meet and determined whether and to what extent combat support and combat service support skills initially experienced shortages for the 05-07 and 06-08 rotations. 
To identify what strategies the Department of Defense (DOD) took to identify forces in cases where demand exceeded the initial supply, we examined the decisions made by officials at the U.S. Joint Forces Command and the U.S. Special Operations Command as documented in their data. We also compared the U.S. Central Command’s documents, which identified the specific capabilities and deployment time frames, to the U.S. Joint Forces Command and U.S. Special Operations Command data to identify specific instances where the Army reassigned and retrained its personnel or where personnel from the other services were reassigned and retrained to perform Army requirements. We also reviewed the Joint Chiefs of Staff’s analyses of the U.S. Central Command’s requirement and the actions taken by DOD to meet the requirements for the 04-06, 05-07, and 06-08 rotations. Since the U.S. Joint Forces Command did not have complete data on how the department identified forces for the 04-06 rotation, we attributed the 04-06 sourcing results to the Joint Staff. We met with an official in the Joint Staff Directorate for Operations to discuss our analysis comparing the combatant commander’s requirements for the 05-07 rotation to DOD’s 05-07 sourcing decisions to ensure our methodology was comparable to the Joint Staff official’s analysis. We also discussed our methodology of analyzing the U.S. Joint Forces Command’s data for the 05-07 and 06-08 rotations with officials in the command’s Joint Deployment Operations Division. To assess the reliability of the 05-07 and 06-08 rotation data, we reviewed existing information about the data and the systems that produced them, interviewed officials knowledgeable about the data, and performed limited electronic testing. When we found missing information or discrepancies in the key data elements, we discussed the reasons for the missing information and data discrepancies with officials in the Joint Deployment Operations Division, U.S. 
Joint Forces Command. We determined that the 05-07 and 06-08 rotation data were sufficiently reliable for our purposes. In addition, to assess the extent to which DOD has visibility over what forces remain available to meet future requirements, we collected and examined the Joint Staff, U.S. Joint Forces Command, and Department of the Army briefings that document the decisions reached to identify the combat support and combat service support forces identified for the 05-07 and 06-08 rotations and held discussions with officials responsible for identifying forces at DOD organizations. We also examined DOD documents that contained information on deployment policies and the partial mobilization authority to understand how they affect the availability of active military personnel and reservists for future deployments. We discussed the implications of DOD’s deployment policies and the status of identifying forces for rotations by obtaining testimonial evidence from officials responsible for managing these efforts at DOD organizations, including, but not limited to, the Office of the Under Secretary of Defense for Personnel and Readiness (Readiness, Programming and Assessment), the Joint Chiefs of Staff Directorate for Operations, the U.S. Joint Forces Command Joint Deployment Operations Division, the U.S. Special Operations Command Operations Support Group, the U.S. Army Special Operations Command Deputy Chief of Staff for Plans, the U.S. Army’s Office of the Deputy Chief of Staff for Operations and Plans, and the U.S. Army Forces Command Plans Division. Because it did not fall within the scope of our review, we did not assess how the forces were trained or will be trained and equipped or the effects on recruitment and retention as a result of continuing operational needs. We also observed the Department of the Army’s conference in April 2005 and the U.S. 
Joint Forces Command/Joint Chiefs of Staff conference in August 2005 to understand the process used by department officials to identify combat support and combat service support for the 06-08 rotation. As part of this effort, we observed working group meetings that were organized by combat support and combat service support skills to understand how department officials discussed and developed approaches to meet the combatant commander’s requirements. At these conferences, we held discussions with officials to fully understand the challenges they face with using the available data to identify personnel. To determine what percentage of combat support and combat service support skills reside in the Army’s active and reserve components, we collected skill set data from the Army’s Office of the Deputy Chief of Staff for Operations and calculated the percentage of positions assigned to several support skills for each of the Army’s components in fiscal years 2005 and 2011. In addition, we analyzed transcripts of public briefings and congressional testimony presented by DOD officials. To assess the reliability of the fiscal year 2005 and the projected fiscal year 2011 data on the composition of the Army’s active and reserve components by skills, we reviewed existing information about the data and the systems that produced them, interviewed officials knowledgeable about the data, and compared our analysis to the Army’s published analysis. We determined that the Army’s data were sufficiently reliable for the purposes of our objectives. To assess the extent to which DOD has conducted a comprehensive, data-driven analysis of its alternatives to continue meeting requirements for high-demand forces, we met with officials in the Office of the Under Secretary of Defense for Personnel and Readiness (Readiness, Programming and Assessment), the Joint Chiefs of Staff, the U.S. Army’s Office of the Deputy Chief of Staff for Operations and Plans, and the U.S. 
Joint Forces Command Joint Deployment Operations Division to determine whether the department had plans to conduct assessments. We held further discussions with officials in the Joint Chiefs of Staff Directorate for Force Assessment to gain an understanding of the departmentwide study led by the Joint Staff. Further, we examined the Joint Staff’s briefing documents to increase our understanding of the process used to conduct the study, the data and assumptions used during the study, and the results of the study. We discussed the status and implications of the study with officials who participated in the Joint Staff-led study, including the Under Secretary of Defense for Personnel and Readiness (Readiness, Programming and Assessment) and officials from the U.S. Joint Forces Command Joint Deployment Operations Division. To identify other options that DOD should consider to increase the availability of personnel with high-demand skills, we examined DOD documents containing information on deployment policies and the partial mobilization authority, held discussions with knowledgeable officials about mobilization authority and deployment rules, reviewed recently issued reports from think tanks related to providing forces for rotations, and reviewed our prior audit work related to end strength and initiatives to make more efficient use of military personnel. We identified criteria for examining force levels through our reports on strategic human capital management. Further, we reviewed our prior audit work related to recruiting and retention to enhance our understanding of the factors that affect the military services’ ability to attract and retain personnel. Our work was conducted in the Washington, D.C., metropolitan area; Norfolk, Virginia; Atlanta, Georgia; and Tampa, Florida. We performed our work from February 2005 through June 2006 in accordance with generally accepted government auditing standards. 
Appendix II: Comments from the Department of Defense The following are GAO’s comments on DOD’s letter. GAO Comments 1. An objective of the report was to identify high-demand skills, and as part of that assessment, we observed and reviewed DOD’s force identification process to meet operational requirements for Iraq and Afghanistan, including DOD’s current policies and plans. The report describes in detail the structures developed to identify forces and identifies and assesses major analytical tools used during the process. Our report also acknowledges that the department met the combatant commander’s requirements for the 04-06, 05-07, and 06-08 rotations. However, we believe that the force identification process could become more efficient if DOD officials charged with identifying forces relied on comprehensive data to inform decision making. 2. We agree with the department that dynamic operational conditions in Iraq and Afghanistan have made it more difficult for the department to anticipate the number of forces and the specific skills needed in the future, and we have added text on pages 1 and 8 to more fully reflect this challenge. DOD stated that as a result of the dynamic operational conditions, the Joint Forces Command—the DOD agent charged with filling combatant commanders’ force requirements—used a variety of strategies, such as reassigning and retraining personnel to new skill areas (both within the Army and across service lines) and capitalizing on joint solutions in like skill areas. According to DOD’s comments, in every case, these forces have deployed only after having been fully certified as prepared for their theater missions and have performed admirably. Our report extensively described the process for identifying forces for Army combat support and combat service support requirements and illustrated in detail how DOD officials used multiple strategies to meet requirements for high-demand skills. 
Assessing the appropriateness of sourcing outcomes and how the forces were trained were outside the scope of this review. 3. Our review focused on the Army because of the high-demand skills that were found predominantly in the Army, such as military police and civil affairs. We disagree that our report implies that DOD’s flexibility in meeting uncertain operational requirements is a sign of failed force management. We point out, however, that as rotations to Iraq and Afghanistan have continued to require large numbers of ground forces, data demonstrate that the number of available, trained Army personnel has declined. According to DOD officials, strategies to meet combatant commander requirements, such as reassigning and retraining personnel, present their own challenges, such as costs for new training. Further, while our draft report recognizes the overall Global Force Management process, it focuses on the part of that process that identifies deployable personnel and develops strategies to meet the combatant commander’s force requirements using available personnel. 4. We believe that the Defense Readiness Reporting System could be a mechanism to provide force planners the visibility they need when it is fully operational. We have updated our report to reflect the status of the system; however, we did not assess the data reliability of that system. 5. We do not make a recommendation as to what system DOD could use to supply force planners with the data they need for visibility over personnel skills and deployment histories. If the department decides to use the Defense Readiness Reporting System, it should be integrated into the force identification process. 6. See the Agency Comments and Our Evaluation section. 
Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the contact named above, Margaret Morgan, Assistant Director; Deborah Colantonio; Susan Ditto; Nicole Harms; Whitney Havens; Catherine Humphries; James Lawson; David Marroni; Kevin O’Neill; Masha Pastuhov-Pastein; Jason Porter; and Rebecca Shea made major contributions to this report.
Since the terrorist attacks of September 11, 2001, the war on terrorism has dominated the global security environment. Ongoing overseas operations and heavy reliance on reservists have raised concerns about how the Department of Defense (DOD) will continue to meet its requirements using an all-volunteer force. The Army, in particular, has faced continuing demand for large numbers of forces, especially for forces with support skills. GAO was mandated to examine the extent of DOD's reliance on personnel with high-demand skills and its efforts to reduce or eliminate reliance on these personnel. Accordingly, GAO assessed (1) the combat support and combat service support skills that are in high demand and the extent to which DOD officials have visibility over personnel who are available for future deployment and (2) the extent to which DOD has conducted a comprehensive, data-driven analysis of alternatives for providing needed skills. Ongoing operations in Iraq and Afghanistan have required large numbers of ground forces, creating particularly high demand for certain combat support and combat service support skills, such as military police and civil affairs. After determining which requirements can be met with contractor personnel, DOD then determines how to meet requirements for military personnel. DOD officials charged with identifying forces have not had full visibility over the pool of skilled personnel available for future deployments. For some skills, the combatant commander's operational requirements have exceeded the initial supply of readily available trained military forces. DOD has met demands for these skills through strategies such as reassigning or retraining personnel. However, many of the skilled personnel in high demand are reservists whose involuntary active duty is limited under the current partial mobilization authority and DOD and Army policy. 
To meet requirements, officials charged with identifying personnel for future rotations developed an inefficient, labor-intensive process to gather information needed for decision making because integrated, comprehensive personnel data were not readily available. DOD is taking steps to develop comprehensive data that identify personnel according to deployment histories and skills; however, until DOD systematically integrates such data into its process for identifying forces, it will continue to make important decisions about personnel for future rotations based upon limited information and lack the analytical bases for requesting changes in or exceptions to deployment policies. Although DOD has developed several strategies to meet the combatant commander's requirements for previous rotations, it has not undertaken comprehensive, data-driven analysis of options that would make more personnel available for future rotations in Iraq and Afghanistan. A key reason why DOD has not conducted comprehensive analyses of options is that its process for identifying forces focuses on one rotation at a time and does not take a long-term view of potential requirements. Prior GAO work has shown that reliable data about current and future workforce requirements are essential for effective strategic planning, as is the data-driven analysis of the number of personnel and the skill mix needed to support key competencies. With data that link deployment dates and skills, DOD could assess options, including using more personnel with support skills from the Army and other services, transferring more positions to high-demand areas, and changing deployment lengths. Each of these options has both advantages and disadvantages. However, without a comprehensive analysis of the options and their related advantages and disadvantages, DOD will be challenged to plan effectively for future requirements and to meet recruiting goals. 
Additionally, without linking data and options, the services may have difficulty deploying all reservists once before other reservists are required to deploy for a second time, which is a key DOD goal. Moreover, the Secretary of Defense and Congress will not have complete information with which to make decisions about the size and composition of the force, mobilization policies, and other issues.
Background The DTV transition will require citizens to understand the transition and the actions that some might have to take to maintain television service. For those households with subscription video service on all televisions or with all televisions capable of processing a digital signal, no action is required. However, households with analog televisions that rely solely on over-the-air television signals received through rooftop or indoor antennas must take action to be able to view digital broadcast signals after analog broadcasting ceases. The Digital Television Transition and Public Safety Act of 2005 addresses the responsibilities of two federal agencies—FCC and NTIA—related to the DTV transition. The act directs FCC to require full-power television stations to cease analog broadcasting by February 17, 2009. The act also directed NTIA to establish a $1.5 billion subsidy program through which households can obtain coupons toward the purchase of digital-to-analog converter boxes. In August 2007, NTIA selected International Business Machines Corporation (IBM) as the contractor to provide certain services for the program. On January 1, 2008, NTIA, in conjunction with IBM and in accordance with the act, began accepting applications for up to two $40 coupons per household that can be applied toward the purchase of eligible digital-to-analog converter boxes and, in mid-February 2008, began mailing the coupons. Initially, during the first phase of the program, any household is eligible to request and receive the coupons, but once the value of coupons redeemed or issued but not yet expired reaches $890 million, NTIA must certify to Congress that the program’s initial allocation of funds is insufficient to fulfill coupon requests. NTIA will then receive $510 million in additional program funds, but households requesting coupons during this second phase must certify that they do not receive cable, satellite, or any other pay television service. 
As of June 24, 2008, in response to NTIA’s statement certifying that the initial allocation of funds would be insufficient, all appropriated coupon funds were made available to the program. Consumers can request coupons up to March 31, 2009, and coupons can be redeemed through July 9, 2009. As required by law, all coupons expire 90 days after issuance. As unredeemed coupons expire, the funds obligated for those coupons are returned to the converter box subsidy program. Retailer participation in the converter box subsidy program is voluntary, but participating retailers are required to follow specific program rules to ensure the proper use and processing of converter box coupons. Retailers are obligated to, among other things, establish systems capable of electronically processing coupons for redemption and payment and tracking transactions. Retailers must also train their employees on the purpose and operation of the subsidy program. According to NTIA officials, NTIA initially explored the idea of setting requirements for training content, but decided to allow retailers the flexibility of developing their own training programs and provided retailers with sample training materials. Certification requires retailers to have completed an application form by March 31, 2008, and to attest that they have been engaged in the consumer electronics retail business for at least 1 year. Retailers must also register in the government’s Central Contractor Registration database, have systems or procedures that can be easily audited and that can provide adequate data to minimize fraud and abuse, agree to be audited at any time, and provide data tracking each coupon with a corresponding converter box purchase. NTIA may revoke retailers’ certification if they fail to comply with these regulations or if any of their actions are deemed inconsistent with the subsidy program. 
Converter boxes can also be purchased by telephone or online and be shipped directly to a customer’s home from participating retailers. At the time of our review, 29 online retailers were participating in the converter box subsidy program. Additionally, 13 telephone retailers were listed as participating in the program, 2 of which are associated with national retailers. Private and Federal Stakeholders Have Undertaken a Myriad of Activities Aimed at Increasing the Public’s Awareness of the Transition Private sector stakeholders, such as broadcasters and cable providers, have undertaken various education efforts to increase public awareness about the DTV transition. The NAB and the National Cable and Telecommunications Association initiated DTV transition consumer education campaigns in late 2007 at an estimated value of $1.4 billion combined. NAB has produced six versions of a public service announcement, including 15-second and 30-second versions in both English and Spanish and close-captioned versions. Private sector stakeholders have also produced DTV transition educational programs for broadcast and distribution, developed Web sites that provide information on the transition, and engaged in various other forms of outreach to raise awareness. Additionally, most of the national retailers participating in the NTIA converter box subsidy program are providing materials to help inform their customers of the DTV transition and the subsidy program. Examples of these materials include informational brochures in English and Spanish, educational videos and in-store displays in English and Spanish, informational content on retailer Web sites, and information provided in retailer advertising in Sunday circulars. FCC and NTIA also have ongoing DTV consumer education efforts, which target populations most likely to be affected by the DTV transition. 
Specifically, they focused their efforts on 45 areas of the country that have at least 1 of the following population groups: (1) more than 150,000 over-the-air households, (2) more than 20 percent of all households relying on over-the-air broadcasts, or (3) a top 10 city of residence for the largest target demographic groups. The target demographic groups include senior, low-income, minority, non-English-speaking, and rural households, as well as persons with disabilities. According to NTIA, its consumer education efforts will specifically target these 45 areas by leveraging partnerships and earned media spots (such as news stories or opinion editorials) to better reach the targeted populations. FCC indicated that while its outreach efforts focus on the targeted hard-to-reach populations, the only effort specifically targeting the 45 locations has been to place billboards in these communities. According to FCC, contracts exist for billboards in 26 of the 45 markets, and it is working to place billboards in the other 19 markets. Furthermore, FCC and NTIA have developed partnerships with some federal, state, and local organizations that serve the targeted hard-to-reach populations. NTIA is Effectively Implementing the Converter Box Subsidy Program, But Concerns Exist about NTIA’s Ability to Manage a Potential Spike in Demand NTIA has processed and issued coupons to millions of consumers, but a sharp increase in demand might affect NTIA’s ability to respond to coupon requests in a timely manner. NTIA and its contractors have implemented systems (1) to process coupon applications, (2) to produce and distribute coupons to consumers, and (3) for retailers to process coupons and receive reimbursement for the coupons from the government. Millions of consumers have requested converter box coupons, and most of the requested coupons have been issued. Through August 2008, households had requested approximately 26 million coupons.
NTIA had issued over 94 percent of all coupon requests, for more than 24 million coupons. Of those coupons issued, about 9.5 million (39 percent) had been redeemed and 31 percent had expired. After an initial spike at the beginning of the program, coupon requests have remained steady, averaging over 105,000 requests per day. Coupon redemptions, since coupons were first issued in February 2008, have averaged over 48,000 per day. In our consumer survey, we found that 35 percent of U.S. households are at risk of losing some television service because they have at least one television not connected to a subscription service, such as cable or satellite. However, through August 2008, only 13 percent of U.S. households had requested converter box coupons, and less than 5 percent had redeemed these coupons. As the transition date nears, there is the potential that many affected households that have not taken action might begin requesting coupons. Our consumer survey found that of those at risk of losing some television service and intending to purchase a converter box, most will likely request a coupon. In fact, in households relying solely on over-the-air broadcasts (approximately 15 percent), of those who intend to purchase a converter box, 100 percent of survey respondents said they were likely to request a coupon. Consumers have incurred significant wait times in the processing of their coupon requests, but the processing time from receiving requests to issuing coupons is improving. NTIA requires that 98 percent of all coupon requests be issued within 10 days, and the remainder be issued within 15 days. From February 17 through August 31, 2008, our analysis shows that the average duration between coupon request and issuance was over 16 days. In aggregate, 53 percent of all coupon requests had been issued within 10 days, and 39 percent of all coupon requests had been issued more than 15 days after being requested.
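The issuance, redemption, and expiration figures above are simple share-of-total computations. The sketch below illustrates how such rates can be derived; the counts are rounded, illustrative figures consistent with the statement's "more than 24 million issued," "about 9.5 million redeemed," and "31 percent expired," not NTIA's exact program data.

```python
# Illustrative sketch of the coupon-rate calculations described above.
# All counts are rounded placeholders, not NTIA's exact figures.

def rate(part: float, whole: float) -> int:
    """Return part/whole as a percentage, rounded to the nearest point."""
    return round(100 * part / whole)

requested = 26_000_000   # coupons requested through August 2008 (approx.)
issued    = 24_400_000   # assumed: "more than 24 million" issued
redeemed  =  9_500_000   # "about 9.5 million" redeemed
expired   =  7_600_000   # assumed figure consistent with "31 percent expired"

issuance_rate   = rate(issued, requested)   # share of requests issued
redemption_rate = rate(redeemed, issued)    # share of issued coupons redeemed
expiration_rate = rate(expired, issued)     # share of issued coupons expired
```

With these placeholder counts, the computed rates land on the same whole-number percentages reported in the testimony (94, 39, and 31 percent, respectively), which is simply a consistency check on the arithmetic, not a reconstruction of NTIA's dataset.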
From May 1 through August 31, 2008, the average processing time from coupon request to issuance was 9 days. Given the processing time required in issuing coupons, NTIA’s preparedness to handle volatility in coupon demand is unclear. Fluctuation in coupon requests, including the potential for a spike in requests as the transition date approaches, could adversely affect consumers. When NTIA faced a deluge of coupon requests in the early days of the converter box subsidy program, it took weeks to work through the backlog of requested but not yet issued coupons. According to NTIA, it expects a similar increase in requests around the transition date, and such an increase may cause a delay in issuing coupons. As a result, consumers might incur significant wait time before they receive their coupons and might lose television service during the time they are waiting for the coupons. While NTIA and its contractors have demonstrated the capacity to process and issue large numbers of coupon requests over short periods, they have yet to establish specific plans to manage a potential spike or a sustained increase in demand leading up to the transition. We analyzed data to compare areas of the country that comprise predominantly minority and elderly populations with the rest of the U.S. population and found some differences in the coupon request, redemption, and expiration rates for Hispanic, black, and senior households compared with the rest of the U.S. population. For example, zip codes with a high concentration of Latino or Hispanic households had noticeably higher request rates (28 percent) when compared with non-Latino or non-Hispanic zip codes (12 percent). However, households in predominantly black and Latino or Hispanic zip codes were less likely, compared with households outside these areas, to redeem their coupons once they received them. As shown in table 1, the overall rate of redemption for the converter box subsidy program is 39 percent.
Approximately 37 percent of coupons have been redeemed in predominantly Latino or Hispanic areas. In predominantly black areas, 32 percent of coupons have been redeemed. We found that in areas of the country with a high concentration of seniors, fewer coupons were requested (9 percent) compared with areas of the country that did not have a high concentration of seniors (13 percent). Redemption rates for the senior population were lower than the redemption rates in the rest of the country. Regarding coupon expirations, we found that the areas comprising Latino or Hispanic households allowed 27 percent of their coupons to expire, while areas with predominantly senior populations allowed 43 percent of their coupons to expire. To determine participation in the converter box subsidy program in the 45 areas of the country receiving targeted outreach by NTIA and FCC, we analyzed NTIA coupon data (including requests, redemptions, and expirations) in the 45 areas compared with the rest of the country not targeted by NTIA and FCC. We found participation levels were about the same in the targeted areas when compared with the rest of the country. For example, we found in the 45 targeted areas, 12.2 percent of households have requested coupons compared with 12.8 percent for the rest of the country not targeted by NTIA and FCC. NTIA views the similarity in request, redemption, and expiration rates between the 45 targeted areas and the rest of the country as a success. As the sellers of the converter boxes, retailers play a crucial role in the converter box subsidy program and are counted on to inform consumers about it. At the time of our review, seven national retailers were certified to participate in the subsidy program. Participating retailers are obligated to, among other things, train employees on the purpose and operation of the subsidy program.
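The zip-code-level comparisons described earlier (request and redemption rates in predominantly minority or senior areas versus the rest of the country) amount to a group-and-rate computation over per-zip coupon aggregates. The sketch below shows one way such a comparison could be structured; the field names and the four sample rows are entirely hypothetical, since GAO's underlying dataset is not published in this testimony.

```python
# Hypothetical sketch of a zip-code-level comparison of redemption rates.
# The rows and demographic flag are illustrative, not GAO's actual data.
from collections import defaultdict

# (zip_code, predominantly_senior, coupons_requested, coupons_redeemed)
zip_rows = [
    ("00001", True,   900,  350),
    ("00002", False, 1300,  520),
    ("00003", True,   800,  300),
    ("00004", False, 1200,  480),
]

def redemption_rate_by_group(rows):
    """Sum requests and redemptions per group flag, then compute rates."""
    totals = defaultdict(lambda: [0, 0])  # flag -> [requested, redeemed]
    for _zip, flag, req, red in rows:
        totals[flag][0] += req
        totals[flag][1] += red
    return {flag: round(100 * red / req)
            for flag, (req, red) in totals.items()}

rates = redemption_rate_by_group(zip_rows)
# rates[True] is the redemption rate in predominantly senior zip codes;
# rates[False] is the rate everywhere else in this toy dataset.
```

The same pattern extends naturally to request and expiration rates, or to other demographic flags (e.g., predominantly Hispanic or black zip codes), by swapping in the relevant columns.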
All of the retailers with whom we spoke told us they were training employees on the DTV transition and the subsidy program, although the retailers varied in which staff must complete training. As part of our work, we conducted a “mystery shopper” study by visiting 132 randomly selected retail locations in 12 cities across the United States that were listed as participating in the converter box subsidy program. We did not alert retailers that we were visiting their stores or identify ourselves as government employees. During our visits, we engaged the retailers in conversation about the DTV transition and the subsidy program to determine whether the information they were providing to customers was accurate and whether individual stores had coupon-eligible converter boxes available. While not required to do so, some stores we visited had informational material available and others had signs describing the DTV transition and the subsidy program. At most retailers (118) we visited, a representative was able to correctly identify that the DTV transition would occur in February 2009. Additionally, nearly all (126) retailers identified a coupon-eligible converter box as an option available to consumers to continue watching television after the transition. Besides coupon-eligible converter boxes, representatives identified other options to continue viewing television after the transition, including purchasing a digital television (67) or subscribing to cable or satellite service (77).
However, in rare instances, we heard erroneous information from the retailers, including one representative who told us that an option for continuing to watch television after the transition was to obtain a “cable converter box” from a cable company and another representative who recommended buying an “HD tuner.” Since participating retailers are obligated to train their employees on the purpose and operation of the subsidy program, we observed whether the representative was able to explain various aspects of the subsidy program. A vast majority of the representatives were able to explain how to receive or apply for a coupon and the value of the coupon. Although we could obtain information from the majority of the stores that we visited and that were listed as participating in the subsidy program, in a few instances, we were not able to ask questions and observe whether the information provided was accurate. In two instances, there was no retailer at the store location listed as a participating retailer on NTIA’s Web site (https://www.dtv2009.gov/VendorSearch.aspx). In another instance, the location listed was under construction and had not yet opened. In two additional instances, the locations listed were private residences—one was an in-home electronics store, and the other was a satellite television installer working from a house. We asked NTIA how it ensured the accuracy of the list of participating retailers on its Web site, and according to NTIA, ensuring the accuracy of the list is the responsibility of the retailers. NTIA said it provides a list of locations to each retailer prior to placing the list on the Web site, and retailers can update addresses or add new listings as warranted. Conclusions and Recommendation NTIA estimates that it will see a large increase in the number of coupon requests in the first quarter of 2009, and our analysis confirms that, as the transition nears, a spike in coupon requests is likely.
However, NTIA has not developed a plan for managing that potential spike or sustained increase in coupon demand. The time required for processing coupons has improved since consumers incurred significant wait times to receive their coupons at the beginning of the program, but until recently NTIA fell short of its requirement for processing coupons within 10 to 15 days. Given the relatively low participation rates to date and the amount of time it took to process the spike in coupon requests in the early days of the program, NTIA’s ability to handle volatility in coupon demand without a plan is uncertain. Consequently, consumers face potential risks that they might not receive their coupons before the transition and might lose their television service. To help NTIA prepare for a potential increase in demand for converter box coupons and so that consumers are not left waiting a lengthy amount of time for requested coupons, the report we are releasing today recommends that the Secretary of Commerce direct the Administrator of the NTIA to develop a plan to manage volatility in coupon requests so that coupons will be processed and mailed within 10-15 days from the day the coupon applications are approved, per NTIA’s stated requirement. In reviewing a draft of the report, the Department of Commerce (which contains NTIA) did not state whether it agreed or disagreed with our recommendation, but did say the Department shares our concern about an increase in coupon demand as the transition nears. Further, its letter stated it is committed to doing all that it can within its statutory authority and existing resources to ensure that all Americans are ready for the DTV transition. In its letter, FCC noted consumer outreach efforts it has taken related to the DTV transition. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. 
GAO Contact and Staff Acknowledgments For further information about this testimony, please contact Mark L. Goldstein at (202) 512-2834. Individuals making key contributions to this testimony included Colin Fallon, Simon Galed, Eric Hudson, Bert Japikse, Aaron Kaminsky, Sally Moino, Michael Pose, and Andrew Stavisky. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Digital Television Transition and Public Safety Act of 2005 requires all full-power television stations in the United States to cease analog broadcasting after February 17, 2009, known as the digital television (DTV) transition. The National Telecommunications and Information Administration (NTIA) is responsible for implementing a subsidy program to provide households with up to two $40 coupons toward the purchase of converter boxes. In this testimony, which is principally based on a report being issued today, GAO examines (1) what consumer education efforts have been undertaken by private and federal stakeholders and (2) how effective NTIA has been in implementing the converter box subsidy program, and to what extent consumers are participating in the program. To address these issues, GAO analyzed data from NTIA and reviewed legal, agency, and industry documents. Also, GAO interviewed a variety of stakeholders involved with the DTV transition. Private sector and federal stakeholders have undertaken various consumer education efforts to raise awareness about the DTV transition. For example, the National Association of Broadcasters and the National Cable and Telecommunications Association have committed over $1.4 billion to educate consumers about the transition. This funding has supported the development of public service announcements, education programs for broadcast, Web sites, and other activities. The Federal Communications Commission (FCC) and NTIA have consumer education plans that target those populations most likely to be affected by the DTV transition. Specifically, they identified 45 areas of the country as high risk that included areas with at least 1 of the following population groups: (1) more than 150,000 over-the-air households, (2) more than 20 percent of all households relying on over-the-air broadcasts, or (3) a top 10 city of residence for the largest target demographic groups. 
The target demographic groups include seniors, low-income, minority and non-English speaking, rural households, and persons with disabilities. In addition to targeting these 45 areas of the country, FCC and NTIA developed partnerships with organizations that serve these hard-to-reach populations. NTIA is effectively implementing the converter box subsidy program, but its plans to address the likely increase in coupon demand as the transition nears remain unclear. As of August 31, 2008, NTIA had issued almost 24 million coupons and as of that date approximately 13 percent of U.S. households had requested coupons. As found in GAO's recent consumer survey, up to 35 percent of U.S. households could be affected by the transition because they have at least one television not connected to a subscription service, such as cable or satellite. In U.S. households relying solely on over-the-air broadcasts (approximately 15 percent), of those who intend to purchase a converter box, 100 percent of survey respondents said they were likely to request a coupon. With a spike in demand likely as the transition date nears, NTIA has no specific plans to address an increase in demand; therefore, consumers might incur significant wait time to receive their coupons and might lose television service if their wait time lasts beyond February 17, 2009. In terms of participation in the converter box subsidy program, GAO analyzed coupon data in areas of the country comprising predominantly minority and senior populations and found that households in both predominantly black and Hispanic or Latino areas were less likely to redeem their coupons compared with households outside these areas. Additionally, GAO analyzed participation in the converter box subsidy program in the 45 areas of the country on which NTIA and FCC focused their consumer education efforts and found coupon requests to be roughly the same for zip codes within the 45 targeted areas compared with areas that were not targeted. 
Retailers play an integral role in the converter box subsidy program by selling the converter boxes and helping to inform their customers about the DTV transition. GAO visited 132 randomly selected retail stores in 12 cities. Store representatives at a majority of the retailers GAO visited were able to correctly state that the DTV transition would occur in February 2009 and how to apply for a converter box coupon.
Matters for Congressional Consideration Congress should consider requiring the Departments of Justice, Homeland Security, and Treasury to collaborate on the development and implementation of a joint radio communications solution. Specifically, Congress should consider requiring the departments to establish an effective governance structure that includes a formal process for making decisions and resolving disputes, define and articulate a common outcome for this joint effort, and develop a joint strategy for improving radio communications. Congress should also consider specifying deadlines for completing each of these requirements. Agency Comments and Our Evaluation We obtained written comments on a draft of this report from DOJ, DHS, and the Treasury, which are reprinted in appendixes II, III, and IV respectively. In comments from DOJ, the Assistant Attorney General for Administration largely disagreed with our findings and conclusions. DOJ stated that we had not recognized that circumstances had changed since the inception of our review and that departmental leaders had agreed on a common approach that would address concerns we have raised. However, we believe that our review accurately characterizes the evolution of circumstances throughout the development of IWN as well as the current status of the program. For example, we noted in our briefing slides that the departments had collaborated productively on the Seattle/Blaine pilot program, which served as a working demonstration and test of the IWN design. We also acknowledged in the slides that the departments had recently established a memorandum of understanding (MOU) regarding development of interoperable communications systems in the future. While that step is important, an effective governance structure still needs to be implemented before decisions can be made and procedures established for overcoming the differing missions, priorities, funding structures, and capabilities among the departments. 
DOJ also commented that the current business environment is not conducive to a single mobile-radio solution, and that such an approach is no longer feasible or cost-effective. In the slides we pointed out that a single, common project or system is not necessarily the best solution, and our conclusions do not advocate such a system as the best solution. We concluded that successful collaboration on a joint solution—whether that solution is IWN or an alternative approach—is necessary to promote efficient use of resources, reduce duplicative efforts, and encourage interoperability. Although a joint solution could be based on a single, nationwide network, such as an extension of the original IWN design, it could also be, for example, a mutually agreed-upon strategy for developing separate but interoperable networks and systems. DOJ stated that it planned to continue pursuing eventual integration and interoperability with DHS and other entities using common standards and guidelines rather than through a single, central solution. We agree that the implementation of common standards and guidelines is important and can help facilitate a joint project such as this. The Seattle/Blaine pilot project, for example, was based on the Project 25 set of standards. However, agreement has not yet been reached on the standards and guidelines that are to shape future collaboration among the departments on a joint approach to radio communications. As reflected in the briefing slides, we believe that success hinges on a means to overcome differences in missions and cultures, a collaborative governance structure through which decisions are made and disputes resolved, and a joint strategy to align activities and resources to achieve a joint solution. DOJ also stated that, where the report seemed to suggest that DOJ and other agencies had not collaborated, the departments had in fact worked together and collaborated extensively.
However, as described in the briefing, we disagree with this statement. While DOJ has collaborated with other agencies on the Seattle/Blaine pilot project, the agencies determined that that specific system design could not be implemented on a nationwide scale, and they have not acted collaboratively to identify an alternative approach for a jointly coordinated communication solution. As discussed in the briefing, while the departments recently established an MOU regarding development of interoperable communications systems in the future, no progress had been made in re-establishing the joint governance structure outlined in the agreement, and the departments have been actively working to develop independent communications systems. In effectively abandoning collaboration on a joint solution, the departments risk duplication of effort and inefficient use of resources as they continue to invest significant resources in independent solutions. Further, these stovepipe efforts will not ensure the interoperability needed to serve day-to-day law enforcement operations or a coordinated response to terrorist or other events. As stated above, the adoption of key collaboration practices will be critical to a successful outcome. Finally, the department stated that it understood GAO’s concern that the departments risk duplication of effort and that it had made great progress in minimizing duplication/overlap, as evidenced by the Seattle/Blaine pilot project. However, as discussed above, the pilot project has not been chosen as a basis for a jointly coordinated, nationwide communications solution, nor has any other specific strategy been adopted that would provide assurance that duplication will be minimized in the future. DOJ also agreed that agencies must begin meeting quarterly to improve communications and that they must better document their overall, collective strategy beyond the MOU. 
Until a joint strategy to align activities and resources is adopted, we believe the potential for duplication and overlap remains. In comments from DHS, the Director of the Departmental Audit Liaison Office discussed the development of the IWN program and noted that issues had been identified with joint governance, the management of priorities and requirements across multiple departments, and addressing user requirements within schedule constraints. In this regard, DHS stated that our report was focused on mandating that the three agencies have one radio communications solution and that it implied that any other option would result in a stovepipe of non-interoperable communications systems. We disagree. As discussed above, in the slides we pointed out that a single, common project or system is not necessarily the best solution, and we do not advocate such a system as the best solution. We concluded that successful collaboration on a joint solution—whether that solution is IWN or an alternative approach—is necessary to promote efficient use of resources, reduce duplicative efforts, and encourage interoperability. Although a joint solution could be based on a single, nationwide network, such as an extension of the original IWN design, it could also be, for example, a mutually agreed-upon strategy for developing separate but interoperable networks and systems. Regarding the breakdown of the original collaborative structure for the IWN program, DHS commented that DHS and DOJ are employing different radio designs funded by Congress that are commensurate with spectrum needs in their environments and that the two departments have different regional priorities, such that a common system will not work on a national level. In the briefing, we recognized that the two departments had different priorities and that those differences led to an inability to resolve conflicts on the original IWN program. 
However, as discussed above, in effectively abandoning collaboration on a joint solution, the departments risk duplication of effort and inefficient use of resources as they continue to invest significant resources in independent solutions. Given their differences, adoption of key collaboration practices will be critical to ensuring that separate projects in the two departments are successfully coordinated in the future so that radio communications are improved among federal agencies, costs reduced, and duplication eliminated wherever possible. DHS also commented that we had not discussed the departments’ concerns about the projected expense of expanding the Seattle/Blaine pilot project to a national level. While we did not discuss specific cost projections for this option, which is no longer being considered, we recognize that any investment in coordinated future communications between the departments will be substantial. Accordingly, it will be critical to ensure a properly coordinated approach so that duplication and overlap are avoided. Regarding current collaboration with DOJ and Treasury, DHS noted that a memorandum of understanding had been signed in January 2008 and described how decisions are to be made under this agreement. DHS went on to describe internal priorities, such as the need for radio system upgrades in Customs and Border Protection, and stated that any cross-departmental efforts should not result in delays to these priorities. We do not dispute the urgency for upgrading radio systems that DHS cites. However, given that all three departments have differing priorities, as discussed in the slides, it remains critical that key collaboration practices are adopted to ensure successful coordination across departments. Finally, DHS briefly outlined its vision for a “tiered” strategy for achieving effective radio communications in a timely and cost-effective manner.
DHS stated that the first goal of the partnership will be to define an outcome and an associated joint strategy. We agree that these elements—along with an effective governance structure that includes a formal process for making decisions and resolving disputes—are key elements for successful collaboration and implementation of a joint radio communications solution. In comments from the Treasury, the Chief Information Officer stated that the department continued to be highly supportive of the overall goals of the IWN program and looked forward to continuing to work with DOJ and DHS to advance law enforcement and emergency services communications. We are sending copies of this report to interested congressional committees and the Attorney General, the Secretary of Homeland Security, and Secretary of the Treasury. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-6253 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Appendix I: Briefing to Staff of the Senate Committee on Homeland Security and Governmental Affairs Radio Communications: Congressional Action Needed to Ensure Agencies Collaborate to Develop a Joint Solution September 25, 2008 Objective, Scope, and Methodology DOJ, DHS, and Treasury Are No Longer Pursuing a Joint Solution Agency Comments and Our Evaluation The tragic events of 9/11 and Hurricane Katrina have highlighted the critical importance of having effective radio communications systems for law enforcement and public safety agencies including federal agencies with such responsibilities. 
In order to effectively respond to events such as natural disasters, criminal activities, and domestic terrorism, law enforcement and public safety agencies need reliable systems that enable communication with their counterparts in other disciplines and jurisdictions. Further, since the 1990s, increasing demand for radio communications capabilities in both the private and public sectors has created a need to use radio communications capacity more efficiently. The Integrated Wireless Network (IWN) was intended to be a collaborative effort among the Departments of Justice (DOJ), Homeland Security (DHS), and the Treasury to provide secure, seamless, interoperable, and reliable nationwide wireless communications in support of federal agents and officers engaged in law enforcement, protective services, homeland defense, and disaster response missions. This initiative, begun in 2001, was originally estimated to cost approximately $5 billion. Interoperability refers to the ability of different systems to readily connect with each other and enable timely communication. Objective, Scope, and Methodology As agreed, our objective for this review was to determine the extent to which DOJ, DHS, and Treasury are developing a joint radio communications solution to improve communication among federal agencies.
To address our objective, we reviewed and analyzed documentation from DOJ, DHS, and Treasury to determine the status of IWN; interviewed officials from each department about the extent to which they are collaborating with the other departments on IWN or an alternative joint radio communications solution; reviewed and analyzed documentation for independent radio communications projects at DOJ and DHS to identify actions the departments are taking to improve their radio communications systems; reviewed and analyzed past and present agreements among the departments to determine the extent to which a governance structure is in place that enables effective collaboration; and compared collaboration activities performed by the departments to selected practices previously identified by GAO as helpful to sustaining collaboration among federal agencies. We performed our audit work in the Washington, D.C., metropolitan area at DOJ, the Federal Bureau of Investigation, the Drug Enforcement Administration, DHS, Immigration and Customs Enforcement, Customs and Border Protection, Treasury, the National Institute of Standards and Technology, and the National Telecommunications and Information Administration. We also conducted work at these agencies’ field offices in the Seattle, Washington, metropolitan area, which was the location of the key pilot demonstration for the IWN program. We conducted this performance audit from February 2008 to September 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
See also GAO, Electronic Government: Potential Exists for Enhancing Collaboration on Four Initiatives, GAO-04-6 (Washington, D.C.: Oct. 10, 2003). While DOJ, DHS, and Treasury had originally intended IWN to be a joint radio communications solution to improve communication among law enforcement agencies, IWN is no longer being pursued as a joint development project. Instead of focusing on a joint solution, the departments have begun independently modernizing their own wireless communications systems. While DOJ and Treasury (and later DHS) collaborated on a pilot demonstration of IWN in the Seattle/Blaine area that continues to provide service to multiple agencies, the departments have determined that this specific system design cannot be implemented on a nationwide scale, and they have not acted collaboratively to identify an alternative approach for a jointly coordinated communications solution. In addition, the formal governance structure that was established among the three departments has been disbanded, and the contract for developing a new IWN design, awarded over a year and a half ago, is not being used jointly by the departments for this purpose. Currently, DOJ is planning to implement a nationwide network for its component agencies, and DHS and its components are pursuing numerous independent solutions. A primary reason why the collaboration on a joint communications solution has not been successful is that the departments did not effectively employ key cross-agency collaboration practices. Specifically, they could not agree on a common outcome or purpose to overcome their differences in missions, cultures, and established ways of doing business; they have not established a collaborative governance structure with a process for decision making and resolving disputes; and they have not developed a joint strategy for moving forward. While DHS considers improving radio communications at the nation’s borders to be a major priority, DOJ’s priorities are in other areas.
Program officials from both departments acknowledged that differing priorities led to an inability to resolve conflicts. As a result, they now have several initiatives aimed at high-level coordination, none of which are focused on developing a joint communications solution. Department officials have indicated that they have not made any progress on re-establishing a joint governance structure and decision-making procedures for a joint communications solution. In abandoning collaboration on a joint solution, the departments risk duplication of effort and inefficient use of resources as they continue to invest significant resources in independent solutions. Further, these stovepipe efforts will not ensure the interoperability needed to serve day-to-day law enforcement operations or a coordinated response to terrorist or other events. Given the importance of collaborating effectively toward improving radio communications among federal agencies, reducing costs, and eliminating duplication where possible, and the departments’ failure to develop a joint radio communications solution through their own initiative, Congress should consider requiring that the Departments of Justice, Homeland Security, and Treasury employ key cross-agency collaboration practices discussed in this report to develop a joint radio communications solution. We received comments via e-mail from DOJ and DHS on a draft of these briefing slides. Treasury officials stated that they had no comments on the draft briefing slides. In their comments, officials from DOJ’s Office of the Chief Information Officer disagreed with our findings and conclusions in several areas. First, the department officials stated that our analysis was flawed and unrealistic in focusing on a single, common project as the best solution for supporting missions, improving interoperability, and achieving cost efficiencies.
We disagree that our conclusions advocate a single system as the best solution and clarified our position in the briefing that a joint approach could mean a single system or it could be a mutually agreed-upon strategy for developing separate but interoperable networks. Second, DOJ officials stated that we misrepresented their efforts to work with other agencies, including DHS, and that the department had tried to reach consensus and compromise with DHS but organizational challenges could not be overcome. We acknowledge that DOJ took steps to collaborate on IWN; however, we also note that when the challenges of collaborating could not be overcome, progress stalled. Rather than contradicting our conclusions, we believe these facts support our analysis that key practices for collaborating were not established or sustained. Unless such practices are established and sustained, the departments are unlikely to succeed at implementing a joint collaborative solution. Third, department officials stated that we unfairly characterized the results of the Seattle/Blaine pilot and failed to recognize DHS’s lack of contribution to the pilot and its requirements development. However, the pilot and its requirements development occurred prior to DHS’s involvement in the program. Further, we acknowledge within our briefing that the pilot provided a working demonstration and test of the preliminary network design as well as several specific benefits. Nevertheless, our discussions with users in the pilot area reveal that the pilot network did not meet many of their needs. In order to make progress in addressing unmet needs through a joint partnership, it will be important that the departments collaborate on alternative approaches based on lessons learned from this pilot. Finally, DOJ expressed concern that our findings did not address the business and operational issues facing IWN, including differing missions and priorities and a lack of funding. 
While these issues can be challenging, the departments have not implemented the governance structure or employed the key collaboration practices needed to overcome these challenges. Officials from DHS’s National Protection and Programs Directorate did not state whether they agreed or disagreed with our findings but provided suggestions for consideration in the development of a joint strategy, including expanding the partnership to include other federal departments, leveraging existing infrastructure across all levels of government, and ensuring that interoperability is a priority focus. The additional considerations proposed by DHS for inclusion in the joint partnership are consistent with our results and may merit attention as the partnership develops. DHS officials also provided technical comments that we have incorporated into the briefing slides, as appropriate. Radio frequency communications are vital to public safety organizations that respond to natural disasters and terrorist acts. These organizations include the nation’s first responders (such as firefighters, police officers, and ambulance services) as well as federal agencies that have law enforcement and public safety responsibilities, such as the Federal Bureau of Investigation. Federal law enforcement agencies rely on wireless land mobile radio systems for their day-to-day operations and use radio communications to provide for the safety of agents and the public. Further, in order to perform public safety operations effectively, these communications must be secure as well as reliable. The origins of the IWN program date back to 2001. At that time, DOJ and Treasury were independently pursuing efforts to upgrade their land mobile radio systems to meet a National Telecommunications and Information Administration (NTIA) requirement to reduce their use of radio frequency spectrum. 
Due to the similarity of their law enforcement missions and overlapping geographic jurisdictions, the two departments began discussing a joint project in August 2001. Congress had passed the Telecommunications Authorization Act of 1992 (Pub. L. No. 102-538 (1992)), which mandated that the Secretary of Commerce and the NTIA (the organization responsible for effective use of radio frequencies by federal agencies) develop a plan to make more efficient use of federal land mobile radio spectrum. In response, NTIA required, with certain exceptions, that the channel bandwidth in certain frequency bands used by federal agencies for land mobile radio systems be reduced from 25 to 12.5 kilohertz. This reduction in channel bandwidth is referred to as narrowbanding. NTIA specified different time frames for the transition based on the frequency band and whether it was a new or existing system. The subsequent events of 9/11 further underscored the need for secure, wireless, interoperable communications for all levels of government, and in November 2001, DOJ and Treasury took the initiative to create the IWN program by signing a memorandum of understanding to collaborate on achieving cost efficiencies and improving communications operability among their own law enforcement agencies as well as with other federal, state, and local organizations. The IWN joint program was intended to be a nationwide radio communications system that would provide secure, seamless, and reliable wireless communications in support of law enforcement. In addition, the IWN program would serve as a means for upgrading aging equipment. An early pilot implementation provided a working network for several federal agencies and enabled interoperability with several state and local law enforcement organizations in the Seattle/Blaine area. Following the establishment of DHS, several law enforcement components from DOJ and Treasury were transferred to the new department and the scope of IWN was expanded. In June 2004, DOJ, DHS, and Treasury signed a new memorandum of understanding.
This agreement established the following governance structure to oversee and carry out the implementation of IWN: The Joint Program Office, consisting of staff assigned to the office on a full-time basis from each of the departments, was responsible for—among other things—performing all IWN program administrative and project management functions. The IWN Executive Board was responsible for, among other things, providing policy and program direction to the Joint Program Office. The National Project Team, comprised of representatives from each component/bureau participating in the IWN program, was responsible for—among other things—providing information to the Joint Program Office required for the development, implementation, and administration of the IWN system. Within the departments, DOJ’s Wireless Management Office is currently responsible for funding and management related to wireless communications and IWN for the department. At DHS, the Office of the Chief Information Officer was originally responsible for the IWN program; since DHS was created, the department has gone through a series of management changes, and in May 2007 the Office of the Chief Information Officer transferred all management responsibilities for IWN to the newly formed Office of Emergency Communications, which is currently responsible for IWN. The Office of the Chief Information Officer retained authority over spectrum allocation for the DHS components. A Treasury program office represents the department in IWN-related activities; while Treasury currently has about 4,500 agents, the total number of agents and officers who are potential radio users among all three departments is over 80,000. The memorandum of understanding described identical responsibilities and resource contributions for DOJ and DHS. However, Treasury was not required to share the costs of designing and building IWN, given its reduced number of law enforcement personnel after creation of DHS.
In July 2004, the IWN Executive Board initiated an acquisition strategy to award a contract to: obtain reliable, secure, nationwide wireless communication capabilities; reduce costs by leveraging economies of scale; enable rapid deployment of radio communications functionality nationwide; enhance interoperability, operational effectiveness, and support through increased coverage and capabilities; and establish interoperability with other federal and non-federal wireless users through the consistent application of standards developed from this effort. The strategy envisioned selecting a single contractor to implement the entire IWN program using a 3-phased process: In phase 1, vendors submitted information regarding their high-level conceptual approach, organizational experience, and past performance. As a result of this process, four vendors continued in the acquisition process. This phase was completed in December 2004. In phase 2, the four vendors submitted detailed technical, management, and cost proposals to accomplish the entire IWN program. Based on an evaluation of these proposals, two vendors were awarded contracts to prepare detailed system designs. This phase was originally scheduled for completion in May 2005 but was not completed until June 2006. Phase 3 was to select the winning contractor based on evaluation of the detailed system designs submitted by each contractor. As a result of this process, General Dynamics C4 Systems was selected as the IWN systems integrator in April 2007. Figure 1 shows a timeline of major events related to IWN. We have previously reported on the importance of communications interoperability to effective public safety operations. Interoperability has been significantly hampered by the use of incompatible radio systems. Different technologies and configurations, including proprietary designs made by different manufacturers, have limited the interoperability of such systems.
In 2004, we reported that a fundamental barrier to successfully establishing interoperable communications for public safety was the lack of effective, collaborative, interdisciplinary, and intergovernmental planning. Further, in 2007, we made recommendations to DHS to improve interoperable communications among federal, state, and local first responders. Among other things, we recommended that DHS develop a plan that strategically focused its interoperability programs and provided quantifiable performance measures. Program officials indicated that they were in the process of developing such a plan; however, they had not established a completion date for it. We have also previously reported on key practices agencies should employ to help them overcome the barriers to successful inter-agency collaboration. These practices include: defining and articulating a common outcome or purpose that overcomes differences in department missions, cultures, and established ways of doing business; establishing a governance structure, including a collaborative management structure with defined leadership, roles and responsibilities, and a formalized process for making decisions and resolving disputes; and establishing mutually reinforcing or joint strategies that work in concert with those of the partners or are joint in nature to align activities and resources to accomplish the common outcome. Implementing these practices is critical to sustaining a successful inter-agency project such as IWN. DOJ, DHS, and Treasury are no longer pursuing a joint solution Given the importance of radio communications and the reality of limited resources, it is critical that agencies find ways to work together to achieve effective and efficient interoperable solutions.
In particular, the advantages of collaborating to develop a joint radio communications solution clearly outweigh the benefits of each department pursuing its own radio communications initiative, as DOJ, DHS, and Treasury agreed when they signed on to the IWN program. The benefits of developing IWN as a joint communications solution, as identified by the program, include: supporting departmental missions effectively and efficiently, providing sufficient communications coverage for current operations, achieving efficient use of radio spectrum, improving interoperability with federal, state, and local law enforcement agencies, and achieving cost efficiencies through resource consolidation and economies of scale. Achieving these benefits hinges on successful inter-agency collaboration. Despite early progress, the departments are pursuing independent solutions: although the departments made early progress in jointly developing and implementing a pilot program, they are no longer pursuing IWN as a joint solution and instead are independently modernizing their own wireless communications systems. DOJ and Treasury (and later DHS) contributed resources to develop an operational pilot in the Seattle/Blaine area to demonstrate the original IWN design. This pilot provided a working demonstration and test of the preliminary network design, generally improved communications in the coverage area, addressed federal encryption requirements through new equipment, established technical solutions for interoperability with selected state and local organizations, and provided valuable lessons learned. While the pilot remains operational and has been expanded to increase coverage in areas of Washington and Oregon, several DOJ and DHS components in the region have been unable to fully use the system due to unmet requirements. Components in the area continue to maintain legacy networks to ensure complete coverage.
Since the pilot demonstration, DOJ and DHS have concluded that the pilot design could not be implemented jointly on a nationwide scale. DOJ officials expressed concern that it would be too expensive to expand the pilot network to fulfill DOJ, DHS, and Treasury requirements on a nationwide scale, while DHS officials were also concerned that the design would not be technically well suited to meet DHS needs. Since deciding not to proceed with the IWN pilot design jointly, the departments have not developed an alternative approach for collaborating on a joint communications solution, either through development of a single, nationwide network, such as an extension of the original IWN design, or through a mutually agreed-upon strategy for developing separate but interoperable networks and systems that can accommodate the needs of all participants and incorporate the lessons learned from prior efforts (such as the pilot). For example: The departments have not used their IWN contract as a vehicle for development of a joint solution. For nearly three years, DOJ, DHS, and Treasury jointly participated in the process of selecting a systems integrator. However, since that selection, the departments have not used the IWN contract (awarded a year and a half ago) to begin developing a joint nationwide radio communications solution. Instead, the task order that has been issued based on the IWN contract is being used for establishing a joint program office for the contractor and DOJ—not for DHS or Treasury. The task order specifies that the contractor will draft architecture documents for developing a communications system for DOJ—it does not include DHS or Treasury. The formal governance structure for IWN originally supported by the three departments has been disbanded. Specifically, the IWN Executive Board and the National Project Team stopped meeting after award of the IWN contract.
In addition, the Joint Program Office that was intended to manage IWN is no longer supported by shared staff and resources from the three agencies. Although officials from the three departments stated that they talk to each other about radio communications issues, these discussions have not occurred on a regular basis and have not been used to re-establish a formal governance structure for developing a joint communications solution. (Treasury currently has one employee collocated with the DOJ Wireless Management Office to facilitate exchange of information; DHS does not contribute any staff or resources to the joint program office.) DOJ’s plan is to modernize legacy systems on a regional basis, replace or decommission certain systems, and deploy new systems to meet federal requirements for reduced spectrum use and encryption. Establishing interoperability with other federal, state, and local organizations; network redundancy; trunking; and spectrum efficiency are to be included in later phases, as funding is available. According to the department, the total cost is estimated at $1.23 billion, and the system will be implemented over 6 to 7 years. DHS is pursuing multiple approaches at both the component and department levels to meet different priorities. For example, since 2005, Customs and Border Protection has been developing and implementing a nationwide radio communications network intended to improve and update radio communications for Customs and Border Protection officers and agents—referred to as the Tactical Communications Modernization Project. In contrast, Immigration and Customs Enforcement officials have adopted a different approach, looking for opportunities to strategically partner with other agencies and leverage existing assets to meet their operational requirements. Immigration and Customs Enforcement has submitted a number of proposals to the department for approval.
While initiatives such as these are reviewed by the DHS Office of the Chief Information Officer, they are funded at the component level and focus on meeting the needs of individual components. In addition to such component initiatives, the DHS Office of Emergency Communications (OEC), which is responsible for IWN, is pursuing a high-level strategy for developing radio communications networks, based on shared infrastructure, as an alternative to the original IWN design. The OEC approach, which has been explored with the assistance of the Federal Partnership for Interoperable Communications, focuses on coordination with federal, state, and local organizations that are building or planning to build large communications networks so that these networks might also meet the needs of member federal agencies. However, the OEC’s shared infrastructure approach has yet to be approved at the department level. In addition, this approach focuses on coordination with other government agencies and not specifically among DHS components or the law enforcement community, which was an original goal for the IWN program. (The Federal Partnership for Interoperable Communications, which is sponsored by the OEC, is an organization intended to address federal wireless communications interoperability by fostering intergovernmental cooperation and identifying and leveraging common synergies. It includes 44 federal member agencies and approximately 160 participants.) A primary reason that collaboration on a joint communications solution has not been successful and the benefits envisioned by the departments have not been realized is that the departments did not effectively employ key cross-agency collaboration practices.
As we previously mentioned, these practices include defining and articulating a common outcome or purpose, establishing a governance structure, and establishing mutually reinforcing or joint strategies to accomplish a common outcome. For example: The departments have not defined and articulated a common outcome or purpose that overcomes differences in department missions, cultures, and established ways of doing business. Although the departments originally recognized the benefits of collaborating on a joint solution, they allowed differences in priorities and opinions to stall their collaboration efforts. Specifically, DOJ saw IWN as a concept or vision for new development, which would culminate in a nationwide radio communications network for federal law enforcement. DHS, in contrast, considered the IWN contract to be a vehicle for systems integration. In addition, DHS considered improving radio communications around the nation’s borders to be a major priority, while DOJ’s priorities were focused in other areas of the nation. Further, the departments could not agree on the direction that IWN should take after deciding that the design of the pilot would not be appropriate for a nationwide network. DOJ and DHS program officials have both acknowledged that differing priorities led to an inability to resolve conflicts. They further explained that delays in progress and continued deterioration of legacy systems led the departments to independently pursue other solutions. The departments did not establish a collaborative governance structure that includes a management structure, defined roles and responsibilities, and a formalized process for decision making and resolving disputes. Although the departments attempted to establish a joint governance structure, it was not effective at decision making and resolving disputes, and the partnership was discontinued.
Both DOJ and DHS stated that making joint decisions in their original partnership depended on reaching consensus among the departments, and when consensus could not be reached, progress on IWN stalled. The departments did not establish a mutual or joint strategy to align activities and resources to accomplish a common outcome. Despite acknowledging the potential benefits from collaborating on a joint solution, the departments have not produced a strategic or implementation plan that outlines a strategy for developing a joint radio communications solution, whether that solution is IWN or an alternative joint approach. The departments are aware that efforts to collaborate have not been successful. Although they have established three high-level initiatives to address coordination, these initiatives are not focused on implementing a collaborative joint communications solution across DOJ, DHS, and Treasury. Specifically: The three departments signed a new memorandum of understanding in January 2008 that aims at coordinating their joint wireless programs. Although the goals of the current memorandum are similar to those that the departments specified in their 2004 agreement for IWN, DOJ and DHS officials have stated that no progress has been made in re-establishing the joint governance structure outlined by the agreement. In addition, decision-making procedures outlined in the 2008 memorandum—like those in the 2004 agreement—do not clearly define how to overcome barriers faced when consensus cannot be reached among the departments. DOJ and DHS officials agreed that the memorandum serves primarily as a means for facilitating communication among the departments when opportunities and funding are available.
Participation in the Federal Partnership for Interoperable Communications is voluntary for both federal and state entities, coordination occurs on an ad hoc basis, and meeting participants do not necessarily include officials who are in positions to make decisions about their agency’s radio communications programs. As previously described, the DHS OEC’s shared infrastructure approach is intended to explore collaboration through the Federal Partnership for Interoperable Communications and focuses on coordinating radio communications initiatives among federal, state, and local organizations based on operational needs. However, DOJ officials stated that the Federal Partnership for Interoperable Communications serves primarily as a working group of technical staff, while Treasury officials noted that, to date, they have attended the group’s meetings primarily as observers rather than as active participants. Therefore, it is unclear whether this initiative can address the day-to-day mission needs of law enforcement agencies. In accordance with the 21st Century Emergency Communications Act, the Emergency Communications Preparedness Center (ECPC) has been created and is supported by the OEC. The purposes of this group include serving as the focal point for interdepartmental efforts and providing a clearinghouse for relevant information regarding the ability of emergency response providers and relevant government officials to communicate in the event of natural or man-made disasters and acts of terrorism. DHS officials believe that the creation of the ECPC will address collaboration and may be the proper forum for coordinating a joint solution. However, the charter for this organization has not yet been approved.
Although DOJ and Treasury both participate in the Emergency Communications Preparedness Center, DOJ officials noted that this group is focused on emergency communications and response, and it is unclear whether this group can address the day-to-day operational requirements of law enforcement agencies. With DOJ and DHS pursuing independent solutions, it is clear that the departments do not view these initiatives as a means to collaborate on the IWN program and have not defined or committed to an alternative approach to develop a joint communications solution. Without a commitment to collaborate on a joint solution, they will continue to invest significant resources in independent solutions that risk duplication of effort and inefficient use of resources. Further, these stovepipe efforts will not ensure the interoperability needed to serve day-to-day law enforcement operations or to respond to terrorist or other events that require a coordinated response. Despite early progress on the pilot effort, the departments have been unable to sustain development of a joint radio communications solution on their own. As a result, after seven years of effort, they are no longer pursuing IWN as a joint solution and are instead pursuing potentially duplicative and wasteful independent solutions. A primary reason that collaboration on a joint communications solution has failed and the benefits envisioned by the departments have not been realized is that the departments did not effectively employ key cross-agency collaboration practices that could overcome the challenges faced in such programs. Specifically, they lacked a means to overcome differences in missions and cultures, a collaborative governance structure that could make decisions and resolve disputes, and a joint strategy to align activities and resources to achieve a joint solution.
As long as the departments pursue separate initiatives and expend their resources independently, they risk duplication and inefficiency, and may fail to achieve the level of interoperability that is vital for both law enforcement and emergency communications. While successful collaboration on a joint solution is necessary, this joint solution could be based on a single, nationwide network, such as an extension of the original IWN design, or it could be a mutually agreed-upon strategy for developing separate but interoperable networks and systems that incorporate lessons learned from past efforts. Given the importance of collaborating effectively to improve radio communications among federal agencies, reduce costs, and eliminate duplication where possible, and given the departments' failure to develop a joint radio communications solution on their own initiative, congressional action should be considered to ensure that this collaboration takes place. The Congress should consider requiring that the Departments of Justice, Homeland Security, and Treasury collaborate on the development and implementation of a joint radio communications solution. Specifically, Congress should consider requiring the departments to: establish an effective governance structure that includes a formal process for making decisions and resolving disputes; define and articulate a common outcome for this joint effort; and develop a joint strategy for improving radio communications. Congress should also consider specifying deadlines for completing each of these requirements. Agency Comments and Our Evaluation We received comments via e-mail from DOJ and DHS on a draft of these briefing slides. Treasury officials stated that they had no comments on the draft briefing slides.
Officials from DOJ’s Office of the Chief Information Officer disagreed with our findings and conclusions in several areas and expressed concerns that we did not accurately characterize the department’s efforts to collaborate. Officials from DHS’s National Protection and Programs Directorate did not state whether they agreed or disagreed with our findings, but provided suggestions for additional consideration; in addition, DHS officials provided technical comments that we incorporated into the briefing slides, as appropriate. Officials from DOJ’s Office of the Chief Information Officer disagreed with our findings and conclusions in several areas. First, the officials stated that our analysis was flawed and unrealistic in focusing on a single, common project as the best solution for supporting missions, improving interoperability, and achieving cost efficiencies. However, we disagree that our conclusions advocate a single, common project or system as the best solution. We concluded that successful collaboration on a joint solution, whether that solution is IWN or an alternative approach, is necessary to promote efficient use of resources, reduce duplicative efforts, and encourage interoperability. Although a joint solution could be based on a single, nationwide network, such as an extension of the original IWN design, it could also be, for example, a mutually agreed-upon strategy for developing separate but interoperable networks and systems. Accordingly, we have clarified our briefing slides to emphasize that we have not concluded that a single monolithic project or system is the most appropriate joint collaborative solution. Second, the department officials stated that we misrepresented DOJ efforts to work with other agencies, including DHS. 
Specifically, DOJ officials stated that they had tried to reach consensus and compromise with DHS, but DHS leadership had not embraced the concept of a joint program, forcing DOJ to work individually with the DHS components instead of with a single, consolidated program office within the DHS organization. Furthermore, the DOJ officials cited lack of centralized funding at DHS as another key challenge to collaborating with that department. We acknowledge that DOJ took steps to collaborate on IWN, but when the challenges could not be overcome, progress stalled. We recognize the challenges faced in collaborating among departments, and, in particular, the challenges described by DOJ in collaborating with DHS. However, rather than contradicting our conclusions, we believe these facts support our analysis that key practices for collaborating were not established or sustained. Unless such practices are established and sustained, the departments are unlikely to succeed at implementing a joint collaborative solution. Third, DOJ officials also stated that we unfairly characterized the results of the Seattle/Blaine pilot and failed to recognize DHS's lack of contribution to the pilot and its requirements development. However, the pilot and its requirements development occurred prior to DHS's involvement in the program. Further, in our briefing, we note that the Seattle/Blaine pilot afforded several benefits to users in Washington and Oregon, including improving communications in the coverage area and establishing technical solutions for interoperability with state and local organizations. We also agree that the pilot served as a working demonstration and test of the IWN design and that additional participation from DHS might have resulted in additional requirements being met. However, our discussions with users and potential users revealed that the pilot network did not meet many of their needs.
In order to make progress in addressing unmet needs through a joint partnership, it will be important that the departments collaborate on alternative approaches based on lessons learned from this pilot. Finally, DOJ officials also expressed concern that our findings did not address business and operational issues facing IWN, including a lack of adequate funding and the differing missions, priorities, funding structures, and existing capabilities at DHS and DOJ. While we agree that the departments have faced significant challenges, we believe that collaboration on a joint strategy remains critically important. We recognize that the departments have taken initial steps to re-establish coordination, such as signing a revised memorandum of understanding. However, an effective governance structure needs to be implemented before decisions can be made and procedures established for overcoming the differing missions, priorities, funding structures, and capabilities among the departments. We also obtained comments on a draft of this briefing via e-mail from DHS's National Protection and Programs Directorate officials. In these comments, the DHS officials did not state whether they agreed or disagreed with our findings, but they supported the continued development of a joint federal radio communications strategy and stated that more specific guidance was needed. Specifically, DHS identified three elements for inclusion in the development of a joint strategy: expand the partnership to include other federal departments, beyond the law enforcement community, that rely on mission-critical wireless communications; leverage existing infrastructure across all levels of government to ensure cost effectiveness and reduce duplication of effort; and ensure that interoperability is a priority focus beyond the upgrade and modernization focuses of the original IWN concept.
In addition, the department stated that there was a need within DHS to further align authority and resources with responsibility for a joint solution. For example, while the Office of Emergency Communications was given responsibility for IWN, it was not given authority and received only limited resources for the management of the program, and therefore had limited ability to drive stakeholders toward a joint solution. The additional considerations proposed by DHS for inclusion in the joint partnership are consistent with our results, and may merit attention as the partnership develops. DHS officials also provided technical comments on our draft briefing slides, which we have incorporated, as appropriate. Appendix III: Comments from the Department of Homeland Security Appendix IV: Comments from the Department of the Treasury Appendix V: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the individual named above, Linda D. Koontz, Director; John de Ferrari, Assistant Director; Shannin O'Neill; Neil Doherty; Nancy Glover; Nick Marinos; Melissa Schermerhorn; Jennifer Stavros-Turner; and Shaunyce Wallace made key contributions to this report.
The Integrated Wireless Network (IWN) was intended to be a collaborative effort among the Departments of Justice (DOJ), Homeland Security (DHS), and the Treasury to provide secure, seamless, interoperable, and reliable nationwide wireless communications in support of federal agents and officers engaged in law enforcement, protective services, homeland defense, and disaster response missions. GAO was asked to determine the extent to which the three departments are developing a joint radio communications solution. To address this objective, GAO reviewed and analyzed relevant documentation and interviewed department officials about the extent to which they are collaborating with the other departments on IWN or an alternative joint radio communications solution. The Departments of Justice, Homeland Security, and the Treasury had originally intended IWN to be a joint radio communications solution to improve communication among law enforcement agencies; however, IWN is no longer being pursued as a joint development project. Instead of focusing on a joint solution, the departments have begun independently modernizing their own wireless communications systems. While the Departments of Justice and the Treasury (and later the Department of Homeland Security) collaborated on a pilot demonstration of IWN in the Seattle/Blaine area that continues to provide service to multiple agencies, the departments have determined that this specific system design cannot be implemented on a nationwide scale, and they have not acted collaboratively to identify an alternative approach for a jointly coordinated communications solution. In addition, the formal governance structure that was established among the three departments has been disbanded, and the contract for developing a new IWN design, awarded over a year and a half ago, is not being used jointly by the departments for this purpose. 
Currently, the Department of Justice is planning to implement a nationwide network for its component agencies, and the Department of Homeland Security and its components are pursuing numerous independent solutions. A primary reason why the collaboration on a joint communications solution has not been successful is that the departments did not effectively employ key cross-agency collaboration practices. Specifically, they could not agree on a common outcome or purpose to overcome their differences in missions, cultures, and established ways of doing business; they have not established a collaborative governance structure with a process for decision making and resolving disputes; and they have not developed a joint strategy for moving forward. While the Department of Homeland Security considers improving radio communications at the nation's borders to be a major priority, the Department of Justice's priorities are in other areas. Program officials from both departments acknowledged that these differing priorities led to an inability to resolve conflicts. As a result, they now have several initiatives aimed at high-level coordination, none of which are focused on developing a joint communications solution. While department officials have signed an updated memorandum of understanding related to coordinating their radio communications projects, they have not made any progress on reestablishing a joint governance structure and decision-making procedures to address the challenges of collaborating on a joint communications solution. In abandoning collaboration on a joint solution, the departments risk duplication of effort and inefficient use of resources as they continue to invest significant resources in independent solutions. Further, these efforts will not ensure the interoperability needed to serve day-to-day law enforcement operations or a coordinated response to terrorist or other events.
Background According to ONDCP and other officials in the interagency counternarcotics community, the 2,000-mile U.S.-Mexico land border presents numerous challenges to preventing illicit drugs from reaching the United States. Apart from 43 legitimate crossing points, the border consists of hundreds of miles of open desert, rugged mountains, the Rio Grande, and other physical impediments to surveillance, making it easy to smuggle illegal drugs into the United States. Since the 1970s, the United States has collaborated with and provided assistance to Mexico for counternarcotics programs and activities. The goal over the years has been to disrupt the market for illegal drugs, making it more difficult for traffickers to produce and transport illicit drugs to the United States. Specifically, the United States has provided Mexico with assistance for a range of projects, including interdicting cocaine shipments from South America; stemming the production and trafficking of opium poppy, as well as marijuana; and, more recently, controlling precursor chemicals used to manufacture methamphetamine. In the past, Mexico has chosen to combat drug trafficking with reduced assistance from the United States, and Mexico's sensitivity about national sovereignty has made it difficult for the two countries to coordinate counternarcotics activities. However, beginning in the mid-1990s, cooperation began to improve, culminating in 1998 in the signing of a Bi-National Drug Control Strategy. Since then, the two countries have continued to cooperate through meetings of a U.S.-Mexico Senior Law Enforcement Plenary, among other contacts. Illicit Drug Production and Trafficking by Mexican Drug Organizations Have Continued Virtually Unabated Mexico is the conduit for most of the cocaine reaching the United States, the source for much of the heroin consumed in the United States, and the largest foreign supplier of marijuana and methamphetamine to the U.S. market. According to U.S.
and Mexican estimates, which vary from year to year, more cocaine flowed toward the United States through Mexico during 2006 than in 2000, and more heroin, marijuana, and methamphetamine were produced in Mexico during 2005 than in 2000. In addition, although reported seizures of these drugs within Mexico and along the U.S. southwest border generally increased, according to the U.S. interagency counternarcotics community, seizures have been a relatively small percentage of the estimated supply. As we have reported previously, acknowledged shortcomings in the illicit drug production and seizure data collected and reported by various U.S. government agencies mean that the data cannot be considered precise. However, they can provide an overall indication of the magnitude and nature of the illicit drug trade. Based on the available data, the following describes the trends since 2000 on the amount of cocaine arriving in Mexico for transshipment to the United States; the amounts of heroin and marijuana produced in Mexico; and reported seizures of these illicit drugs and methamphetamine in Mexico and along the U.S.-Mexico border. (See app. I for a more detailed table of the data.) Cocaine: Virtually all the cocaine consumed in the United States is produced along the South American Andean ridge—primarily in Colombia. The U.S. interagency counternarcotics community prepares an annual assessment (the Interagency Assessment of Cocaine Movement, or IACM) that, among other things, estimates the amount of cocaine departing South America toward the United States. From 2000 to 2006, the IACM reported an increase in the estimated share of cocaine flowing through Mexico to the United States—from 66 percent in 2000 to 77 percent in 2003 and 90 percent in 2006. Between 2000 and 2002, the amount of cocaine estimated to be arriving in Mexico rose about 23 percent—from 220 to 270 metric tons. In 2003, it declined by over 60 metric tons, or about 22 percent.
For 2004-2006, the IACM did not provide “point” estimates for cocaine flow because of certain methodological concerns; rather, a range was provided for each year. The midpoint of the IACM range of cocaine estimated to be arriving in Mexico during 2006 (about 380 metric tons) was about 160 metric tons more than the estimate for 2000. Using the midpoint of the IACM ranges, the amount of cocaine estimated to be arriving in Mexico during 2000-2006 averaged about 290 metric tons per year. Despite the apparent increases in cocaine arriving in Mexico, the amount of cocaine reported seized in Mexico and along the U.S.-Mexico border for 2000-2006 did not increase proportionately, with 43 metric tons reported seized in 2000, a low of 28 metric tons seized in 2003, and a high of 44 metric tons in 2005. Reported seizures for 2000-2006 averaged about 36 metric tons a year, or about 13 percent of the estimated amount of cocaine arriving in Mexico. Heroin: During 2000-2005, the estimated amount of heroin produced for export in Mexico averaged almost 19 metric tons a year—ranging from a low of 9 metric tons in 2000 to a high of 30 metric tons in 2003. Although the estimated amount of heroin produced declined in 2004 and 2005, the 2005 estimate (17 metric tons) was nearly double the estimated amount produced in 2000. Reported heroin seizures in Mexico and along the U.S.-Mexico border averaged less than 1 metric ton a year, or less than 5 percent of the estimated export-quality heroin produced in Mexico between 2000 and 2005. Marijuana: During 2000-2005, the estimated amount of marijuana produced in Mexico each year averaged about 9,400 metric tons—ranging from a low of 7,000 metric tons in 2000 to a high of 13,500 metric tons in 2003. Although estimated production declined to 10,100 metric tons in 2005, this was over 3,000 metric tons more than the estimated production in 2000.
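The seizure-share percentages cited above follow from simple division of the reported annual averages. As a rough consistency check, the arithmetic can be sketched as follows; the inputs are the report's rounded averages, so the results only approximate the report's own figures, which are derived from unrounded annual data:

```python
# Rough consistency check using the report's rounded annual averages
# (metric tons per year). Rounded inputs mean the computed shares only
# approximate the report's published percentages.

cocaine_arriving_avg = 290  # midpoint of IACM ranges, 2000-2006 average
cocaine_seized_avg = 36     # reported seizures in Mexico and along the border

heroin_produced_avg = 19    # export-quality heroin, 2000-2005 average
heroin_seized_avg = 1       # "less than 1 metric ton" a year

cocaine_share = 100 * cocaine_seized_avg / cocaine_arriving_avg
heroin_share = 100 * heroin_seized_avg / heroin_produced_avg

print(f"cocaine seized: about {cocaine_share:.1f} percent of estimated arrivals")
print(f"heroin seized: under {heroin_share:.1f} percent of estimated production")
```

Run as written, this yields roughly 12.4 percent for cocaine and an upper bound of about 5.3 percent for heroin, consistent with the report's "about 13 percent" and "less than 5 percent" once unrounded annual figures are used.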
Reported seizures of marijuana in Mexico and along the U.S.-Mexico border ranged from about 2,150 metric tons in 2000 to nearly 3,500 metric tons in 2003—averaging less than 2,900 metric tons a year, or about 30 percent of the annual production estimates. Methamphetamine: Neither the United States nor the government of Mexico prepares estimates of the amount of methamphetamine produced in Mexico. However, U.S. officials told us that the large increases in reported methamphetamine seizures from 2000 through 2006 point to significantly greater amounts being manufactured. On the basis of the reported data, seizures along the U.S.-Mexico border rose more than five times—from an estimated 500 kilograms in 2000 to almost 2,900 kilograms in 2004 and over 2,700 kilograms in 2006. Corruption Persists within the Mexican Government In 2001, State reported that pervasive corruption within the government of Mexico was the greatest challenge facing Mexico's efforts to curb drug trafficking. Since then, State has reported on the Mexican government's efforts to reduce corruption. Nevertheless, increasing illicit drug proceeds from the United States—estimated by the National Drug Intelligence Center at between $8 billion and $23 billion in 2005—have afforded Mexican DTOs considerable resources to subvert government institutions, particularly at the state and local level. U.S. and Mexican government officials and various other observers, including academics, investigative journalists, and nongovernmental organizations that study drug trafficking trends in Mexico, told us that profits of such magnitude enable drug traffickers to bribe law enforcement and judicial officials. Since 2000, Mexico has undertaken several initiatives to address corruption. For instance, in 2001, when Mexican authorities created the Federal Investigative Agency (AFI) in the Mexican Attorney General's Office, they disbanded the Federal Judicial Police, which was widely considered corrupt.
Mexico also conducted aggressive investigations into public corruption, resulting in the arrest and prosecution of officials, as well as the dismissal and suspension of others. Despite these actions, corruption remains a major factor complicating efforts to fight organized crime and combat drug trafficking. U.S. and some Mexican law enforcement agents told us that in certain parts of the country, they do not have vetted counterparts to work with. Moreover, AFI represents only about one-third of Mexico’s estimated 24,000 federal law enforcement officials. According to U.S. officials, the majority—about 17,000—belong to the Federal Preventive Police, whose personnel are not subject to the same requirements as those of AFI for professional selection, polygraph and drug testing, and training. Partly to address the problem of corruption, Mexican President Felipe Calderón’s government has begun to consolidate various federal civilian law enforcement entities into one agency and triple the number of trained, professional federal law enforcement officers subject to drug, polygraph, and other testing. This initiative will combine AFI and the Federal Preventive Police, along with officers from other agencies, into one agency known as the Federal Police Corps, which would operate in cities and towns of more than 15,000 people. However, this initiative will not affect the vast majority of Mexico’s law enforcement officials, most of whom are state and local employees and who, according to one source, number approximately 425,000. Mexican DTOs Control Drug Trafficking in Mexico and Have Extended Their Reach into the United States According to the Drug Enforcement Administration (DEA), four main DTOs control the illicit drug production and trafficking in Mexico and operate with relative impunity in certain parts of the country: The Federation, which operates from the Mexican state of Sinaloa, is an alliance of drug traffickers that U.S. 
and Mexican officials told us may have the most extensive geographic reach in Mexico. The Tijuana Cartel, also known as the Arellano Felix Organization after its founder, operates from the border city of Tijuana in the Mexican state of Baja California. Its activities center in the northwestern part of Mexico, where, according to local investigative journalists and U.S. officials, it exerts considerable influence over local law enforcement and municipal officials. The Juarez Cartel is based in Ciudad Juarez, in the border state of Chihuahua. According to DEA officials, the Juarez Cartel has extensive ties to state and local law enforcement officials. The Gulf Cartel operates out of Matamoros on the Gulf of Mexico, in the border state of Tamaulipas. According to DEA officials, the Gulf Cartel has infiltrated the law enforcement community throughout Tamaulipas, including the border city of Nuevo Laredo, which is a principal transit point for commercial traffic to the United States. The Gulf Cartel has also employed a criminal gang referred to as the Zetas, which is primarily composed of rogue former Mexican military commandos that are known for their violent methods. According to DEA and other U.S. officials, in recent years Mexican DTOs have taken over the transportation of cocaine shipments from South America previously managed by Colombians. In addition, according to the National Drug Threat Assessment, Mexican DTOs have expanded their presence in drug markets throughout the United States, moving into cities east of the Mississippi River previously dominated by Colombian and Dominican drug traffickers. According to National Drug Intelligence Center officials, Mexican DTOs tend to be less structured in the United States than in Mexico, but have regional managers throughout the country, relying on Mexican gangs to distribute illicit drugs. Further, DTOs are becoming more sophisticated and violent. 
With significant resources at their disposal, Mexican DTOs are developing more sophisticated drug trafficking methods to evade U.S. maritime detection and interdiction efforts, such as using elaborate networks of go-fast boats and refueling vessels. According to Justice officials and documents, Mexican drug traffickers are also taking advantage of advances in cell phone and satellite communications technology, which have allowed them to quickly communicate and change routes once they suspect their plans have been compromised. In addition, the traffickers have begun making more use of tunnels under the U.S.-Mexico border—another indication of the increasing sophistication of DTO operations. From 2000 to 2006, U.S. border officials found 45 tunnels—several built primarily for narcotics smuggling. According to U.S. officials, tunnels found in the last 6 years are longer and deeper than in prior years. Drug-related violence in Mexico has continued to increase in recent years. President Calderón highlighted the importance of improving public security by punishing crime, and the administration of former President Vicente Fox (2000-2006) actively targeted major drug kingpins. While this strategy does not appear to have significantly reduced drug trafficking in Mexico, it disrupted the cartels' organizational structure, presenting rival organizations with opportunities to gain control of important transit corridors leading to the United States, such as Nuevo Laredo. Such struggles led to increased violence throughout Mexico, with drug-related deaths estimated at over 2,000 in 2006. This trend has continued in 2007, with drug-related deaths estimated at over 1,100 as of June 2007. In addition, an increasing number of drug-related incidents targeting law enforcement officers and government officials have been documented in Mexico.
For example, in May 2007, the newly appointed head of Mexico’s drug intelligence unit in the Attorney General’s office was shot and killed in a street ambush in Mexico City. Journalists have also been targeted as a result of investigative articles written about DTO activities. Due to the risks associated with reporting on narco-trafficking, Mexico was recently ranked as the second most dangerous country in the world for journalists, after Iraq. U.S. Assistance Helped Mexico Improve Its Counternarcotics Efforts, but Coordination Can Be Improved Table 1 depicts U.S. assistance to support counternarcotics-related programs and activities in Mexico during fiscal years 2000 through 2006. Other U.S. agencies also supported Mexican counternarcotics activities, but did not provide funding. State’s Bureau for International Narcotics and Law Enforcement Affairs (INL) funds supported the purchase of a wide range of items and activities, including scanning machinery for security purposes at ports and border crossings; vehicles, computers, software, and other equipment used to improve Mexico’s law enforcement infrastructure; interdiction and eradication initiatives; aircraft and related equipment and maintenance; training for Mexican law enforcement and judicial officials; and other programs designed to promote U.S. counternarcotics goals. DEA’s funding primarily supported field offices throughout Mexico, from which DEA agents coordinated bilateral cooperation with Mexican federal, state, and local law enforcement officials, allowing both countries to collect drug intelligence, conduct investigations, prosecute drug traffickers, and seize assets. Defense supported programs designed to detect, track, and interdict aircraft and maritime vessels suspected of transporting illicit drugs—primarily cocaine from South America. Last, USAID’s funding for Mexico promoted reform of Mexico’s judicial system at the state level, as well as government transparency, which broadly supports U.S. 
counternarcotics objectives. According to the U.S. embassy in Mexico, one of its primary goals is to help the Mexican government combat transnational crimes, particularly drug trafficking. Over the years, U.S. assistance has supported four key strategies: (1) to apprehend and extradite drug traffickers, (2) to counter money laundering by seizing the assets of DTOs, (3) to strengthen the application of the rule of law, and (4) to interdict or disrupt the production and trafficking of illicit drugs. Since 2000, U.S. assistance has contributed to some progress in each of these areas but has not significantly cut into drug trafficking, and Mexico and the United States can improve cooperation and coordination in some areas. Extraditions of Mexican Drug Traffickers Have Increased In January 2007, the administration of President Calderón extradited several high-level drug kingpins, such as Osiel Cardenas, the head of the Gulf Cartel, whose extradition had long been sought by U.S. authorities. U.S. officials cited Mexico's decision to extradite Cardenas and other drug kingpins as a major step forward in cooperation between the two countries and expressed optimism about the prospects for future extraditions. As shown in table 2, extraditions progressed gradually through 2005, but increased more than 50 percent in 2006 and through mid-October 2007. Efforts to Counter Money Laundering Are Progressing In 2002, Immigration and Customs Enforcement (ICE) and DEA supported Mexican authorities who established a vetted unit within AFI for investigating money laundering, consisting of about 40 investigators and prosecutors. These AFI officials collaborated with ICE on money laundering and other financial crime investigations and developed leads. With funding provided by the Narcotics Affairs Section (NAS), ICE developed several training initiatives for Mexican law enforcement personnel targeting bulk cash smuggling via commercial flights to other Latin American countries.
From 2002 to 2006, in collaboration with ICE, Mexican Customs and AFI's money laundering unit seized close to $56 million in illicit cash, primarily at Mexico City's international airport. In 2004, the Mexican Congress passed financial reform legislation as part of a comprehensive strategy to prevent and combat money laundering and terrorist financing. In May of that year, the Financial Intelligence Unit under Mexico's Treasury Secretariat brought together various functions previously undertaken by different Treasury Secretariat divisions with the goal of detecting and preventing money laundering and terrorist financing. To support these efforts, NAS provided over $876,000 to purchase equipment and refurbish office space for the Financial Intelligence Unit. Since 2004, the Financial Intelligence Unit has established closer monitoring of money service businesses and financial transactions. According to Financial Intelligence Unit officials, this resulted in the seizure of millions of dollars. U.S. Treasury officials noted improvements in the level of cooperation with Mexican authorities under the Fox administration. For example, they highlighted how the Financial Intelligence Unit began issuing accusations against individuals named on Treasury's Office of Foreign Assets Control's (OFAC) Specially Designated Nationals and Blocked Persons list of drug kingpins and suspected money launderers. These accusations were forwarded to the Mexican Attorney General's Office for possible legal action. Treasury officials also expressed optimism that continued collaboration with Mexican authorities under the Calderón administration would lead to more aggressive action on asset forfeitures. DEA also works closely with AFI to identify the assets of Mexican DTOs. In March and April 2007, DEA provided asset forfeiture and financial investigative training to the newly formed Ad Hoc Financial Investigative Task Force in Mexico's Attorney General's Office.
In March 2007, DEA efforts in an investigation of chemical control violations resulted in the seizure of $207 million in currency at a residence in Mexico City. In another investigation, DEA assistance led Mexican authorities to seize in excess of $30 million in assets from a designated kingpin and his DTO. DEA officials share Treasury's optimism that continued collaboration with Mexican authorities will lead to significant seizures of drug trafficking assets. USAID, DEA, INL, and Other U.S. Agencies Support Mexico's Rule-of-Law Efforts As part of its rule-of-law portfolio in Mexico, USAID has promoted criminal justice reforms at the state level since 2003. The criminal procedures system that prevails in Mexico today is based on the Napoleonic inquisitorial written model, with judges working independently using evidence submitted in writing by the prosecution and defense to arrive at a ruling. According to U.S. officials, this system has been vulnerable to the corrupting influence of powerful interests, particularly criminal organizations. To promote greater transparency in judicial proceedings, USAID has supported initiatives to introduce adversarial trials in Mexico. Such trials entail oral presentation of prosecution and defense arguments before a judge in a public courtroom. USAID officials explained that, because this system is open to public scrutiny, it should be less vulnerable to corruption. To date, USAID has provided technical assistance to 14 Mexican states to implement criminal justice reforms, including oral trials. U.S. agencies have also pursued legal and regulatory reforms related to precursor chemicals used in the production of methamphetamine in Mexico. Specifically, the United States has encouraged the government of Mexico to implement import restrictions on methamphetamine precursor chemicals and impose stricter controls on the way these substances are marketed and sold once in Mexico.
In 2004, the Mexican Federal Commission for the Protection against Sanitary Risk (COFEPRIS) conducted a study that revealed an excess of imports of pseudoephedrine products into Mexico. Subsequently, Mexico implemented several controls on pseudoephedrine. In 2005, COFEPRIS officials reduced legal imports of pseudoephedrine by over 40 percent—from 216 metric tons in 2004 to about 132 metric tons. In 2006, pseudoephedrine imports were further reduced to 70 metric tons. According to ONDCP, as of mid-October 2007, Mexico had reduced its imports of pseudoephedrine to 12 metric tons. U.S. Support for Mexican Interdiction Efforts Has Helped, but Improvements Are Needed The fourth strategy under the embassy's counternarcotics goal is to support Mexican efforts to interdict illicit drugs. U.S. assistance has provided for (1) infrastructure upgrades for law enforcement entities; (2) professional training for law enforcement and judicial personnel; (3) military coordination, particularly for maritime interdiction and surveillance; and (4) aviation support for interdiction and surveillance. Overall, these U.S.-supported programs have strengthened Mexican counternarcotics efforts, but areas for improvement remain, particularly regarding cooperation and coordination with Mexican counternarcotics agencies and the provision of U.S. aviation support. Infrastructure Upgrades and Equipment From 2000 to 2006, a significant share of INL's assistance to Mexico—about $101 million of nearly $169 million—supported the embassy's interdiction strategy for Mexico through the purchase of equipment to enhance border security measures and upgrade the infrastructure of various Mexican law enforcement entities. 
In October 2001, when the Fox administration created AFI under the jurisdiction of the Attorney General's Office, NAS provided infrastructure and equipment for counternarcotics operations, including computer servers, telecommunications data processing hardware and software, systems for encrypting telecommunications, telephone systems, motorcycles, and a decontamination vehicle for dismantling methamphetamine processing labs. In addition, NAS funded the renovation of a building where AFI staff were located, as well as the construction of a state-of-the-art network for tracking and interdicting drug trafficking aircraft. According to State reports, since 2001, AFI has figured prominently in investigations, resulting in the arrests of numerous drug traffickers and corrupt officials, and has become the centerpiece of Fox administration efforts to transform Mexican federal law enforcement entities into effective institutions. In July 2003, the Mexican Attorney General's Office reorganized its drug control planning capacity under the National Center for Analysis, Planning and Intelligence (CENAPI). According to INL, NAS also equipped CENAPI with a state-of-the-art computer network for collecting, storing, and analyzing crime-related information. CENAPI analysts noted that software provided by NAS allowed them to process large volumes of data—including background files on more than 30,000 criminals—and make considerable progress in investigations of unsolved crimes. In 2005, NAS provided computer equipment for COFEPRIS to monitor imports of methamphetamine precursor chemicals at major international points of entry into Mexico. This complemented efforts by the United Nations Office on Drugs and Crime to enhance COFEPRIS's capabilities to track shipments and imports of precursor chemicals and controlled medicines through a National Drug Control System database. 
NAS also funded the procurement of nonintrusive inspection equipment for Mexican customs officials to scan container trucks, railroad cars, and other cargo containers for illicit contraband at Mexican ports and the border. Such border security measures also support counternarcotics efforts, since drug traffickers are known to exploit opportunities provided by legitimate U.S.-Mexico cross-border trade to smuggle illicit drugs. Border security funding was also used to enhance “secure rapid inspection lanes” at six U.S.-Mexico border crossings. In addition to support provided by NAS, Justice’s DEA provided specialized equipment to the Attorney General’s Office and other Mexican law enforcement entities to allow them to detect and properly handle hazardous materials at clandestine methamphetamine laboratories. This included safety suits required for clandestine lab cleanups, evidence containers, and drug-testing chemical kits. DEA also donated eight specially designed vehicles to handle toxic chemicals typically found at facilities where methamphetamine is produced. These trucks were recently refurbished and will be based at locations throughout Mexico where a large number of methamphetamine labs are suspected of operating. Law Enforcement and Judicial Personnel Training U.S. agencies have sought to strengthen Mexico’s interdiction capabilities through training for Mexican law enforcement, judicial, and military personnel. According to State, the overall purpose of this training is to help Mexican police personnel and prosecutors combat more effectively all transnational crimes affecting U.S. interests, including drug trafficking and money laundering. NAS has taken the lead in funding such training, and courses are typically taught by U.S. law enforcement agencies and various contractors in Mexico and the United States. From 2000 through 2006, NAS provided approximately $15 million for such training. 
DEA has also funded some training for members of its vetted units, and Defense has provided training for Mexican military officials. According to U.S. and Mexican officials, this training was an integral part of the Mexican Attorney General's efforts to develop a professional cadre of investigative agents within AFI, and it also supported more general efforts by the Fox administration to upgrade the capabilities and ethical awareness of Mexican law enforcement officials at the federal, state, and local levels. By 2006, the United States had supported training for over 2,000 federal, state, and local law enforcement officials, with a goal of training 2,000 more in 2007. Interdiction Cooperation and Coordination Can Be Improved From 2000 to 2006, Defense spent a total of about $58 million on equipment and training for the Mexican military, particularly to help the Mexican Navy interdict aircraft and vessels suspected of transporting illicit drugs. During the same period, Defense provided training for about 2,500 Mexican military personnel in the use of certain kinds of equipment, as well as training to enable them to coordinate with U.S. aircraft and vessels. The training provided was designed to strengthen the Mexican military's ability to detect, monitor, and interdict suspected drug trafficking aircraft and vessels, as well as help professionalize Mexico's military and improve relations between the U.S. and Mexican militaries. Defense initiatives have facilitated coincidental maritime operations between the United States and Mexico that have resulted in greater cooperation between the two countries, particularly with respect to boarding, searching, and seizing suspected vessels transiting Mexican waters. In recent years, the Mexican Navy has regularly responded to U.S. information on suspect vessels transiting Mexican waters—46 times in 2006, for example. 
In addition, the Mexican Navy agreed on several occasions to temporarily place Mexican liaison officers aboard U.S. Coast Guard vessels and to place U.S. Coast Guard officers aboard Mexican vessels. The Mexican Navy also permitted U.S. law enforcement personnel to participate in some dockside searches and post-seizure analyses. However, the United States and Mexico have not agreed to a bilateral maritime cooperation agreement that would allow U.S. law enforcement personnel to board and search Mexican-flagged vessels suspected of trafficking illicit drugs on the high seas without asking the government of Mexico for authority to board on a case-by-case basis. According to Defense officials, a request to board and search a suspicious Mexican-flagged vessel—or one whose captain reports it as Mexican-registered—can be complex and time-consuming, involving, at a minimum, the Foreign Affairs Secretariat as well as the Mexican Navy. Waiting for approval or the arrival of the Mexican Navy typically creates delays, which can result in the loss of evidence as the illicit drugs are thrown overboard or the vessel is scuttled or escapes. Moreover, while the Mexican Navy has proved willing to respond to U.S. information on suspicious vessels transiting Mexican waters, according to Defense officials, the Mexican Navy does not normally conduct patrols more than 200 nautical miles from shore. In addition, according to embassy and Defense officials, Defense has little contact with Mexico's Defense Secretariat (SEDENA), which oversees the Mexican Army and Air Force. According to these officials, the Mexican Army has conducted counternarcotics operations throughout Mexico, including in Acapulco, Nuevo Laredo, and Tijuana, to reduce the violence caused by drug trafficking, and it manually eradicates opium poppy and marijuana. But, according to Defense officials, none of these efforts took advantage of U.S. expertise or intelligence. 
In the past, some eradication efforts were also done by the Mexican Attorney General’s Office, which worked with its U.S. counterparts. Now, however, the Calderón administration plans to consolidate all eradication efforts under SEDENA, which makes greater cooperation with SEDENA all the more important. In addition, from 2001 until late 2006, Customs and Border Protection (CBP) provided eight Citation jets for detection and monitoring of suspected drug trafficking aircraft along the U.S.-Mexican border under a program known as Operation Halcon in cooperation with AFI. According to CBP officials, in recent years Operation Halcon was a successful interdiction effort that helped prevent drug traffickers from flying aircraft near the U.S.-Mexico border, which made it more difficult to transport illicit drugs to the United States. They also noted that CBP and AFI personnel worked very closely and one CBP official worked full time at the AFI Command Center. Moreover, CBP officials maintained that the embassy infrastructure, operational staffing, and relationships developed under Halcon provided critical daily interface with the Mexican authorities, facilitating quick responses to operational needs along the border and the sharing of intelligence. Overall, in 2005, between 15 and 25 percent of the 294 suspect aircraft identified by Operation Halcon resulted in seizures of aircraft and other vehicles or arrests. In March 2006, the United States sought to formalize Operation Halcon to limit liability for U.S. pilots involved in the patrols in the event of an accident. However, the Mexican government did not respond with terms acceptable to CBP, and in November 2006, the government of Mexico suspended the program. As a result, U.S. embassy officials said that fewer suspect flights are being identified and interdicted. 
According to CBP officials, since the suspension, seizures of illicit drugs along the U.S.-Mexico border have increased, and this, according to DEA, CBP, and other officials, is an indication that more drugs are finding their way to northern Mexico. U.S. Aviation Support for Interdiction Can Be Better Coordinated From 2000 to 2006, NAS provided about $22 million, or 13 percent of INL's obligations for Mexico, to support aviation programs for counternarcotics efforts by the Attorney General's Office and one program for the Mexican Air Force. Since 1990, NAS has provided 41 Vietnam-era UH-1H helicopters, of which 28 remain in service, to the Mexican Attorney General's Office to transport law enforcement personnel interdicting drug trafficking aircraft landing in Mexico. Since 2000, NAS has expended $4.5 million to refurbish 8 of the aircraft. According to State, the aircraft have served as the transportation workhorse for the Attorney General's air services section, flying a total of approximately 14,000 hours from 2001 to 2006. However, according to the embassy, the UH-1H program did not meet its target of interdicting 15 percent of all aircraft detected in the transport of illicit drugs and crops—in 2005, 4 percent were interdicted. In addition, the helicopters' readiness rates have progressively declined from about 90 percent in January 2000 to 33 percent in January 2007. NAS and Mexican officials attributed the reduced readiness rates to a lack of funding and a lack of spare parts for these aging aircraft, which Defense will stop supporting in 2008. In January 2007, NAS officials told us that State/INL does not intend to provide any further support for the UH-1Hs. Beginning in 2004, NAS provided the Attorney General's Office 12 Schweizer 333 helicopters, of which 11 remain operational. The total expended for these helicopters was $14.2 million, which included a 2-year support package. 
Equipped with forward-looking infrared sensors for nighttime operations as well as television cameras, the Schweizers are designed to provide the Attorney General’s Office with a reconnaissance, surveillance, and command and control platform. According to State officials, the Schweizers were used in Nuevo Laredo and other locations, providing support for surveillance operations, flying a total of approximately 1,750 hours from September 2004 to February 2007. Originally, NAS had planned to provide 28 Schweizers, deploying them to various points throughout Mexico. However, according to State officials, due to funding limitations and changed priorities, NAS capped the number at 12. In addition, Mexican Attorney General officials told us that they would have preferred a helicopter with both a surveillance capability and troop transport capacity. From 2000 to 2006, NAS also expended about $4.2 million to repair, maintain, and operate four C-26 aircraft provided by the United States to Mexico in 1997. The aircraft did not originally come equipped with a surveillance capability, and the Mexican Air Force had indicated it had no plans to invest in the necessary equipment. In 1998, we reported that the Mexican Air Force was not using the aircraft for any purpose. After Mexico upgraded these aircraft with forward-looking infrared radar in 2002, NAS funded maintenance of the aircraft and sensors, as well as training for sensor operators and imagery analysts. Part of the NAS funding was also used to provide contractor logistical support, including spare parts. Southwest Border Strategy’s Implementation Plan In March 2006, ONDCP, in conjunction with the National Security Council and other agencies involved in the U.S. interagency counternarcotics community, developed a Southwest Border Strategy to help reduce the flow of illicit drugs entering the United States across the southwest border with Mexico. 
The stated objectives of the strategy, which we reviewed in June 2007, were to enhance and better coordinate the collection of intelligence; effectively share information, when appropriate, with Mexican officials; investigate, disrupt, and dismantle Mexican DTOs; interdict drugs and other illicit cargo by denying entry by land, air, and sea routes; deny drug traffickers their profits by interdicting bulk currency movements and electronic currency transfers; enhance Mexico’s counterdrug capabilities; and reduce the corruption that facilitates illicit activity along and across the border. In addition, a plan was developed to implement the strategy. As of August 2007, ONDCP officials told us that the implementation plan was being revised to respond to the Calderón administration’s new initiatives. On October 2, 2007, the Director of ONDCP released a summary of the strategy that referred to the implementation plan. According to the summary, the implementation plan lays out the desired end state, estimated resource requirements, action plan, and metrics for each of the seven objectives in the strategy. Conclusions U.S. counternarcotics assistance to Mexico since 2000 has helped Mexico strengthen its law enforcement institutions and capacity to combat illicit drug production and trafficking. However, overall, the flow of illicit drugs to the United States has not abated, and U.S. and Mexican authorities have seized only a relatively small percentage of the illicit drugs estimated transiting through or produced in Mexico. Moreover, reducing drug-related corruption remains a challenge for the Mexican government, and Mexican DTOs have increasingly become a threat in both Mexico and the United States. Mexican officials have recognized the increasing threat and indicated that combating the illicit drug threat in cooperation with the United States is a priority. 
As we noted in our recent report, the Calderón administration has signaled an interest in working with the United States to reduce drug production and trafficking. At the time, to respond to the Calderón administration's initiatives, ONDCP and the U.S. counternarcotics community were revising the Southwest Border Strategy's implementation plan to emphasize greater cooperation with Mexico. We recommended that the Director of ONDCP, as the lead agency for U.S. drug policy, in conjunction with the cognizant departments and agencies in the U.S. counternarcotics interagency community, coordinate with the appropriate Mexican officials before completing the Southwest Border Strategy's implementation plan to help ensure Mexico's cooperation with any efforts that require it and address the cooperation issues we identified. ONDCP concurred with the recommendation and has since assured us that the interagency counternarcotics community is actively engaged with its Mexican counterparts. In commenting on our report, ONDCP emphasized that the Southwest Border Strategy's implementation plan must be a living document with the flexibility to adjust as resources become available. Mr. Chairman and members of the subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. Contact and Staff Acknowledgements For questions regarding this testimony, please contact Jess T. Ford at (202) 512-4268 or [email protected]. Albert H. Huntington, III, Assistant Director; Joe Carney; and José M. Peña, III made key contributions in preparing this statement. Appendix I: Estimated Amounts of Illicit Drugs Transiting or Produced in Mexico and Seized, Calendar Years 2000-2006 [Table: cocaine arriving in Mexico for transshipment to the United States, estimated at 260 to 460 and 300 to 460 metric tons; remaining table detail not recoverable.] The Drug Enforcement Administration's El Paso Intelligence Center (and the IACM) defines drug seizures at the U.S. 
southwest border to include seizures at the U.S.-Mexico border or within 150 miles on the U.S. side of the border, including 88 border counties in Arizona, California, New Mexico, and Texas. This estimate does not include heroin that is produced in Colombia and may transit Mexico on the way to the United States.
The overall goal of the U.S. National Drug Control Strategy, which is prepared by the White House Office of National Drug Control Policy (ONDCP), is to reduce illicit drug use in the United States. One of the strategy's priorities is to disrupt the illicit drug marketplace. To this end, since fiscal year 2000, the United States has provided about $397 million to support Mexican counternarcotics efforts. According to the Department of State (State), much of the illicit drugs consumed in the United States flows through or is produced in Mexico. GAO examined (1) trends in Mexican drug production and trafficking since calendar year 2000 and (2) U.S. counternarcotics support for Mexico since fiscal year 2000. This testimony is based on a recently issued report (GAO-07-1018) that addresses these issues. According to the U.S. interagency counternarcotics community, hundreds of tons of illicit drugs flow from Mexico into the United States each year, and seizures in Mexico and along the U.S. border have been relatively small in recent years. The following illustrates some trends since 2000: (1) The estimated amount of cocaine arriving in Mexico for transshipment to the United States averaged about 290 metric tons per year. Reported seizures averaged about 36 metric tons a year. (2) The estimated amount of export quality heroin and marijuana produced in Mexico averaged almost 19 metric tons and 9,400 metric tons per year, respectively. Reported heroin seizures averaged less than 1 metric ton and reported marijuana seizures averaged about 2,900 metric tons a year. (3) Although an estimate of the amount of methamphetamine manufactured in Mexico is not prepared, reported seizures along the U.S. border rose from about 500 kilograms in 2000 to highs of about 2,800 kilograms in 2005 and about 2,700 kilograms in 2006. According to U.S. officials, this more than fivefold increase indicated a dramatic rise in supply. 
In addition, according to State, corruption persists within the Mexican government and challenges Mexico's efforts to curb drug production and trafficking. Moreover, Mexican drug trafficking organizations operate with relative impunity along the U.S. border and in other parts of Mexico, and have expanded their illicit business to almost every region of the United States. U.S. assistance since fiscal year 2000 has helped Mexico strengthen its capacity to combat illicit drug production and trafficking. Among other things, extraditions of criminals to the United States increased; thousands of Mexican law enforcement personnel were trained; and controls over chemicals to produce methamphetamine were strengthened. Nevertheless, cooperation with Mexico can be improved. The two countries do not have an agreement permitting U.S. law enforcement officers to board Mexican-flagged vessels suspected of transporting illicit drugs on the high seas; an aerial monitoring program along the U.S. border was suspended because certain personnel status issues could not be agreed on; State-provided Vietnam-era helicopters have proved expensive and difficult to maintain and many are not available for operations; and a State-supported border surveillance program was cut short due to limited funding and changed priorities. In 2006, in response to a congressional mandate, ONDCP and other agencies involved in U.S. counternarcotics efforts developed a strategy to help reduce the flow of illicit drugs entering the United States from Mexico. An implementation plan was prepared but is being revised to address certain initiatives recently undertaken by Mexico. Based on our review of the plan, some proposals require the cooperation of Mexico, but, according to ONDCP, they had not been addressed with Mexican authorities at the time of our review.
Background NFIP Overview In 1968, Congress created the National Flood Insurance Program (NFIP) to address the increasing cost of federal disaster assistance by providing flood insurance to property owners in flood-prone areas, where such insurance was either not available or prohibitively expensive. The 1968 law also authorized premium subsidies to encourage community and property owner participation. To participate in the program, communities must adopt and agree to enforce floodplain management regulations to reduce future flood damage. In exchange, federally backed flood insurance is offered to residents in those communities. NFIP was subsequently modified by various amendments to strengthen certain aspects of the program. The Flood Disaster Protection Act of 1973 made the purchase of flood insurance mandatory for properties in special flood hazard areas (SFHAs) that are secured by mortgages from federally regulated lenders. This requirement expanded the overall number of insured properties, including those that qualified for subsidized premiums. The National Flood Insurance Reform Act of 1994 expanded the purchase requirement for federally backed mortgages on properties located in an SFHA. Key Factors for NFIP Premium Rates The Federal Emergency Management Agency (FEMA) bases NFIP premium rates on a property's flood risk and other factors. A Flood Insurance Rate Map (FIRM) is the official map of a community on which FEMA has delineated both the risk premium zones applicable to the participating community and SFHAs. FEMA studies and maps flood risks, assigning flood zone designations from high to low depending on the likelihood of flooding. Properties in SFHAs are at high risk, specifically a 1 percent or greater annual chance of flooding, and are designated as zones A, AE, V, or VE. FEMA also bases premium rates on property and policy characteristics. 
For example, FEMA bases premium rates on occupancy type (single-family or multifamily unit), number of floors, and elevation of the property—that is, the lowest elevation of the building relative to its base flood elevation—if applicable. Base flood elevation refers to the level relative to mean sea level at which there is a 1 percent or greater chance of flooding in a given year. Additionally, FEMA uses policy characteristics, such as building and content coverage amounts and policy deductible amounts, in setting premium rates. NFIP has two basic categories of premium rates: those intended to reflect the full risk of flooding to the group of properties within a rate class (full-risk rates) and those that are not intended to reflect full risk (subsidized rates). Structures charged full-risk rates are mostly buildings constructed after a community's FIRM was published and are referred to as post-FIRM. These structures have been built to flood-resistant building codes or have had their flood risks mitigated and generally are at or above base flood elevation. Structures with subsidized rates are mostly buildings constructed before a community joined NFIP and are generally referred to as pre-FIRM because they were built before the potential for flood damages was known and identified on the community's FIRM. Unlike full-risk rates, subsidized rates do not take elevation of the property into consideration. Property elevation can be obtained through elevation certificates. Status of Subsidies under the Biggert-Waters Act and HFIAA More recent legislation—the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) and the Homeowner Flood Insurance Affordability Act of 2014 (HFIAA)—affected NFIP's ability to charge subsidized premium rates on certain types of properties and will likely change the number of policies that are subsidized, as well as the size of the subsidy. 
For example, the Biggert-Waters Act prohibited subsidies from being extended for homes sold to new owners after July 6, 2012 (the date of enactment), and removed subsidies if properties lapsed in coverage as a result of the policyholders' deliberate choice. However, HFIAA reinstated premium subsidies for properties that were purchased after July 6, 2012, and properties not insured by NFIP as of July 6, 2012. Because new policyholders may join NFIP and receive subsidized rates, such as owners of pre-FIRM properties that previously were not insured, the number of subsidized policies could increase over time. However, provisions under both acts gradually phase out subsidies by requiring FEMA to increase premiums annually until full-risk rates are reached. The Biggert-Waters Act requires FEMA to increase premiums by 25 percent each year until full-risk rates are reached for certain types of properties, including business properties, residential properties that are not a primary residence, properties that have sustained substantial damage or improvement, and severe repetitive loss properties. HFIAA did not affect the phase-out schedule for those properties, but it contains provisions requiring FEMA to increase premium rates on other subsidized policies, such as those for primary residences purchased after July 6, 2012, and primary residences not insured by NFIP as of the same date, by at least 5 percent but no more than 15 percent annually. Mitigation FEMA supports a variety of flood mitigation activities that are designed to reduce flood risk and thus NFIP's financial exposure. These activities, which are implemented at the state and local levels, include hazard mitigation planning; the adoption and enforcement of floodplain management regulations and building codes; and the use of hazard control structures, such as levees, dams, and floodwalls, or natural protective features such as wetlands and dunes. 
Community-level mitigation funding is available through FEMA via grant programs such as the Flood Mitigation Assistance Program. Through these programs, FEMA provides communities cost-sharing opportunities for mitigation activities. At the individual property level, mitigation options include elevating a building to or above the area’s base flood elevation, relocating the building to an area with less flood risk, or purchasing and demolishing the building and turning the property into green space. Various Options Exist for Targeting Assistance, but Each Involves Challenges That FEMA Would Have to Overcome Although any pre-FIRM property located in an SFHA in a participating community is currently generally eligible for a subsidy, according to some stakeholders we interviewed and our analysis of literature we reviewed, options for targeting assistance to subsidized NFIP policyholders who may experience difficulty paying full-risk rates include means testing based on the income level of policyholders or geographic areas, setting premium caps, and basing assistance on the cost of mitigating the risk of damage to a home. These options are not mutually exclusive and could be combined depending on Congress’s policy priorities for NFIP. However, they all involve trade-offs, and implementing any of them would likely be challenging. Means-Tested Assistance Targets Those with Financial Need but Presents Data-Related Challenges According to some stakeholders we interviewed and our analysis of literature we reviewed, means testing to determine eligibility for NFIP assistance could help directly address affordability concerns by targeting subsidies to those in need. According to a NAS report on NFIP affordability we reviewed, a means-tested program could be designed in various ways, including targeting assistance based on individual policyholders’ financial need or the financial characteristics of a local geographic area. 
Currently, NFIP subsidies are tied to the property, not the property owner, and any pre-FIRM property located in an SFHA in a participating community is eligible for a subsidy. In contrast, a means-tested program would decouple the subsidy from the property and instead attach it to the policyholder or a group of policyholders on the basis of need, as determined by specified financial requirements and eligibility criteria. In our July 2013 report on subsidized properties, we found that this approach would allow the federal government to provide assistance only to those NFIP policyholders deemed eligible, with the rest paying full-risk rates. Means-tested programs that consider individuals' financial need are not new to the federal government, and some stakeholders we interviewed suggested that a means-based assistance program for NFIP could be designed similarly to other existing programs. Over the years, Congress has established a number of programs to provide cash and noncash assistance based on the financial need of individuals and families. For example, to be eligible for certain federal housing programs, individual households must meet specific income limits. These limits reflect the financial characteristics of a local area because they are expressed as a percentage of the area median income (AMI) for the county or metropolitan area in which the household is located, and the limits range from 30 percent through 140 percent of AMI. For example, to be eligible for homeowner rehabilitation and homebuyer assistance under HUD's HOME Investment Partnership Program, households must have incomes at or below 80 percent of AMI. Similarly, under the Federal Home Loan Bank System's Community Investment Program, the income of a qualifying mortgage borrower may not exceed 115 percent of AMI. Some stakeholders we interviewed suggested that similar AMI limits could be used to determine eligibility for NFIP because these measures reflect local characteristics. 
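The AMI-based thresholds described above reduce, mechanically, to a comparison of household income against a percentage of the local area median income. The sketch below illustrates that comparison only; the dollar figures and the function itself are hypothetical examples, not drawn from any program's actual rules.

```python
def meets_income_limit(household_income, area_median_income, limit_pct):
    """Illustrative AMI test: True if household income is at or below
    limit_pct percent of the area median income (AMI)."""
    return household_income <= area_median_income * (limit_pct / 100)

# Hypothetical county with an AMI of $60,000:
# an 80-percent-of-AMI limit (the HOME program threshold cited above)
# allows incomes up to $48,000; a 115-percent limit (the Community
# Investment Program threshold) allows incomes up to $69,000.
print(meets_income_limit(45_000, 60_000, 80))   # True  (45,000 <= 48,000)
print(meets_income_limit(70_000, 60_000, 115))  # False (70,000 > 69,000)
```

A real means test would be more involved, requiring a definition of countable income, adjustments for household size, and verification of reported amounts, but the threshold comparison at its core is this simple.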
An NFIP assistance program based on individuals’ or households’ income would require a similar threshold to be set. In order for FEMA to implement a means-tested option that considers individual policyholders’ financial need, it would need income information at the individual or household level for policyholders who receive a subsidized rate under the current NFIP structure. Because the current NFIP structure attaches the assistance to the property rather than the policyholder, FEMA does not collect income information for policyholders who receive subsidies. As a result, a system to collect this information would need to be designed and implemented. We identified two primary ways FEMA could obtain income data, but gathering such information could be challenging. According to some stakeholders we interviewed, IRS could provide FEMA with income data it collects from tax filers. For example, some stakeholders said that a partnership between FEMA and IRS could be established, similar to the partnership IRS and the Department of Education have for the Free Application for Federal Student Aid (FAFSA) form. The Department of Education began coordinating with IRS in 2010 to provide an option for tax filers to prepopulate the FAFSA using an automatic data transfer from their tax returns. However, restrictions set forth in the Internal Revenue Code prohibit the disclosure of taxpayer information to other federal agencies without a statutorily specified purpose, and new processes would need to be established if taxpayer data were to be used. Under section 6103 of the Internal Revenue Code, federal tax information must be kept confidential and may not be disclosed, except as otherwise specifically authorized. In December 2011, we developed a guide that Congress could use for screening and assessing proposals to disclose confidential tax information to specific parties for specific purposes. 
Specifically, the guide consists of key questions that can help in screening a proposal for basic facts and identifying policy factors to consider. Further, according to IRS officials, certain processes would need to be developed to provide federal tax information to another agency, such as FEMA, including entering into required agreements, such as data-sharing agreements. Moreover, IRS officials told us that FEMA would need to develop a system to accept and safeguard the information and to provide oversight of the assistance program, and IRS would need to make modifications to its own information technology systems in order to interface with the agency, which they described as a significant effort. If this approach were used, information on the cost of making these changes to FEMA’s and IRS’s information technology systems would need to be balanced against the costs of the existing subsidy approach. Another way to obtain household income information would be to collect it from individual policyholders, but doing so could be complex and challenging. First, a definition of income for the program would need to be determined (e.g., what sources of income would be considered when determining eligibility), as well as whether and which exclusions and deductions would be allowed. Second, FEMA would then need to develop an infrastructure and new processes to collect the information, which would likely increase the cost of administering the program. In addition, FEMA would need to determine how it would verify the information. For example, HUD’s Housing Choice Voucher (Voucher) program, which provides rental assistance to participating low-income households, is administered by almost 2,300 local public housing authorities (program administrators).
These program administrators must obtain and verify comprehensive information on tenants’ household composition, level and sources of income, assets, public assistance, and some types of expenses (e.g., medical and child care expenses) to determine their household adjusted gross incomes, their eligibility for income exclusions and deductions, and their rental payments. We have previously found that complex processes for determining income can lead to compliance issues. For example, in a February 2005 report on rental subsidy improper payments in HUD’s rental programs, we found that HUD’s complex policies for determining rent subsidies have led to improper payments. Similarly, in an August 2013 report on farm and conservation programs, we found that complex income determination and verification processes may have led to improper payments to participants whose incomes exceed statutory limits. In addition to income, a few stakeholders we spoke with said that wealth, such as the value of the insured property, could be considered when determining eligibility based on individuals’ financial need. For example, one stakeholder we interviewed said that an assistance program for NFIP could be designed using a two-step process that considers income and other factors as a proxy for wealth, such as property value. Under this process, according to the stakeholder, a policyholder’s eligibility would first be assessed using a means-tested approach, and then property value would be evaluated to help ensure that only those with modest income and wealth receive the assistance. However, other stakeholders we interviewed said that property values may not be an adequate measure to determine a policyholder’s ability to pay the premium. For example, some stakeholders said that the value of a modest home could be high because it is located in an area with a high land value.
One stakeholder we interviewed said that a low-income policyholder could have purchased a home at a modest price, but over the years the value of the home could have significantly increased. The stakeholder further suggested that if the policyholder was lower-income and did not have any other assets besides the home, it would be appropriate to exclude the value of the home when determining eligibility. An alternative to using individuals’ income to determine financial need would be to determine eligibility based on the income characteristics of a specific geographic area. For example, a NAS report on the affordability of NFIP suggested that all homeowners in a geographic area, such as a community, could be eligible for assistance if, for instance, the median income of the area was “sufficiently low.” The federal government has established a similar approach for the provision of school lunches. For example, the Healthy, Hunger-Free Kids Act of 2010 includes a community eligibility provision that allows school districts with high poverty rates to provide free breakfast and lunch to all students, regardless of their household income. This provision eliminates the burden of collecting household applications to determine eligibility for school meals, relying instead on information from other means-tested programs such as the Supplemental Nutrition Assistance Program. The NAS affordability report also notes that determining assistance at the community level would help to protect the vitality of an eligible community with a high concentration of currently subsidized policyholders because if the subsidies were not available, the resulting higher flood insurance premiums would likely depress the value of properties. 
For example, according to information from the National Association of Realtors, the Biggert-Waters Act negatively affected the housing market in certain areas where many buyers walked away from purchasing a home because of the high flood insurance premium increases. However, because this option does not consider individuals’ financial need, some policyholders who do not face an affordability issue with their flood premiums, as defined by a potential assistance program, may continue to receive assistance, while policyholders who have affordability issues but do not live in a community eligible for the assistance would no longer receive a subsidy. In addition, similar to determining eligibility using individuals’ financial need, some policyholders who could be eligible for the assistance under this approach could have high-value homes. Under any means-tested approach, FEMA would need to know the full-risk rate for the properties of those policyholders deemed eligible in order to determine how much assistance to provide. However, FEMA does not collect data needed to calculate the full-risk rate of currently subsidized properties, such as elevation data obtained through an elevation certificate. As a result, these data are not currently available, which would be another challenge to implementing and determining the cost of a means-tested approach to providing NFIP assistance.

Other Options for Targeting Assistance May Be Simpler to Implement but Would Target Those with Financial Need Less Directly

Premium Caps

Other approaches to targeting assistance with NFIP premiums could be simpler to implement than means-tested approaches or might help reduce risk, but they would target those with financial need less directly. According to our analysis of a NAS report on NFIP affordability, one of these methods would be to provide assistance to those policyholders whose premium exceeds a certain percentage of the amount of coverage purchased.
Under this option, policyholders could receive assistance if their premium were greater than a certain percentage of the coverage provided by the policy, and the premium would effectively be capped at that percentage. For example, HFIAA states that FEMA should strive to minimize the number of policies with annual premiums that exceed 1 percent of the total coverage provided by the policy. Using this option would help ensure that the premiums do not go above a certain amount—for example, 1 percent of coverage—which could help lower the premiums of eligible policyholders who live in high-risk areas. While capping premiums could be simpler to implement than some other options, it would likely involve trade-offs. For example, capping premiums does not consider policyholders’ resources and certain expenses (e.g., household income, assets, and expenditures for housing, food, medical care, or other goods and services) and therefore does not take into account their financial need. As a result, similar to the current subsidy method, this option could provide subsidies to some individuals who may not have a financial need. In addition, this option may discourage mitigation efforts because premiums would not reflect the actual flood risk of a property. As with the means-tested options, an appropriate threshold for the cap would need to be established if premium capping were implemented. Further, FEMA would need to know the full-risk premium rate of a property to determine whether it is above or below the defined cap. As previously discussed, FEMA does not collect the necessary elevation data needed to calculate the full-risk rate of properties subsidized under the current structure, and so these data are not currently available. As discussed later in this report, we have previously recommended that FEMA collect these data.
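The premium cap calculation can be sketched as follows. The values are invented for illustration; only the 1 percent default reflects HFIAA’s stated goal, and an actual program would have to specify many details this sketch ignores.

```python
def capped_premium(full_risk_premium, total_coverage, cap_rate=0.01):
    """Hypothetical premium cap: the policyholder pays no more than cap_rate
    times total coverage; the excess over the cap would be the subsidy."""
    cap = cap_rate * total_coverage
    paid = min(full_risk_premium, cap)
    return paid, full_risk_premium - paid

# Invented policy: $250,000 of total coverage with a $3,500 full-risk premium.
# A 1 percent cap limits the premium paid to $2,500, implying a $1,000 subsidy.
paid, subsidy = capped_premium(3_500, 250_000)
```

As the text notes, this calculation requires the full-risk premium as an input, which FEMA cannot currently determine for subsidized properties without elevation data.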
Cost of Mitigating Risk to a Property

According to some stakeholders we interviewed, our prior work, and our analysis of some of the literature we reviewed, another option to target NFIP policyholders would be to provide assistance based on the cost of mitigating flood risk, where policyholders with mitigation costs above a certain level could receive assistance to help mitigate the risk of damage to the property. This option would help policyholders finance mitigation of flood risk to their homes—whether through elevation, relocation, or demolition—which could reduce risk in ways that would likely be reflected in a lower insurance premium. In a November 2008 report on options for addressing the financial impact of subsidized premium rates, we found that mitigation efforts could be used to help reduce or eliminate the long-term risk of flood damage, especially if FEMA targeted the properties that were most costly to the program. We concluded that increasing mitigation efforts could have a number of advantages, including that it could produce savings for policyholders and for federal taxpayers through reduced flood insurance losses and federal disaster assistance, increase the number of property owners paying full-risk rates, and build on FEMA’s existing mitigation programs. However, we also identified several disadvantages associated with this option, including the following:
- Mitigating flood risk to a large number of properties could take a number of years to complete under the current mitigation process, which could require premium subsidies to also be offered.
- Increasing mitigation efforts would likely be costly and require increased funding, and even if this funding were made available, property owners could still be required to pay a portion of the mitigation expenses.
- Buyouts and relocations, two other types of mitigation, would likely be more costly in certain areas of the country, and in some cases the cost for mitigating the structures’ flood risk might be prohibitive.
- Certain types of mitigation, such as relocation or demolition, might be met with resistance by communities that rely on those properties for tax revenues, such as coastal communities with significant development in areas prone to flooding.

Further, not all properties can be modified to mitigate flood risk. For example, according to a 2013 RAND report, some mitigation activities that have been used in other areas of the country would pose challenges in New York City because of the particular characteristics of the city’s building stock. An initial analysis by the New York City Mayor’s Office found that 39 percent of buildings (approximately 26,300) in the high-risk zones of the city’s new floodplain would be difficult to elevate because they are on narrow lots or are attached or semiattached buildings. To help address this challenge, HFIAA requires that FEMA establish guidelines for alternative methods of mitigation (other than building elevation) to reduce potential flood damages to residential buildings that cannot be elevated due to their structural characteristics. As a result, in September 2015, FEMA issued guidance that describes alternative mitigation measures intended for a variety of housing types that cannot feasibly be elevated. According to the guidance, there are a number of alternative methods of mitigation that may result in flood insurance premium reductions, such as filling a basement located below the base flood elevation to ground level, abandoning or elevating the lowest floor of certain residential buildings, and installing openings in foundation and enclosure walls located below the base flood elevation that allow automatic entry and exit of floodwaters.
Similar to the other options previously discussed, implementing mitigation as an option for targeting assistance would also require elevation data that are currently unavailable because these data would be needed to determine the cost of mitigating the risk of damage to a property. Once the mitigation cost was determined, FEMA could compare this amount to the established threshold for mitigation costs to determine eligibility. For all of the options we have discussed, including the means-tested options, administering an assistance program could add to FEMA’s existing management challenges. In our June 2011 report on the administration of NFIP, we found that FEMA faces management challenges in areas that affect NFIP, and we made 10 recommendations to, among other things, improve the effectiveness of FEMA’s planning and oversight efforts for NFIP and increase the usefulness and reliability of NFIP’s flood insurance policy and claims processing system—5 of which FEMA has implemented. Further, FEMA continues to work on implementing required changes under the Biggert-Waters Act, as amended by HFIAA. In a February 2015 report on the status of FEMA’s implementation of the Biggert-Waters Act, as amended, we found that FEMA faces a number of challenges in implementing the new requirements, including resource issues, the complexity of the legislation, and the need to balance NFIP’s financial solvency and affordability goals. As a result, FEMA would likely face challenges in designing and implementing any new assistance program.

Many Policyholders Could Be Eligible for Assistance under Various Approaches, but FEMA Lacks Data to Estimate Costs

Our analysis of available data suggests that, under several of the options discussed in the previous section, many subsidized policyholders would potentially be eligible for assistance with their NFIP premiums.
However, estimating the cost of providing assistance under various targeting options with precision is difficult because FEMA lacks the elevation data needed to calculate full-risk rates for currently subsidized properties. Using the limited data that are available, we estimated that the cost could vary widely, depending on various factors such as which option and threshold are used.

Available Data Suggest Many Subsidized Policyholders Could Be Eligible for Assistance Using Various Targeting Options and Thresholds

Our analysis of available FEMA data suggests that many subsidized policyholders would potentially be eligible for assistance under three of the options previously discussed: (1) means testing based on individual policyholders’ financial need, (2) means testing based on income characteristics of a local geographic area, and (3) capping premiums based on a percentage of coverage.

Estimation of Eligible Policyholders Based on Individuals’ Financial Need

Our analysis of ACS data showed that, depending on the income threshold used, 47 percent to 74 percent of subsidized policyholders (approximately 285,000 to 451,000) would likely be eligible to receive assistance under a means-tested approach that considers individuals’ financial need. As described previously, to implement this approach, individual or household-level income information is needed; however, these data were publicly unavailable. Instead, using household homeowner data from the 2009 through 2013 5-year ACS at the county level, we estimated that roughly 47 percent of subsidized policyholders have incomes below 80 percent of AMI and, therefore, would likely be eligible to receive assistance if this approach and threshold were implemented. This estimate is based on the assumption that the distribution of household income levels among subsidized policyholders in a given county as of September 30, 2013, was similar to the distribution of household income among all homeowners in the county.
We recognize this is a potential limitation of the estimates, and the actual numbers of policyholders likely to receive assistance under this approach would vary depending on how similar the income distribution of subsidized policyholders is to the income distribution of homeowners overall in a county. Further, as figure 1 indicates, adjusting the threshold would affect the estimated percentage of policyholders that would likely be eligible for the assistance. For example, if the eligibility threshold were increased to 140 percent of AMI, we estimated that the percentage of policyholders who would likely be eligible to receive assistance would increase to about 74 percent. As of September 30, 2013, the actual number of subsidized policies was about 609,000. The states with the highest numbers of subsidized policies as of that date were Florida (102,193), Louisiana (60,692), California (50,018), New Jersey (41,259), and Texas (40,805) (see fig. 2). Using household homeowner data from the 2009 through 2013 5-year ACS at the county level, Florida would still have the greatest number of policyholders likely to be eligible to receive assistance if the income limit for this approach were set at 80 percent of AMI, with nearly 48,000 policyholders likely to be eligible, followed by Louisiana and California (see fig. 3). Three of the top five states with the most subsidized policies—Florida, Louisiana, and New Jersey—would also be states with the greatest number of policyholders likely to be eligible to receive assistance if the income threshold were set at 115 percent of AMI (see fig. 4). If the threshold were increased to 140 percent of AMI, Florida, Louisiana, and California would have the greatest number of policyholders likely to be eligible to receive assistance (see fig. 5). 
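The estimation approach described above reduces to a simple weighted sum: each county’s subsidized policy count is multiplied by the share of that county’s homeowners with incomes below the threshold, under the stated assumption that the two income distributions are similar. The county figures below are invented solely to show the arithmetic; they are not actual ACS or NFIP data.

```python
def estimate_eligible(counties):
    """counties: iterable of (subsidized_policy_count, share_below_threshold)
    pairs, one per county. Assumes subsidized policyholders share the county
    homeowner income distribution, as in the estimate described above."""
    return sum(count * share for count, share in counties)

# Invented data for three counties: subsidized policy counts and the estimated
# share of homeowners with incomes below 80 percent of AMI.
sample = [(10_000, 0.50), (5_000, 0.30), (2_000, 0.60)]
estimated = estimate_eligible(sample)  # 5,000 + 1,500 + 1,200 = 7,700 eligible
```

Changing the threshold changes each county’s share, which is why the estimated eligible population in the report moves from 47 percent to 74 percent as the limit rises from 80 percent to 140 percent of AMI.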
Estimation of Eligible Policyholders Based on the Income Characteristics of a Local Geographic Area

Our analysis of ACS data showed that, depending on the income threshold used, 23 percent to 87 percent of subsidized policyholders (approximately 139,000 to 527,000) would likely be eligible to receive assistance if a means-tested approach that considers the income characteristics of a local geographic area were implemented. Using ACS data at the census-tract level, we estimated that as of September 2013, about 23 percent of subsidized policyholders lived in a census tract that had an estimated median household income below 80 percent of AMI and, therefore, would likely be eligible to receive assistance under this approach. Unlike the previous approach, which is based on individual or household income, this estimate is based on the median income characteristics of an entire local geographic area. As such, all policyholders in a particular local geographic area, such as a census tract, would be eligible for assistance if the median household income of the area were below a selected threshold. As figure 6 indicates, similar to the other means-tested approach, adjusting the threshold would also affect the estimated percentage of policyholders who could be eligible for the assistance. For example, if the eligibility threshold were increased to 140 percent of AMI, we estimated that the percentage of policyholders who would likely be eligible to receive assistance would increase to about 87 percent. Because this approach targets areas with certain geographic characteristics, it could also include policyholders with relatively high incomes or high property values. For example, in one census tract that would potentially be eligible for assistance, where subsidized policyholders comprised approximately 50 percent of the homeowners in the community, an estimated 27 percent of homeowners had an income that exceeded $150,000.
In another tract that would potentially be eligible for assistance, where subsidized policyholders comprised about 36 percent of homeowners in the community, the median home value exceeded $1 million. However, we also found that some low-income subsidized policyholders resided in census tracts not eligible for assistance under this approach, including census tracts in Puerto Rico.

Estimation of Eligible Policyholders under the Capped Premium Option

We were unable to estimate the number of subsidized policyholders who would likely be eligible for assistance under the capped premium option because implementing it would require information on the full-risk premium rates of currently subsidized policies, which as previously discussed, FEMA does not calculate. However, our analysis of available data on the subsidized premiums paid on these policies, and their total coverage (building and content) amounts, as of September 30, 2013, showed that, as table 1 indicates, about 23 percent of policyholders who paid subsidized premiums were paying above 1 percent of their total coverage amounts. Our analysis also showed that almost none of the subsidized policyholders were paying premiums that were more than 2 percent of their total coverage amounts.

FEMA Lacks Information Needed to Estimate the Cost of Assistance

As previously discussed, FEMA does not collect certain flood risk information that would be needed to calculate the full-risk rate for most subsidized policies; as a result, estimating the cost of providing subsidy assistance under various targeting options is difficult. Elevation certificates are needed to determine the full-risk rate for a property. However, because FEMA does not use this information in rating subsidized policies, it does not currently require elevation certificates for subsidized policyholders, although policyholders may obtain an elevation certificate voluntarily.
As a result, FEMA cannot accurately determine the actual forgone premiums for subsidized policies—the difference between subsidized premiums paid and the premiums that would be required to cover the expected losses associated with subsidized policies. Likewise, without full-risk rate premiums for these properties, it is difficult to estimate the actual subsidy cost of implementing various options that could be used to target assistance for NFIP. Because it is not possible to calculate the actual amount of assistance each policyholder could be eligible for, estimating the aggregate cost of providing assistance under the various targeting options is not possible with any specificity. Although we were unable to estimate the subsidy cost of implementing these targeting options with any precision, we have previously estimated forgone premiums for subsidized policies using various statements published by FEMA that describe the size of the subsidies and expenses. In our December 2014 report on forgone premiums for subsidized policies, using available data, we estimated that the cumulative forgone premiums net of expenses ranged roughly from $8 billion to $17 billion over the period from 2002 through 2013. In particular, we estimated that the forgone premiums net of expenses for all policies subsidized in 2013 ranged roughly from $575 million to $1.8 billion. While the number of policyholders who could be eligible could vary widely depending on the selected targeting option and threshold, only a subset of all subsidized policyholders would likely be eligible to receive assistance. Using the means-tested targeting options and thresholds mentioned earlier in this report, the cost could have ranged from $40 million to $1.7 billion in 2013.
For example, the estimated cost for the approach that considers individuals’ financial need could have ranged from $161 million to $1.7 billion in 2013, and the estimated cost for the approach that considers the income characteristics of the local geographic area could have ranged from $40 million to $1.7 billion. We could not calculate a potential cost under the capped premium method because, as noted earlier, determining eligibility for assistance would require information on full-risk rates for currently subsidized properties, which FEMA does not collect. In our July 2013 report on subsidized properties, we found that NFIP lacked the information needed to determine the full-risk rates for subsidized properties. As a result, we recommended that FEMA develop and implement a plan to obtain information needed to determine full-risk rates for subsidized properties. FEMA generally agreed with the recommendation and has taken limited action to implement it. For example, FEMA noted that the agency would evaluate the appropriate approach for obtaining or requiring the submittal of this information. FEMA also said it would explore technological advancements and engage with industry to determine the availability of technology, building information data, readily available elevation data, and current flood hazard data that could be used to implement the recommendation. However, in a subsequent meeting, FEMA officials also said that the agency faced a cost challenge with respect to elevation certificates and that obtaining these certificates could take considerable time and cost several hundred million dollars. They noted that requiring policyholders to incur the cost of obtaining elevation certificates would not be consistent with NFIP’s policy objective to promote affordability. The officials added that the agency encourages subsidized policyholders who seek to ensure the appropriateness of their NFIP rates to voluntarily submit elevation documentation. 
We acknowledge the difficulty and expense involved in obtaining precise information about flood risk, but we maintain that implementing this recommendation is important. Information about flood risk is needed to correctly charge full-risk rates for an increasing number of policies as FEMA phases out subsidies. Further, such information could help FEMA inform policyholders about their flood risk, as required by HFIAA.

Several Mechanisms Could Be Used to Deliver NFIP Assistance, but Each Involves Public Policy Trade-offs

Based on our analysis of studies, interviews with stakeholders, and prior GAO reviews, FEMA could potentially use a variety of mechanisms to deliver assistance to NFIP policyholders who could be deemed eligible based on the various targeting options previously discussed. These mechanisms include:
- discounted rates, through which the government charges recipients less than the full cost of the service received;
- vouchers, through which the government would disburse funds that allow recipients to pay for a restricted set of goods or services;
- tax expenditures, through which the government would reduce recipients’ tax liability based on eligible expenses; and
- grants and loans for mitigation, through which the government would disburse funds to recipients under a contract.

Each mechanism involves trade-offs among affordability and four policy goals for federal involvement in natural catastrophe insurance. We identified these four policy goals, which have not changed, in our 2007 report on the federal role in natural catastrophe insurance: (1) charging premium rates that fully reflect actual risks; (2) encouraging private markets to provide natural catastrophe insurance; (3) encouraging broad participation in natural catastrophe insurance programs; and (4) limiting costs to taxpayers before and after a disaster.
For the fourth goal, we focused only on administrative costs because total program costs would be affected by undetermined factors such as eligibility criteria and caps on assistance. As summarized in figure 8, we determined that each mechanism fully supports at least two of the four natural catastrophe insurance policy goals, but none of the mechanisms fully support all four of these policy goals.

All Mechanisms Except Discounted Rates Could Reflect Actual Risk and Encourage Private Market Participation

FEMA’s current discounted rate mechanism does not help FEMA charge premiums that reflect actual risks or encourage the private market to provide flood insurance. The other delivery mechanisms we identified—vouchers, tax expenditures, and grants and loans for mitigation—would likely help support these goals.

Discounted Rates

NFIP’s current discounted rate mechanism does not support the policy goal of charging premiums that reflect actual risk, according to our prior reports, a study we reviewed, and most stakeholders we interviewed. As we have previously found, NFIP’s discounted rates do not fully reflect actual risks because the premiums are not intended to contribute sufficient revenues to cover potential losses. In addition, the discounted rate mechanism hides actual risk because it builds a subsidy within the rate structure, meaning that policyholders who have discounted rates do not know their full-risk rate or the amount of subsidy they receive. We have previously found that discounted rates for NFIP, as well as for the federal crop insurance program, do not provide all policyholders with accurate price signals about their chances of incurring losses. As a result, some policyholders may perceive their risk of loss to be lower than it really is and may have less financial incentive to mitigate risk of damage to a property or to decide not to purchase a property at higher risk of flooding.
In addition, building a subsidy into the rate structure means that the discounted rate mechanism makes it difficult to measure nonadministrative program costs (i.e., subsidy costs). We and the Congressional Budget Office have previously found that FEMA’s discounted rate mechanism disguises actual NFIP costs because the costs were evident only in FEMA’s need to borrow from Treasury. Further, because the discounted rate mechanism builds assistance into the rate structure, it does not encourage the private sector to provide insurance, according to our prior work and a stakeholder we interviewed. We have previously found that discounted rates discourage private participation in the flood insurance market because private insurers cannot compete with NFIP’s highly discounted (subsidized) rates in some geographic areas. For example, one state insurance regulator we interviewed during this review indicated that HFIAA’s reinstatement of discounted rates eliminated by the Biggert-Waters Act inhibited the participation of private insurers who had begun to take a more active role in the state. However, a discounted rate mechanism used to deliver assistance to policyholders who are deemed eligible could be modified to better address these limitations. Specifically, a full-risk rate could first be determined, and then the discount could be applied outside of the rate structure. Such an approach would better communicate the actual cost of the risk to policyholders and would make subsidy costs more transparent. For example, one stakeholder we interviewed said that billing statements could be modified to show policyholders both their full-risk rate and the assistance they receive. Further, the amount of the subsidy could be explicitly funded through an appropriation. 
Vouchers, Tax Expenditures, and Grants and Loans

The other potential NFIP assistance delivery mechanisms we identified—vouchers, tax expenditures, and grants and loans for mitigation—would likely help promote premiums that reflect the actual risk of losses because they first require determination of a full-risk premium and then provide assistance outside of the rate structure. On the basis of our literature review and interviews with stakeholders, these other mechanisms would deliver assistance in the following ways:

With vouchers, policyholders would be charged a full-risk rate premium but would receive a subsidy through a voucher to cover the difference between what they are deemed able to pay and the full-risk premium.

With tax expenditures, policyholders would be charged a full-risk rate premium before having their tax liability reduced when they file their taxes.

With grants and loans, policyholders would receive grants or loans to help mitigate their homes and then would be charged a premium rate that reflects their lower risk.

These other potential NFIP assistance delivery mechanisms could help make existing NFIP subsidy costs more transparent because they separate assistance from premiums. We and the Congressional Budget Office have found that separating assistance from premiums could help the government and taxpayers understand actual program costs, in part because doing so would make NFIP subsidy costs explicit by requiring Congress to appropriate funds for them. Vouchers and grants and loans for mitigation meet these goals. However, the costs associated with tax expenditures may be somewhat less clear than those associated with vouchers and grants and loans for mitigation.
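To make the distinction concrete, the arithmetic common to these mechanisms (charge a full-risk premium first, then deliver assistance separately) can be sketched as follows. This is an illustration only: the dollar figures and function names are hypothetical and do not reflect FEMA's actual rating methodology.

```python
# Illustrative sketch of assistance delivered outside the rate structure.
# All figures and function names are hypothetical, not FEMA methodology.

def voucher_amount(full_risk_premium: float, affordable_amount: float) -> float:
    """A voucher covers the gap between the full-risk premium and what the
    policyholder is deemed able to pay (never negative)."""
    return max(full_risk_premium - affordable_amount, 0.0)

def net_cost_after_credit(full_risk_premium: float, credit: float) -> float:
    """With a tax expenditure, the full premium is paid up front; a later
    credit reduces tax liability, lowering the net cost after filing."""
    return full_risk_premium - min(credit, full_risk_premium)

# Hypothetical policyholder: $2,500 full-risk premium, deemed able to pay $1,000.
print(voucher_amount(2500.0, 1000.0))         # voucher of 1500.0
print(net_cost_after_credit(2500.0, 1500.0))  # net cost of 1000.0
```

In both cases the policyholder sees the full-risk price, which is what makes the subsidy amount explicit and measurable, in contrast to the discounted rate mechanism.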
Tax expenditures would help make subsidy costs somewhat more transparent because, like vouchers and grants and loans for mitigation, they separate assistance from premiums. However, we and the Congressional Budget Office have previously found that, like discounted rates, tax expenditures can mask subsidy costs because they are not readily identifiable in the budget and are generally not subject to systematic performance measurement. In addition, vouchers, tax expenditures, and grants and loans for mitigation could help encourage the private sector to provide flood insurance, based on our analysis of prior GAO reports, studies we reviewed, and a stakeholder we interviewed. This is generally because these mechanisms would provide assistance outside the rate structure, enabling NFIP to charge rates that more fully reflect risk and are much closer to the rates private insurers would need to charge, which we have previously reported is a key private sector concern. In addition, vouchers and tax expenditures could potentially be designed in a way that would incentivize homeowners to consider private insurance: vouchers could be used with either NFIP or private insurance, and tax expenditures could be based on either NFIP or private insurance expenses. Further, grants and loans for mitigation could increase the number of homes at lower risk of flood damage and create a larger, more diverse risk pool, which would help private insurers better manage their risk exposure—another issue we have previously identified as a key private sector concern about offering flood insurance.

Mechanisms Vary in Whether They Encourage Broad Participation and Limit Administrative Costs

FEMA's current discounted rate mechanism helps encourage broad NFIP participation and limits administrative costs. The extent to which the other delivery mechanisms we identified—vouchers, tax expenditures, and grants and loans for mitigation—could encourage broad participation is unclear.
In addition, their effect on administrative costs varies.

Discounted Rates

The discounted rate mechanism encourages broad participation. As we have previously reported, discounted rates have helped NFIP achieve a program goal of broad participation by providing assistance that lowers the cost of insurance. Further, discounted rates may encourage more participation than other potential delivery mechanisms, such as tax expenditures, because FEMA applies the discount to reduce premiums immediately and policyholders do not have to wait to receive their assistance. In addition, continuing to use the discounted rate mechanism would likely help NFIP limit up-front administrative costs: because the mechanism is already in place and used to issue subsidies, NFIP can avoid some costs that would be associated with creating a new delivery mechanism. Also, some stakeholders we interviewed said that discounted rates may be the most efficient delivery mechanism option for ongoing program administration, citing reasons such as FEMA's ability to implement the mechanism without coordinating with other federal agencies.

Vouchers

Vouchers may have some characteristics that support the policy goal of broad NFIP participation and others that do not, according to examples cited in our previous work, studies we reviewed, and stakeholders we interviewed. For example, vouchers could help encourage broad participation in NFIP because they would immediately reduce premium costs and are unrelated to recipients' tax-filing status. However, some stakeholders we interviewed noted other voucher characteristics that may discourage participation in NFIP. For example, FEMA officials we interviewed said that policyholders may perceive vouchers to carry a stigma, and another stakeholder expressed concern that a potentially burdensome application process could discourage eligible policyholders from applying.
In addition, vouchers would likely increase NFIP's administrative costs to some extent, according to examples cited in our previous work, studies we reviewed, and stakeholders we interviewed. Because FEMA does not currently have an NFIP voucher program, it would need to dedicate additional resources to creating and administering one. For example, HUD administers such a voucher program, and a 2015 HUD study of costs incurred by the local public housing authorities that administer the HUD Voucher program found that efficient public housing authorities spent an average of $70 per month to administer a voucher, with frontline labor representing the largest share of those costs. To help FEMA limit such administrative costs, two studies we reviewed said that flood insurance vouchers could be administered through an existing voucher program, such as the HUD Voucher program. However, according to the 2015 HUD study and HUD officials we interviewed, the local public housing authorities that administer the HUD Voucher program do not receive adequate funding to efficiently and effectively administer the existing program. HUD officials we interviewed also said that implementing a new program would entail additional, potentially significant, costs. As a result, establishing an assistance program for NFIP under HUD's, or another program's, infrastructure would likely require additional resources for the agencies responsible for implementing the program, and any additional costs would have to be weighed against those of the existing program. According to HUD officials, concerns beyond costs—such as housing authorities' lack of familiarity with FEMA and flood insurance—would also have to be addressed before determining the suitability of using an existing HUD program to deliver NFIP assistance.
Tax Expenditures

Tax expenditures may have some characteristics that support the policy goal of broad NFIP participation and others that do not, according to examples cited in our prior reports, many studies we reviewed, and some stakeholders we interviewed. For example, well-designed tax expenditures can be targeted to reach certain populations and provide incentives for taxpayers to engage in particular activities; do not have the stigma that some individuals may associate with government spending programs; and may be less burdensome in some ways than applying for assistance through other spending programs. However, our prior reports, many studies we reviewed, and some stakeholders we interviewed found that other characteristics of tax expenditures may not encourage broad participation because eligible policyholders may not be aware of the tax expenditure or their eligibility; would face the burden of navigating the complex tax system, which may result in limited take-up or pressure to hire professionals for help; would generally need to pay the full premium before receiving the tax expenditure, which could create cash flow challenges; and may have lower incomes or may not be required to pay taxes, which means they may not receive as great a benefit from nonrefundable tax credits, tax deductions, and tax-preferred savings vehicles. Similarly, tax expenditures may have some characteristics that help NFIP limit administrative costs and others that do not, according to examples cited in our prior reports, many studies we reviewed, and some stakeholders we interviewed. We and others have previously stated that, in concept, using tax expenditures could help limit administrative costs to taxpayers for certain activities because much of the administrative infrastructure already exists for the government to collect and remit money to tax filers through the tax system, as compared with setting up separate spending programs.
Additionally, one study we reviewed said that, in general, direct IRS access to policyholder income information would help limit administrative costs for the federal government. However, implementing a new tax expenditure would still create some additional burden for IRS in a time of tight budgetary resources. We previously found that IRS has scaled back activities and staff in response to declining appropriations, which could potentially reduce program effectiveness or increase risk to IRS and the federal government. We also previously found that administering complex tax rules can strain IRS’s ability to serve taxpayers because of the resources needed to modify related documents and procedures, develop guidance, clarify instructions, and address noncompliance. Further, there may be some administrative costs and inefficiencies associated with interagency collaboration between FEMA and IRS, according to two stakeholders we interviewed. For example, regarding potential NFIP tax expenditures, Treasury Office of Tax Policy officials said that IRS would have to dedicate resources to administering the program and coordinating with FEMA to set up a data-sharing agreement and verify nonincome-related information submitted by policyholders, such as premiums paid. We have previously found that the complex nature of some tax expenditures, such as the mortgage interest and other real estate deductions, may result in high error rates that create costs for taxpayers due to forgone revenues and IRS resources spent to enforce compliance. We have produced a guide for evaluating tax expenditure performance that could be used if Congress were to decide that tax expenditures are the most appropriate way to deliver assistance to eligible NFIP policyholders. 
The guide discusses various tax expenditure design issues that should be considered before implementing a tax expenditure, including its purpose, how it would relate to other federal programs, consequences for the federal budget, and how it would be evaluated.

Grants and Loans

Grants and loans for mitigation may have some characteristics that support the policy goal of broad NFIP participation and others that may not, according to examples cited in prior GAO reports, a few studies we reviewed, and two stakeholders we interviewed. As mentioned previously, grants and loans could encourage policyholders to mitigate flood risk to their properties by helping them afford the significant up-front costs of mitigation, which may otherwise be a barrier. Because mitigation would likely result in significantly lower premiums, which homeowners could be more willing and able to pay, homeowners might be more likely to participate in NFIP. However, some characteristics of grants and loans may discourage broad participation: potentially complex application processes could discourage eligible policyholders from applying; eligible policyholders may not be aware of their eligibility or of the programs; some potentially eligible policyholders may not be able to meet loan qualification criteria related to repayment, a challenge GAO has previously reported for Small Business Administration disaster assistance loans; and loans may not appeal to some policyholders if rates are too high or policyholders are debt-averse. Regarding the policy goal of limiting administrative costs, mitigation grants and loans would likely pose some additional administrative costs for NFIP. As with vouchers, FEMA would need to dedicate resources to setting up and administering a new grant or loan program, or to expanding an existing program, to provide the assistance to eligible policyholders.
Also, loans pose other administrative costs, such as servicing outstanding loans and collecting on defaulted loans. Finally, the delivery options discussed in this report are not mutually exclusive and could potentially be used in combination to address Congress's priorities for NFIP. For example, according to two studies we reviewed, NFIP could offer assistance to policyholders experiencing affordability issues through a combination of mitigation loans and vouchers. The loans would help policyholders afford mitigation efforts, reducing premiums in the long term. The vouchers would help policyholders cover the costs of repaying the loans and could also cover part of any remaining premium costs that were still unaffordable. If Congress were to consider an assistance program to address affordability issues experienced by NFIP policyholders, a number of policy decisions would need to be made, each involving trade-offs and potentially difficult choices. In particular, decisions would be needed on which policyholders would be eligible to receive the assistance, as well as on other factors to consider when determining eligibility—for example, whether the assistance would be provided only to pre-FIRM principal residences located in high-risk areas. Also, the amount of assistance would have to be determined. For example, the assistance could be less than, equal to, or more than the difference between the subsidized premium rate eligible policyholders would pay under the current NFIP structure and the full-risk premium rate of the property. In addition, a decision would need to be made on the type of assistance (i.e., premium subsidy, mitigation assistance, or both). Further, decisions would have to be made on which delivery mechanism is most appropriate for NFIP, how the assistance would be paid for, and by whom.
Agency Comments

We provided a draft of this report to FEMA within the Department of Homeland Security for its review and comment. We also provided a draft of this report to HUD and Treasury for technical comment. FEMA and HUD provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretaries of Homeland Security, Housing and Urban Development, and the Treasury. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Alicia Puente Cackley at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

Our objectives in this report were to describe (1) options to target assistance to National Flood Insurance Program (NFIP) subsidized policyholders who may experience difficulty paying full-risk rates, (2) the number of currently subsidized policyholders who might be eligible for assistance under certain options and the cost of implementing these options, and (3) potential delivery mechanisms for providing assistance to eligible policyholders. For purposes of this report, we made the following assumptions: Only current NFIP policyholders who pay subsidized premium rates established by the Homeowner Flood Insurance Affordability Act of 2014 (HFIAA) for their primary residences located in high-risk locations known as Special Flood Hazard Areas (SFHA) would be potentially eligible for assistance.
The starting point for premiums, before the provision of any assistance, would be a full-risk premium. The maximum amount of the subsidy provided to policyholders deemed eligible for assistance under the identified eligibility options would be the difference between the full-risk premium and the subsidized premium charged under the current NFIP structure.

Identifying Targeting Options and Delivery Mechanisms

To identify options for targeting assistance to policyholders who may experience difficulty paying full-risk rates and to identify potential mechanisms for delivering that assistance, we reviewed our prior related reports and analyzed relevant laws. We also conducted a literature review using the ProQuest database and Internet searches, using search terms such as "flood insurance," "means test," "ability to pay," and "eligibility" to identify reports that discuss options for targeting federal assistance. We reviewed abstracts of the 53 reports we found and determined 11 to be relevant. Similarly, we conducted a literature review using the ProQuest database with search terms such as "delivery," "assistance," "loan," "voucher," "grant," and "discount" to identify reports that discuss mechanisms used to deliver assistance. We reviewed abstracts of the 70 reports we found and determined 23 to be relevant.
In addition, we interviewed officials from the Federal Emergency Management Agency (FEMA), the Department of Housing and Urban Development (HUD), the Department of the Treasury's (Treasury) Federal Insurance Office (FIO), the Florida Office of Insurance Regulation, the Louisiana Department of Insurance, and the New Jersey Department of Banking and Insurance, as well as representatives from 18 organizations with flood insurance knowledge, to obtain input on (1) options that could be used to target assistance for NFIP; (2) different mechanisms that other federal programs have used to deliver assistance and the extent to which they could be used to deliver assistance in NFIP; and (3) to the extent possible, any benefits and challenges of using these options and delivery mechanisms. To select the organizations to interview, we reviewed lists compiled for prior GAO reports on NFIP, identified organizations that have testified before Congress on the affordability of NFIP premiums, identified organizations through our literature review, and obtained recommendations from those we interviewed. We interviewed officials at the following 18 organizations:

Allstate Insurance Company
American Academy of Actuaries
Association of State Floodplain Managers, Inc.
Center for Economic Justice
Consumer Federation of America
Independent Insurance Agents and Brokers of America
Insurance Information Institute
Joint Center for Housing Studies of Harvard University
National Academy of Sciences
National Association of Insurance Commissioners
National Association of Mutual Insurance Companies
National Association of Realtors
Property Casualty Insurers Association of America
RAND Corporation
Risk Management and Decision Processes Center at the Wharton School of the University of Pennsylvania
SmartSafer.org
USAA General Indemnity Company
Wright National Flood Insurance Company

On the basis of our literature review and interviews, we identified three general options that could potentially be used to target assistance to NFIP policyholders who may experience difficulty paying full-risk rates: means testing based on the income level of policyholders or local geographic areas, setting premium caps based on a percentage of total insurance coverage, and basing assistance on the cost of mitigating the risk of damage to a home. We also identified four types of mechanisms that could potentially be used to deliver assistance: discounted rates, vouchers, tax expenditures, and loans and grants for flood risk mitigation. A generally recognized definition of affordability does not currently exist for flood insurance; as a result, we interviewed representatives from HUD, FIO, the Florida Office of Insurance Regulation, the Louisiana Department of Insurance, the New Jersey Department of Banking and Insurance, the Independent Community Bankers of America, and the Mortgage Bankers Association to determine potential ways affordability could be defined as it relates to flood insurance. Further, to obtain information on the benefits and challenges of obtaining tax data from the Internal Revenue Service (IRS) and the implications certain delivery mechanisms may have for the tax system, we contacted representatives of Treasury's Office of Tax Policy and IRS.
Illustration of the Effects of Certain Targeting Options

To address our second objective, we used NFIP's policy data to identify policies for primary residences that would continue to receive subsidized premium rates as set by HFIAA. We analyzed data from NFIP's policy database as of September 30, 2013. We applied the same algorithm that FEMA used to determine which policies were subsidized before enactment of the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act), and we applied FEMA's interpretation of the provision in the Biggert-Waters Act that eliminated subsidies and the provisions in HFIAA that restored them. We further narrowed our analysis to subsidized policyholders located in SFHAs because the purchase of flood insurance is mandatory for properties in these areas that are secured by mortgages from federally regulated lenders. We assessed the reliability of the policy data by gathering and analyzing available information about how the data were created and maintained, and we performed electronic tests of required data elements. We determined that the data were sufficiently reliable for the purpose of determining the number of subsidized policies and the associated premiums. We did not use more recent data because in December 2014 we identified a number of discrepancies in NFIP's fiscal year 2014 data and determined that they were not sufficiently reliable; we had begun our analysis before FEMA addressed those discrepancies. To determine the overall number of policyholders and how many of them are subsidized, we used data as of May 2015, which we deemed sufficiently reliable for our purposes based on the same approach: gathering and analyzing available information about how the data were created and maintained and performing electronic tests of required data elements.
To estimate the number of currently subsidized policyholders who might be eligible for assistance under certain means-tested options, we attempted to obtain income information for these policyholders. Because FEMA does not collect income information for NFIP policyholders, we attempted to obtain income data from IRS for subsidized policyholders as of September 30, 2013. We were unable to obtain access to IRS tax return data, which, under section 6103 of the Internal Revenue Code, must be kept confidential and may not be disclosed except as specifically authorized by law. We also attempted to obtain household income and other related data—including wealth, household size, and home value—from third-party vendors. To do this, we used prior GAO work on information resellers to identify and conduct market research with selected companies. We spoke with officials at three information resellers and gathered documentation on data modeling, coverage, match rates, and other relevant information to assess the accuracy and reliability of their data. We determined that the data lacked sufficient precision and therefore were not reliable for purposes of estimating income and other homeowner and property characteristics of NFIP subsidized policies. Because we were unable to obtain income information at the individual policyholder level, we used income data from the American Community Survey (ACS), a continuous survey of households conducted by the U.S. Census Bureau. Specifically, we used 5-year data from the 2009 through 2013 ACS for the 50 states, the District of Columbia, and Puerto Rico to estimate the number of subsidized policyholders who would likely receive assistance under the means-tested options we identified.
We analyzed income levels of households and owner-occupied households (table B19013 for household median income and table B25118 for owner-occupied household income distribution) obtained from ACS to provide a rough estimate of the income of subsidized policyholders in SFHAs. To examine the reliability of ACS data, we reviewed testing and documentation for a prior GAO report using much of the same data, including information from interviews with Census Bureau officials and experts. We also examined ACS technical documentation and conducted electronic testing and logic checks. As a result of our testing and reviews of related documentation, we determined that the data were sufficiently reliable for our analyses.

Estimation of Eligible Policyholders Based on Individuals' Financial Need

To develop possible thresholds for estimating the number of policyholders who might be eligible for assistance under the means-tested approaches, we reviewed documentation on the income limits used for various federal housing programs. These thresholds are expressed as a percentage of the area median income (AMI) for the county or metropolitan area in which an individual lives, and they range from 30 percent to 140 percent of AMI. Because HUD defines low-income households as those with income at or below 80 percent of AMI, we used this percentage as the lowest threshold when we conducted our analysis of the effect of implementing the means-tested approaches. We also illustrated the potential effect of using thresholds of 115 percent and 140 percent of AMI, which are used in other government programs. We used ACS data on the distribution of homeowner income at the county level to generate estimates of how many policyholders might be eligible for subsidies using different thresholds of the HUD AMI.
Our estimation of eligible policyholders based on individuals' financial need assumes that the distribution of household income among SFHA policyholders is similar to the distribution of household income among all homeowners in each county. To conduct the analysis, we used county-level data to generate estimates of the number of subsidized policyholders residing in areas with estimated income below the HUD AMI thresholds. Specifically, based on household income distribution, we estimated the proportion of homeowners with incomes above or below cut points based on the income distribution categories in the ACS data. We then applied the estimated proportions above and below the HUD AMI thresholds to the number of policyholders in our data to generate an estimate of the proportion that would be subsidized using a test of individuals' financial need. We used two different approaches to illustrate the sensitivity of our estimates. We first followed ACS technical guidance to generate 95 percent confidence intervals around the proportions of residents in each income category, and we tested these bounds against the HUD AMI thresholds. This method produced relatively narrow confidence bounds that depended heavily on the assumption that the distribution of policyholder and homeowner income was similar at the county level. Given that we do not have information on the accuracy of this assumption, we instead present the results from an alternative sensitivity test that allows for less correspondence between the homeowner and policyholder income distributions. This second sensitivity test illustrates the effect of shifting the income distribution within each county up or down a category, and it provides a better, if still imperfect, sense of the uncertainty inherent in our estimates given that we lack information on individual policyholders' incomes.
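The county-level calculation described above can be sketched as follows. The bracket boundaries, counts, AMI value, and the uniform-within-bracket interpolation are hypothetical illustrations for a single county, not our actual data or the precise ACS estimation procedure.

```python
# Sketch: estimate the share of homeowners below an AMI threshold from
# income-bracket counts, then apply that share to the county's subsidized
# policyholder count. Assumes (as our analysis did) that policyholder and
# homeowner incomes are similarly distributed within the county, and (as a
# simplification) that incomes are uniform within each bracket.

def share_below(brackets, threshold):
    """brackets: list of (upper_bound, homeowner_count), sorted ascending."""
    total = sum(count for _, count in brackets)
    below = 0.0
    lower = 0.0
    for upper, count in brackets:
        if threshold >= upper:
            below += count                      # whole bracket is below
        elif threshold > lower:
            below += count * (threshold - lower) / (upper - lower)
        lower = upper
    return below / total

# Hypothetical county: ACS-style brackets; HUD threshold of 80% of a
# $50,000 AMI, i.e., $40,000; 120 subsidized policies in the county.
brackets = [(25_000, 300), (50_000, 400), (100_000, 200), (200_000, 100)]
share = share_below(brackets, 0.80 * 50_000)
eligible = round(share * 120)
print(round(share, 3), eligible)  # 0.54 65
```

Sensitivity could then be illustrated, as in our analysis, by recomputing after shifting each household up or down one income bracket.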
Despite these tests, we cannot be sure that there are no systematic differences between the income distribution of policyholders and that of homeowners in general at the county level that would not be captured by such testing.

Estimation of Eligible Policyholders Based on the Financial Characteristics of a Local Geographic Area

To estimate the number of eligible policyholders based on the financial characteristics of a local geographic area, we used the same HUD AMI thresholds as in our estimates based on individuals' financial need. However, unlike the analysis for the means-tested approach that considers individuals' financial need, the estimation based on the financial characteristics of a local geographic area relies on the median household income of an entire local geographic area, the census tract. As such, it estimates the number of subsidized policyholders likely eligible for assistance using a "community" threshold test, in which the census tract median household income is compared to the HUD AMI. To conduct the analysis, we used tract-level data to test the estimated median income of each census tract against the relevant HUD AMI threshold, and we assumed that all policyholders living in tracts with estimated median incomes below the threshold would receive subsidies. To illustrate uncertainty in our estimates, we also tested the upper and lower bounds of the 95 percent confidence interval for the tract-level estimate of median homeowner income against the HUD AMI threshold. To illustrate potential consequences of using a community threshold test, we identified tracts where policyholders comprised a relatively large proportion of the estimated number of homeowners in the tract and that would be subsidized using the HUD AMI as the threshold.
We used the threshold of 115 percent of HUD AMI to identify tracts eligible for subsidy under the local geographic area approach, as it is the middle of the three thresholds tested in our main analysis. We limited our analysis to tracts with 50 or more policies and with fairly precise estimates of homeowner income. We then identified those tracts where either the estimated home value was relatively high or the estimated proportion of homeowners with high incomes was relatively large. From the tracts we identified, we selected examples that represented extreme values; while these tracts are not typical of tracts that would be subsidized under a community approach, they demonstrate that, as a targeting mechanism, the community approach could have unintended consequences.

Estimation of Eligible Policyholders under the Capped Premium Option

We were unable to estimate the number of subsidized policyholders who would likely be eligible for assistance under the capped premium option because implementing it would require information on the full-risk premium rates of currently subsidized policies, which FEMA does not calculate. Instead, we estimated the number of subsidized policyholders who paid less than various thresholds as of September 30, 2013. To identify a range of thresholds that could be used to develop estimates under the capped premium option, we used 1 percent as the lower threshold because HFIAA states that FEMA should strive to minimize the number of policies with annual premiums that exceed 1 percent of the total coverage. We also assessed the effect of increasing the threshold in 1 percent increments up to 5 percent. We compiled information on the amount of insurance coverage (both building and contents coverage) and the premium cost associated with subsidized policies from the NFIP policy database.
Specifically, we compared a range of percentages, from 1 percent to 5 percent, of total insurance coverage to the total subsidized premium and determined how many subsidized policyholders paid above and below each percentage limit. Estimation of Eligible Policyholders Based on the Cost of Mitigation We were unable to illustrate the effect of targeting assistance based on the cost of mitigating the primary residence of subsidized policyholders because data to determine this cost were not available. To determine the cost of mitigating the risk of damage to a property, information on its elevation—that is, the difference between the lowest elevation of the property relative to its base flood elevation—is needed, but FEMA does not currently collect this information for properties that pay subsidized rates. Estimating the Cost of Assistance To estimate the potential cost of implementing these options, we attempted to estimate the full-risk rate of subsidized properties by constructing information about flood risk that is not available in NFIP’s database. We attempted to calculate some flood risk information (i.e., elevation information) for subsidized properties located in North Carolina by obtaining two key elements: lowest floor elevation and base flood elevation—that is, the flood level relative to mean sea level at which there is a 1 percent or greater chance of flooding in a given year. We selected North Carolina because it is one of the only states that have collected elevation data in high-risk flood zones, which is necessary to determine the full-risk premium rates. Specifically, we obtained Light Detection and Ranging (LIDAR) data from the North Carolina Floodplain Mapping Program to determine if we could estimate the lowest floor elevation level of subsidized properties in the state. We also analyzed data from FEMA’s National Flood Hazard Layer database to determine if we could estimate the base flood elevation of these subsidized properties. 
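The premium-to-coverage comparison just described can be sketched as follows: for each threshold from 1 to 5 percent, count the policies whose subsidized premium falls below that share of total coverage. The policy figures here are invented for illustration.

```python
# Count subsidized policies paying below each premium-to-coverage threshold.
# Each policy is a (total_coverage, annual_premium) pair; figures are invented.

policies = [
    (250_000, 1_500),  # premium is 0.6% of coverage
    (200_000, 3_000),  # 1.5%
    (100_000, 4_500),  # 4.5%
]

def count_below_threshold(policies, pct):
    """Policies whose premium is less than pct (e.g., 0.01) of total coverage."""
    return sum(1 for coverage, premium in policies if premium < pct * coverage)

# Thresholds of 1 percent through 5 percent of total coverage
for pct in (0.01, 0.02, 0.03, 0.04, 0.05):
    print(f"{pct:.0%}: {count_below_threshold(policies, pct)} of {len(policies)}")
```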
However, both sources lacked the precision needed for purposes of our analysis. LIDAR. While NFIP defines elevation difference as the difference between lowest occupied floor elevation and base flood elevation, North Carolina’s LIDAR data measure first floor elevation, which is inconsistent with NFIP’s measure for lowest floor elevation. According to North Carolina officials, the state’s LIDAR data do not capture the lowest floor elevation because first floor elevation is measured from the outside of the structure to the bottom of the front door. Without measuring from the inside of the structure, North Carolina’s LIDAR data do not take into account a precise measurement of the lowest floor. For example, the bottom of the front door may be higher than the bottom of the lowest occupied floor, such as a furnished basement. FEMA’s National Flood Hazard Layer. FEMA’s National Flood Hazard Layer data lack the precision needed to correctly align the data to a property. The data are a geospatial file that shows the base flood elevations of areas, among other things. However, the file simply shows these elevations as lines on a map, which cannot be used to determine the base flood elevation for a particular building unless the building is intersected by a line. For all other buildings, base flood elevation would have to be estimated using the closest elevation lines. For example, if a building were located halfway between a 100-foot line and a 110-foot line, a base flood elevation of 105 feet could be estimated for the building. However, this estimate is not precise enough for purposes of our analysis. Due to the unavailability of accurate estimates for lowest floor elevation and base flood elevation to calculate full-risk rate, we were unable to estimate with any precision the potential cost of implementing certain options we identified for targeting assistance to policyholders who may experience difficulty paying full-risk rates. 
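The estimation described for buildings that fall between two elevation lines is a linear interpolation, as in the report's example of 105 feet for a building halfway between a 100-foot and a 110-foot line. The sketch below illustrates only that arithmetic; it is not FEMA's actual method, and the function name is ours.

```python
def interpolate_bfe(dist_to_lower, dist_to_upper, lower_elev, upper_elev):
    """Linearly interpolate a base flood elevation for a building between two
    contour lines, weighting each line's elevation by proximity."""
    total = dist_to_lower + dist_to_upper
    weight_upper = dist_to_lower / total  # nearer the lower line -> less weight on upper
    return lower_elev + weight_upper * (upper_elev - lower_elev)

# Halfway between a 100-foot and a 110-foot line yields 105 feet.
print(interpolate_bfe(5, 5, 100, 110))  # 105.0
```

As the report notes, an estimate of this kind carries too much uncertainty to support full-risk rate calculations.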
However, we developed a rough estimate of the cost by multiplying the estimated percentage of subsidized policyholders likely to be eligible for assistance under the various options and thresholds by estimates of forgone premium net of expenses we had previously developed. Specifically, in our December 2014 report on forgone premiums for subsidized policies, we presented three separate estimates and noted several limitations of the data used to produce them. The three estimates were based on (1) FEMA’s statement about the impact of eliminating subsidies on aggregate premiums, (2) the percentage of long-term expected losses covered by subsidized premiums, and (3) that same percentage applied to estimate forgone premiums for only the policies that remained subsidized after HFIAA. The estimated total subsidy cost of implementing certain targeting options is based on the estimated lowest and highest forgone premium net of expenses in 2013 across the three estimates calculated in our 2014 report. We applied the range to the targeting options described earlier in this report. Specifically, we applied the cost range ($575 million to $1.8 billion) to the various targeting options and their ranges of percentage of eligible policyholders. This cost estimate assumes that the difference between what subsidized policyholders would pay if they were charged full-risk rates and the subsidized rates they paid in 2013 is the same for all subsidized policyholders. Also, this estimated subsidy cost does not take into account the cost associated with implementing the selected targeting option. In addition to the limitations on the eligibility estimates discussed in this report, our 2014 report discusses potential constraints on our cost estimates. We conducted this performance audit from July 2014 to February 2016 in accordance with generally accepted government auditing standards. 
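The rough cost arithmetic above, applying an option's share of eligible policyholders to the $575 million to $1.8 billion range of forgone premium net of expenses, can be sketched as follows. The eligibility share used in the example is illustrative, and the calculation inherits the report's assumption that the per-policy subsidy is the same for all subsidized policyholders.

```python
# Rough subsidy cost: the share of policyholders eligible under a targeting
# option times the range of forgone premium net of expenses from the 2014
# report. Assumes a uniform per-policy subsidy, as the report does.

FORGONE_LOW, FORGONE_HIGH = 575e6, 1.8e9  # 2013 forgone premium net of expenses

def cost_range(eligible_share):
    """Return (low, high) estimated annual subsidy cost for a targeting option."""
    return eligible_share * FORGONE_LOW, eligible_share * FORGONE_HIGH

# Illustrative: an option under which 47 percent of policyholders are eligible
low, high = cost_range(0.47)
print(f"${low/1e6:.0f} million to ${high/1e9:.2f} billion")
```

Administrative costs of running the assistance program would come on top of any figure produced this way.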
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments In addition to the contact above, Patrick A. Ward (Assistant Director), Josephine Perez (Analyst-in-Charge), Bethany Benitez, Chloe Brown, Pamela Davidson, Chir-Jen Huang, May Lee, John Mingus, Marc Molino, Anna Maria Ortiz, Jennifer Schwartz, and Jack Wang made key contributions to this report.
As of May 30, 2015, FEMA, which administers NFIP, subsidized about 996,000 flood insurance policies. The National Flood Insurance Act of 1968 authorized these highly discounted premiums. To help strengthen NFIP's financial solvency, the Biggert-Waters Flood Insurance Reform Act of 2012 required FEMA to eliminate or phase out almost all subsidized premiums. However, affected policyholders raised concerns about the resulting rate increases. The Homeowner Flood Insurance Affordability Act of 2014 sought to address affordability concerns by repealing or altering some Biggert-Waters Act requirements. GAO was asked to identify options for policyholders who may face affordability issues if charged full-risk rate premiums. This report describes options to target assistance to policyholders, estimates of eligible policyholders and associated costs of these options, and mechanisms for delivering assistance. GAO reviewed literature on approaches for targeting and delivering assistance, interviewed 18 organizations familiar with flood insurance and officials from FEMA and other agencies, and analyzed NFIP premium data and Census income data for 2009-2013 (most recent). Options for targeting assistance to subsidized policyholders of primary residences who may experience difficulty paying full-risk rates for their National Flood Insurance Program (NFIP) policies include means testing assistance based on the income level of policyholders or geographic areas, setting premium caps, and basing assistance on the cost of mitigating the risk of damage to their homes. Currently, NFIP subsidies are tied to the property. Implementing a means-tested approach would decouple the subsidy from the property and instead attach it to the policyholder or a group of policyholders on the basis of financial need. 
All of these options involve trade-offs, and implementing any of them would present challenges because the Federal Emergency Management Agency (FEMA) would have to collect data that it does not currently collect, such as policyholders' income and flood-risk information needed to calculate full-risk rates. Although data are limited, they suggest that many policyholders who currently receive a subsidy would likely be eligible for assistance under certain targeting options GAO identified. For example, using Census data, under the means-tested approach based on individual policyholders' income and using an eligibility threshold of 80 percent of area median income, about 47 percent of subsidized policyholders, as of September 2013, would likely be eligible to receive assistance. If the eligibility threshold were increased to 140 percent of area median income, 74 percent would likely be eligible to receive assistance. Under this and other targeting options, however, it is not possible to estimate the cost of providing assistance with precision because FEMA lacks the information needed to calculate full-risk rates for currently subsidized properties. GAO recommended in July 2013 that FEMA collect information from all policyholders necessary to determine flood risk. FEMA agreed with the recommendation but has taken limited action to implement it, citing the considerable time and cost involved in obtaining the information. FEMA officials stated that they plan to continue to rely on subsidized policyholders to voluntarily obtain this information. Without proper flood-risk information, the cost of the existing subsidy or other assistance—which would be important for Congress in considering options to address affordability—cannot be determined accurately. Several mechanisms are available for delivering assistance to eligible policyholders, but each involves trade-offs among four public policy goals. 
For NFIP, these goals are (1) charging premium rates that fully reflect risk, (2) encouraging private markets to provide flood insurance, (3) encouraging broad program participation, and (4) limiting administrative costs. NFIP currently uses discounted rates to deliver subsidies to certain policyholders but could choose from a variety of delivery mechanisms, including vouchers, tax expenditures, and grants and loans, depending on policy priorities. For example, while tax expenditures do not have the stigma that some individuals may associate with government spending programs, policyholders could face cash flow challenges because they would generally need to pay the full premium before they receive the tax benefit. Finally, alternative mechanisms could increase administrative costs because FEMA would incur additional costs associated with setting up and administering a new assistance program or tax benefit, among other reasons.
Background The VA health care system was established in 1930, primarily to provide for the rehabilitation and continuing care of veterans injured during wartime service. VA developed its health care system as a direct delivery system with the government owning and operating its own health care facilities. It grew into the nation’s largest direct delivery system. Over the last 65 years, VA has seen a significant evolution in its missions. In the 1940s, a medical education mission was added to strengthen the quality of care in VA facilities and help train the nation’s health care professionals. In the 1960s, VA’s health care mission was expanded with the addition of a nursing home benefit. And, in the early 1980s, a military back-up mission was added. The types of veterans served have also evolved. VA has gradually shifted from primarily providing treatment for service-connected disabilities incurred in wartime to increasingly focusing on the treatment of low-income veterans with medical conditions unrelated to military service. Similarly, the growth of private and public health benefits programs has given veterans additional health care options, placing VA facilities in direct competition with private-sector providers. VA is in the midst of a major reorganization of its health care system. It has replaced its four large regions with 22 Veterans Integrated Service Networks (VISN), intended to shift the focus of the health care system from independent medical facilities to groups of facilities working together to provide efficient, accessible care to veterans in their service areas. The reorganization also includes plans to downsize the central office, strengthen accountability, and emphasize customer service. Under the reorganization, VA facilities are being encouraged to contract with private-sector providers when they can provide services of comparable or higher quality at a lower cost. 
VA sees the reorganization as creating “the model of a flagship health-care system for the future.” As the Veteran Population Declines and Ages, Demand for VA Services Is Shifting The veteran population, which totaled about 26.4 million in 1995, is both declining and aging. VA has estimated that between 1990 and 2010, the total veteran population will decline 26 percent. The decline will be most notable among veterans under 65 years of age—from about 20 million to 11.5 million. In contrast, over the same period, the number of veterans aged 85 and older is expected to increase from 0.2 million to 1.3 million and will make up about 6 percent of the veteran population. Coinciding with the decline and aging of the veteran population are shifts in the demand for VA health care services from inpatient hospital care to outpatient care. From 1980 to 1995, the days of hospital care provided fell from 26 million to 14.7 million, and the number of outpatient visits increased from 15.8 million to 26.5 million. (See fig. 1.) Over the same period, the average number of veterans receiving nursing home care in VA-owned facilities increased from 7,933 to 13,569, and VA’s medical care budget authority grew from about $5.8 billion to $16.2 billion. Between 1969 and 1994, VA reduced its operating hospital beds by about 50 percent, closing or converting about 50,000 to other uses. The decline in psychiatric beds was most pronounced, from about 50,000 in 1969 to about 17,300 in 1994. (See fig. 2.) In fiscal year 1995, VA closed another 2,300 beds. Further Decline in Hospital Workload Likely Several factors, such as the following, could lead to a continued decline in VA hospital workload. Veterans who have health insurance are much less likely to use VA hospitals than veterans without public or private insurance, and the number of veterans with health insurance is expected to increase even without further national or state health reforms. 
This increase is expected because almost all veterans become eligible for Medicare when they turn 65 years of age, including those unemployed or employed in jobs that do not provide health insurance at the time they turn 65. Health reforms, such as those that have been debated in the past year, that would increase the portability of insurance and place limits on coverage exclusions for preexisting conditions would also increase the number of veterans with health insurance. The nature of insurance coverage is changing. For example, increased enrollment in health maintenance organizations (HMO)—from 9 million in 1982 to 50 million in 1994—is likely to reduce the use of VA hospitals. Veterans with fee-for-service public or private health insurance often face significant out-of-pocket expenses for hospital care and have a financial incentive to use VA hospitals because VA requires little or no cost-sharing. Veterans’ financial incentives to seek hospital care from VA are largely eliminated when they join HMOs or other managed care plans because such plans require little or no cost-sharing. Proposals to expand Medicare beneficiaries’ enrollment in managed care plans could thus further decrease the use of VA hospitals. On the other hand, health reforms that would create medical savings accounts could increase demand for VA hospital care because veterans might seek free care from VA rather than spend money out of their medical savings account to pay for needed services. Finally, increased cost-sharing under fee-for-service programs could encourage veterans to use the VA system. The declining veteran population will likely lead to significant reductions in use of VA hospitals even as the acute care needs of the surviving veterans increase. 
If veterans continue to use VA hospital care at the same rate that they did in 1994—that is, if VA continues services at current levels—days of care provided in VA hospitals should decline from 15.4 million in 1994 to about 13.7 million by 2010. (See fig. 3.) Our projections are adjusted to reflect the higher use of hospital care by older veterans. Establishing preadmission certification requirements for admissions and days of care similar to those used by private health insurers could significantly reduce admissions to and days of care in VA hospitals. Currently, VA hospitals too often serve patients whose care could be more efficiently provided in alternative settings, such as outpatient clinics or nursing homes. Estimates of nonacute admissions to and days of care provided by VA hospitals often exceed 40 percent. Preadmission certification would likely reduce these admissions. VA is currently assessing the use of preadmission reviews systemwide as a method to encourage the most cost-effective, therapeutically appropriate care. The Veterans Health Administration is also implementing a performance measurement and monitoring system containing a number of measures that should reduce inappropriate hospital admissions. Several of these measures, such as setting expectations for the percentage of surgery done on an ambulatory basis at each facility and implementing network-based utilization review policies and programs, are intended to move the VA system towards efficient allocation and utilization of resources. Eligibility and Clinic Expansions Contribute to Increase in Outpatient Workload Between 1960, when outpatient treatment of nonservice-connected conditions was first authorized, and 1995, the number of outpatient visits provided by VA outpatient clinics increased from about 2 million to over 26 million. 
The increase in outpatient workload, due in part to changes in medical technology and practice that allow care previously provided only in an inpatient setting to be provided on an ambulatory basis, corresponds to expansions in VA eligibility and opening of new VA clinics. In its fiscal year 1975 annual report, VA noted the relationship between “progressive extension of legislation expanding the availability of outpatient services” and increased outpatient workload. Among the eligibility expansions occurring between 1960 and 1975 were actions to authorize (1) pre- and posthospital care for treatment of nonservice-connected conditions (1960) and (2) outpatient treatment to obviate the need for hospitalization (1973). Workload at VA outpatient clinics increased from about 2 million to 12 million visits during the 15-year period. Even with the expansions of outpatient eligibility that have occurred since 1960, most veterans are currently eligible only for hospital-related outpatient care. That is, they are eligible for those outpatient services needed to prepare them for, obviate the need for, or follow up on a hospital admission. Only about 500,000 veterans are eligible for comprehensive outpatient services. VA and others have proposed further expansions of VA outpatient eligibility that would make all veterans eligible for comprehensive outpatient services, subject to the availability of resources. Just as eligibility expansions increased outpatient workload, VA efforts to improve the accessibility of VA care resulted in increased demand. Between 1980 and 1995, the number of VA outpatient clinics increased from 222 to 565, including numerous mobile clinics that bring outpatient care closer to veterans in rural areas. Between 1980 and 1995, outpatient visits provided by VA clinics increased from 15.8 million to 26.5 million. VA has developed plans to further improve veterans’ access to VA outpatient care through creation of access points. 
VA would like to establish additional access points by the end of 1996. Aging Population Results in Increased Demand for Nursing Home Care As the nation’s large World War II and Korean War veteran populations age, their needs for nursing home and other long-term care services are increasing. Old age is often accompanied by the development of chronic health problems, such as heart disease, arthritis, and other ailments. These problems, important causes of disability among the elderly population, often result in the need for nursing home care or other long-term care services. Between 1969 and 1994, the average daily workload of VA-supported nursing homes more than tripled (from 9,030 patients to 33,405). With the veteran population continuing to age rapidly, VA faces a significant challenge in trying to meet increasing demand for nursing home care. The number of veterans 85 years of age and older is expected to increase more than eight-fold between 1990 and 2010. Over 50 percent of those over 85 years old are expected to need nursing home care, compared with about 13 percent of those 65 to 69 years old. Veterans More Likely to Have Unmet Needs for Specialized and Long-Term Care Services Than for Acute Care Services Veterans are more likely to have unmet needs for specialized and long-term care services than they are for acute hospital and outpatient care. With the aging of the veteran population and prospects for insurance reform, veterans’ unmet needs for acute care services are likely to decline in the future. Most Veterans’ Needs for Hospital and Outpatient Care Are Met Lacking insurance, people often postpone obtaining care until their conditions become more serious and require more costly medical services. Most veterans who lack insurance coverage, however, are able to obtain needed hospital and outpatient care through public programs and VA. 
Still, VA’s 1992 National Survey of Veterans estimated that about 159,000 veterans were unable to get needed hospital care in 1992 and about 288,000 were unable to obtain needed outpatient services. By far the most common reason veterans cited for not obtaining needed care was that they could not afford to pay for it. While the cost of care may have prevented the veterans from obtaining care from private-sector hospitals, it appears to be an unlikely reason for not seeking care from VA. All veterans are currently eligible for hospital care, and about 11 million are in the mandatory care category for free hospital care. Other veterans are required to make only nominal copayments. Many of the problems veterans face in obtaining health care services appear to relate to distance from a VA facility rather than their eligibility to receive those services from VA. For example, our analysis of 1992 National Survey of Veterans data estimates that fewer than half of the 159,000 veterans who did not obtain needed hospital care lived within 25 miles of a VA hospital. By comparison, we estimate that over 90 percent lived within 25 miles of a private-sector hospital. Of the estimated 288,000 veterans unable to obtain needed outpatient care during 1992, almost 70 percent lived within 5 miles of a non-VA doctor’s office or outpatient facility. As was the case with veterans unable to obtain needed hospital care, those unable to obtain needed outpatient care generally indicated that they could not afford to obtain the needed care from private providers. Only 13 percent of the veterans unable to obtain needed outpatient services reported that they lived within 5 miles of a VA facility, where they could generally have received free care. Among veterans living within 5 miles of a VA outpatient clinic, there were 131 users for every 1,000 veterans, compared with fewer than 80 users per 1,000 veterans living more than 5 miles from a VA outpatient clinic. 
Similarly, veteran users living within 5 miles of a VA outpatient clinic made over twice as many visits to VA outpatient clinics as veterans living over 25 miles from a VA clinic. Veterans Have Uneven Access to VA Services Even those veterans living near VA facilities, however, can have unmet needs because of unequal access to care. Veterans’ ability to obtain needed health care services from VA frequently depends on where they live and which VA facility they go to. VA spends resources providing services to high-income, insured veterans with no service-connected disabilities at some facilities, while low-income, uninsured veterans have needs that are not being met at other facilities. Although considerable numbers of veterans have migrated to the western states, VA resources and facilities have shifted little. As a result, facilities in the eastern states are more likely to have adequate resources to treat all veterans seeking care than are facilities in western states, which frequently are forced to ration care to some or all higher-income veterans as well as to many veterans with lower incomes. Medical centers’ varying rationing practices also result in significant inconsistencies in veterans’ access to care both among and within the centers. For example, as we reported in 1993, higher-income veterans without service-connected disabilities could receive care at 40 medical centers that did not ration care, while 22 other medical centers rationed care even to veterans with service-connected disabilities. Some centers that rationed care by either medical service or medical condition turned away lower-income veterans who needed certain types of services while caring for higher-income veterans who needed other types of services. 
Specialized Services Not Always Available Limited capacity in specialized treatment programs can result in unmet needs, as in the following examples: Specialized VA post-traumatic stress disorder programs are operating at or beyond capacity, and waiting lists exist, particularly for inpatient treatment. Although private insurance generally includes mental health benefits, private-sector providers generally lack the expertise in treating war-related stress that exists in the VA system. Inadequate numbers of beds are available in the VA system to care for homeless veterans. For example, VA had only 11 beds available in the San Francisco area to meet the needs of an estimated 2,000 to 3,300 homeless veterans. Public and private health insurance do not include extensive coverage of long-term psychiatric care. Veterans needing such services must therefore rely on state programs or the VA system to meet their needs. VA is a national leader both in research on and treatment and rehabilitation of people with spinal cord injuries. Similarly, it is a leader in programs to treat and rehabilitate the blind. Although such services are available in the private sector, the costs of such services can be catastrophic. Veterans Have Unmet Needs for Long-Term Care Services Finally, veterans frequently have unmet needs for nursing home and other long-term care services. Medicare and most private health insurance cover only short-term, post-acute nursing home and home health care. Although private long-term care insurance is a growing market, the high cost of policies places such coverage out of the reach of many veterans. As a result, most veterans must pay for long-term nursing home and home care services out of pocket until they spend down most of their income and assets and qualify for Medicaid assistance. After qualifying for Medicaid, they are required to apply almost all of their income toward the cost of their care. 
Veterans able to obtain nursing home care through VA programs can avoid the spend-down and most of the cost-sharing required to obtain service through Medicaid. VA has long had a goal of meeting the nursing home needs of 16 percent of veterans needing such care. In fiscal year 1995, VA served an estimated 9 percent of veterans needing nursing home care. Options for Retargeting Resources Toward Veterans’ Health Care Needs VA could use a number of approaches, within existing resources and legal authorities, to better target resources toward addressing the unmet health care needs of veterans. With limited resources, one approach would be to shift resources from providing services to one group of veterans to paying for expanded services for a different group of veterans. For example, resources spent in providing care for higher-income veterans without service-connected disabilities could be shifted toward improving services for veterans with service-connected disabilities and lower-income veterans whose health care needs are not being met. About 15 percent of the veterans with no service-connected disabilities who use VA medical centers have incomes that place them in the discretionary care category for both inpatient and outpatient care. Another approach could be to narrow the types of services provided—such as the provision of over-the-counter drugs—and use the resources spent on those services to pay for other higher-priority services. A third approach would be to expand the use of fee-basis care for veterans unable to obtain necessary services because VA facilities are geographically inaccessible to them. While this approach would help some veterans, current law severely restricts the use of fee-basis care by veterans with no service-connected disabilities. Such veterans are eligible only for limited diagnostic services and follow-up care after hospitalization. 
VA’s recent efforts to establish access points will improve accessibility for some veterans, but VA has not applied the outpatient priorities for care or the eligibility requirements for fee-basis care in enrolling patients and providing services. As a result, access points could divert funds that could be used to provide access to VA-supported care for high-priority veterans to pay for services for discretionary-care veterans. The concept of access points appears sound—to increase competition and therefore reduce costs of contract care. To be equitable, however, care provided through access points could be made subject to the same limitations that apply to fee-basis care for other veterans. Increased use of fee-basis care, either through fee-for-service contracting or capitation payments, is not, however, without risks. The capacity of VA’s direct delivery system serves as a control over growth in VA appropriations. Without changes in the methods used to set VA appropriations, removing the restrictions on use of fee-basis care could create significant pressure to increase VA appropriations. In other words, the result might be expanding priorities for care covered under the fee-basis program to match the priorities currently covered at VA facilities rather than reordering priorities within available resources. This expansion of priorities could occur because VA’s budget request does not provide information on the priority categories of veterans receiving care from VA. Finally, VA could ensure that its facilities use consistent methods to ration care when demand exceeds capacity. Other Countries Integrated Their Veterans’ Hospitals Into Their Health Care Systems or Shifted the Focus of Their Facilities Faced with aging and declining veteran populations, Australia, Canada, and the United Kingdom closed or converted their veterans’ hospitals to other uses. They preserved and enhanced veterans’ health benefits without maintaining their direct delivery systems. 
For example, they supplemented services covered under other health programs or gave veterans higher priorities for care or better accommodations under those programs. Veterans’ service organizations, originally skeptical about the changes, now generally support them. Declining workloads made maintaining medical expertise increasingly difficult. For example, Australia’s veterans’ hospitals had trouble retaining skilled staff and maintaining affiliation with medical schools as their patient mix became increasingly geriatric. The United Kingdom decided in 1953 that transferring its veterans’ hospitals to the country’s universal care system would both increase utilization of the former veterans’ hospitals and allow them to preserve and further develop their specialized medical expertise by expanding their patient mix. Canada, in 1963, and Australia, in 1988, made similar decisions on the basis of continuing decline in acute care use of their veterans’ hospitals and the ability and desire of veterans to obtain care in their communities. What we learned from our examination of these countries’ veterans’ health care programs was that health reforms, either nationally or within the veterans’ system, that allow veterans to choose between care in VA facilities or community facilities decrease demand for care in VA facilities. In other words, any change in our veterans’ health care system—such as the establishment of access points or other contract providers—that gives veterans greater access to community providers will likely decrease demand for that type of care in existing VA facilities. In contrast to Australia, Canada, and the United Kingdom, Finland continues to operate a direct delivery system. It, like Canada, however, shifted the emphasis of its veterans’ health care system from acute to long-term care services to meet the changing needs of an aging veteran population. By 1993, it had converted almost half of the beds in its primary hospital to nursing home care. 
Both Canada and Finland also developed home care programs to help veterans maintain their independence as long as possible.

Approaches for Preserving and Alternatives to Preserving the Direct Delivery System

Most of VA’s $16.6 billion health care budget goes to maintain its direct delivery infrastructure. It is invested in buildings, staff, land, and equipment. As the Congress deliberates the future of veterans’ health care, it will inevitably face the question of whether to act to preserve health care benefits or the direct delivery system or both, as envisioned under VA’s planned reorganization.

Preserving Direct Delivery

Approaches for preserving the direct delivery system include (1) increasing VA’s market share of the veteran population; (2) allowing VA to use its excess hospital capacity to serve veterans’ dependents and other nonveterans; and (3) converting VA hospitals to other uses, such as meeting the increasing demands for VA-supported nursing home care.

Increase VA’s Market Share of Veterans

One approach for preserving the direct delivery system would be for the VA system to increase its market share of the veteran population. About 80 percent of the veteran population has never used VA health care services. Bringing more of those veterans into the VA system could increase demand for VA hospital care. Decreasing veterans’ out-of-pocket costs does not appear to be a viable strategy for attracting new veteran users. All veterans are currently eligible for medically necessary VA hospital care; about 9 to 11 million are eligible with no out-of-pocket costs. The remaining veterans would incur some cost-sharing if they sought care from VA facilities, but generally much less than they would incur in seeking care from private hospitals using their Medicare or private insurance. Establishing new access points could attract new users, depending in part on other health care providers’ willingness to contract with VA hospitals. This approach to filling VA hospital beds, however, would require significant budget increases if new access points modestly increase VA’s market share of hospital and outpatient users.
For example, VA currently serves about 2.6 million of our nation’s 26 million veterans in a given year and 4 to 5 million veterans over a 3-year period. About 40 percent of the 5,000 veterans enrolled at VA’s 12 new access points had not received VA care in the 3 years before they enrolled. Most of the new users we interviewed had learned about the access points through conversations with other veterans, friends, and relatives or from television, newspapers, and radio. Expanding eligibility. Expanding eligibility for outpatient care could also attract new users to the VA system. Although such users would be brought into the system through expanded outpatient eligibility, many of the new users would likely use VA hospitals for inpatient care. A 1992 VA eligibility reform task force estimated that making all veterans eligible for comprehensive VA health care services could triple demand for VA hospital care.

Expand Care for Nonveterans

A second approach for increasing the workload of VA hospitals would be to expand VA’s authority to provide care to veterans’ dependents or other nonveterans. Currently, VA has limited authority to treat nonveterans, primarily providing such services through sharing agreements with military facilities and VA’s medical school affiliates. Allowing VA facilities to treat more nonveterans could increase use of VA hospitals and broaden VA’s patient mix, strengthening VA’s medical education and research missions. Without better systems for determining the cost of care, however, such an approach could result in funds appropriated for veterans’ health care being used to pay for care for nonveterans. In addition, VA would be expanding the areas in which it is in direct competition with private-sector hospitals in the surrounding communities. Essentially, every nonveteran brought into a VA hospital is a patient taken away from a private-sector hospital.
Thus, expanding the government’s role in providing care to nonveterans could further jeopardize the fiscal viability of private-sector hospitals. In rural communities without a public or private hospital, however, opening VA hospitals to nonveterans might improve the availability of health care services for the entire community and, at the same time, help preserve the direct delivery system.

Convert VA Hospitals to Nursing Homes or Other Uses

A third approach to preserving the direct delivery system would be to convert VA hospitals to provide nursing home or other types of care. Although converting existing space to provide nursing home care is often cheaper than building a new facility, converting hospital beds to other uses would increase costs. Construction funds would be needed to pay for the conversions, and medical care funds would be needed to pay for the new nursing home users treated in what had been empty beds. VA could, however, serve more veterans with available funds if it were authorized to (1) adopt the copayment practices used by state veterans’ homes or (2) establish an estate recovery program patterned after those operated by increasing numbers of state Medicaid programs. Unlike Medicaid and most state veterans’ homes, the VA nursing home program has no spend-down requirements and minimal cost-sharing. Only higher-income veterans with nonservice-connected disabilities contribute toward the cost of their care, making copayments that average $12 a day.

Alternatives to Preserving the Acute Care Hospitals

Actions taken by Australia, Canada, and the United Kingdom suggest that veterans’ benefits can be preserved and even enhanced without preserving the system’s acute care hospitals.
Alternatives to maintaining the current direct delivery system include (1) establishing a VA-operated health financing system to purchase care from other public and private providers (or expanding an existing program); (2) including veterans under an existing health benefits program, such as Medicare, the Federal Employees Health Benefits Program, or TRICARE; and (3) issuing vouchers to enable veterans to purchase private health insurance. Under any of these approaches, many existing VA facilities might be closed, converted to other uses, or transferred to the community.

Purchase Care From Public and Private Providers

VA already purchases health care services from public and private-sector providers in many ways. For example, it purchases services from its medical school affiliates and other government facilities through sharing agreements; it purchases care for eligible veterans geographically remote from VA facilities directly from private physicians through the fee-basis program; it contracts with groups of public or private-sector providers on a capitation basis to provide primary care services to veterans; and it operates a health financing program, the Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA), to purchase care for survivors and dependents of certain veterans. Expanding or combining these programs into a single health financing program could increase VA’s purchasing power in the health care marketplace, allowing it to purchase health care services at lower prices. For example, expansion of capitation funding could shift risks for controlling veterans’ health care costs from the government to private providers contracting with VA. And increasing the use of private-sector providers within the VA health care system could retain the focus on veterans’ health care needs that might be lost by merging veterans’ health care with another program.
Include Veterans Under an Existing Program

On the other hand, additional economies would be likely to be achieved by merging the veterans’ health program with one or more of the existing federal health programs. For example, Medicare has many years of experience in negotiating and monitoring contracts with managed care plans and fee-for-service providers to ensure that the interests of both beneficiaries and the government are protected. Although the Health Care Financing Administration continues to face problems in identifying and eliminating fraud and abuse, it nonetheless has more experience than VA in wide-scale contracting. The Department of Defense’s TRICARE program offers beneficiaries three options; its enrollment and copayment features provide financial incentives for beneficiaries to select TRICARE Prime, the most highly managed of the three options. Under an agreement between VA and DOD, VA facilities can apply to become providers under TRICARE Prime. To date, no VA facilities are participating in TRICARE other than as fee-for-service providers. In many respects, VA’s restructuring efforts parallel DOD’s efforts in establishing TRICARE. Expanding TRICARE to include veterans’ health benefits and VA facilities and physicians might further expand health care accessibility and options for beneficiaries of both programs. Finally, veterans could be allowed to enroll in the Federal Employees Health Benefits program, which provides federal employees and annuitants and their dependents a choice of private health insurance programs, including traditional fee-for-service plans, preferred provider plans, and HMOs. Enrollment costs and cost-sharing vary widely, depending on the plan selected.

Issue Vouchers to Buy Private Insurance

Of the various health care options, offering veterans vouchers to use in purchasing health care services would give veterans the maximum choice.
Acting individually to purchase care or insurance, veterans would probably be unable to obtain the same prices on health care services and policies that they could obtain through the volume purchasing advantages of the federal health care programs. For example, individual health insurance policies are generally much more expensive than comparable coverage obtained through a group policy such as those available under the Federal Employees Health Benefits Program. Any of the options for increasing the use of private-sector providers would address the primary reasons many veterans give for not using VA care: perceptions of poor quality and customer service and limited accessibility. As a result, these options would be likely to generate new demand. Such new demand could be expected to create upward pressure on VA appropriations unless actions were taken under current budget rules to offset new costs. The new options could, however, be structured to supplement, rather than duplicate, veterans’ coverage under other health programs. For example, eligibility for veterans with nonservice-connected disabilities might be limited to those without other public or private insurance. Benefits for other veterans might be limited to services not typically well covered under other public and private insurance, such as dental and vision care and long-term care services.

Conclusions

The VA health care system is at a crossroads—particularly in view of the dramatic changes occurring throughout the nation’s health care system. These changes raise many important questions concerning the system. Should VA hospitals be opened to veterans’ dependents or other nonveterans as a way of preserving the system? Should veterans be given additional incentives to use VA facilities? Should some of VA’s acute care hospitals be closed, converted to other uses, or transferred to states or local communities?
Should additional VA hospitals be constructed when use of existing inpatient hospital capacity is declining both in VA and in the private sector? Should VA remain primarily a direct provider of veterans’ health care? Should VA become primarily a purchaser of health care from other providers for veterans? Decisions regarding these and other questions will have far-reaching effects on veterans, taxpayers, and private providers. We believe that attention is needed to position VA to ensure that veterans receive high-quality health care in the most cost-efficient manner, regardless of whether that care is provided through VA facilities or through arrangements with private-sector providers. The declining veteran population in the United States, in concert with the increased availability of community-based care, makes preserving the current acute care workload of existing VA health care facilities exceedingly difficult. VA will have to attract an ever-increasing proportion of the veteran population if it is to keep its acute care facilities open. Other countries have successfully made the transition from direct providers to financiers of veterans’ health care without losing the special status of veterans. The cost of maintaining VA’s direct delivery infrastructure limits VA’s ability to ensure similarly situated veterans equal access to VA health care, and funds that could be used to expand the use of fee-basis care are used instead to pay for care provided to veterans in the discretionary care category at VA hospitals and outpatient clinics. Mr. Chairman, this concludes my prepared statement. We will be happy to answer any questions that you or other Members of the Subcommittee may have.

Contributors

For more information on this testimony, please call Jim Linz, Assistant Director, at (202) 512-7110 or Paul Reynolds, Assistant Director, at (202) 512-7109.
Related GAO Products

VA Health Care: Efforts to Improve Veterans’ Access to Primary Care Services (GAO/T-HEHS-96-134, Apr. 24, 1996).
VA Health Care: Approaches for Developing Budget-Neutral Eligibility Reform (GAO/T-HEHS-96-107, Mar. 20, 1996).
VA Health Care: Opportunities to Increase Efficiency and Reduce Resource Needs (GAO/T-HEHS-96-99, Mar. 8, 1996).
VA Health Care: Challenges and Options for the Future (GAO/T-HEHS-95-147, May 9, 1995).
VA Health Care: Retargeting Needed to Better Meet Veterans’ Changing Needs (GAO/HEHS-95-39, Apr. 21, 1995).
VA Health Care: Barriers to VA Managed Care (GAO/HEHS-95-84R, Apr. 20, 1995).
Veterans’ Health Care: Veterans’ Perceptions of VA Services and VA’s Role in Health Reform (GAO/HEHS-95-14, Dec. 23, 1994).
Veterans’ Health Care: Use of VA Services by Medicare-Eligible Veterans (GAO/HEHS-95-13, Oct. 24, 1994).
Veterans’ Health Care: Implications of Other Countries’ Reforms for the United States (GAO/HEHS-94-210BR, Sept. 27, 1994).
Veterans’ Health Care: Efforts to Make VA Competitive May Create Significant Risks (GAO/T-HEHS-94-197, June 29, 1994).
Veterans’ Health Care: Most Care Provided Through Non-VA Programs (GAO/HEHS-94-104BR, Apr. 25, 1994).
VA Health Care: A Profile of Veterans Using VA Medical Centers in 1991 (GAO/HEHS-94-113FS, Mar. 29, 1994).
VA Health Care: Restructuring Ambulatory Care System Would Improve Service to Veterans (GAO/HRD-94-4, Oct. 15, 1993).
VA Health Care: Comparison of VA Benefits With Other Public and Private Programs (GAO/HRD-93-94, July 29, 1993).
GAO discussed the future of the Department of Veterans Affairs' (VA) health care system. GAO noted that: (1) VA hospitals' workload has decreased 56 percent during the last 25 years and will probably decrease further as more veterans die and delivery settings and health care plans change; (2) the demand for nursing home care has increased for veterans 85 years of age and older; (3) VA and other public and private health benefit programs cannot meet all veterans' health care needs, notably for specialized and long-term care; (4) to meet such needs, VA could reduce services to certain veterans and use those funds to purchase private-sector health care services for other eligible veterans or increase the availability of specialized care; (5) VA could increase veterans' access to care by improving its facility resource allocations and the consistency of its coverage decisions; (6) other countries have closed veteran hospitals and integrated veterans' health care into their general health care systems; (7) VA could increase hospital workloads by attracting more veterans or extending coverage to veterans' dependents or nonveterans on a reimbursable basis; (8) converting VA hospitals to long-term care facilities is feasible, but operating costs would be higher than the cost of purchasing private-sector nursing home care unless cost-sharing arrangements are included; and (9) alternatives to the VA direct delivery system include purchasing more services directly from the private sector, issuing vouchers for private insurance, and covering veterans under other existing federal health benefit programs.
Background

Several federal legislative provisions support preparation for and response to disasters. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act) primarily establishes the programs and processes for the federal government to provide major disaster and emergency assistance to states, local governments, tribal nations, individuals, and qualified private nonprofit organizations. FEMA has responsibility for administering the provisions of the Stafford Act, and the Act provides the FEMA Administrator with the authority to prepare federal response plans and programs. In April 1992, FEMA issued a Federal Response Plan, which outlined how the federal government implements the Stafford Act. The Federal Response Plan described, among other things, the response and recovery responsibilities of each federal department and agency for saving lives and protecting public health and safety during an emergency or major disaster. After the events of September 11, 2001, and with the passage of the Homeland Security Act in November 2002, FEMA became part of the newly formed Department of Homeland Security (DHS). Under the Act, FEMA retained its authority to administer the provisions of the Stafford Act as well as its designation as the lead agency for the Federal Response Plan. The Homeland Security Act required DHS to consolidate existing federal government emergency response plans into a single, integrated, and coordinated national response plan. In December 2004, DHS issued the 2004 Plan to integrate the federal government’s domestic prevention, preparedness, response, and recovery plans into one plan that addressed all disaster situations, whether due to nature, terrorism, or other man-made activities. The 2004 Plan incorporated or superseded other federal interagency plans such as the Federal Response Plan and the Federal Radiological Emergency Response Plan.
In August 2005, Hurricane Katrina and, shortly after, hurricanes Wilma and Rita revealed a number of limitations in the 2004 Plan. Beginning in February 2006, reports by the House Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina, the Senate Homeland Security and Governmental Affairs Committee, the White House Homeland Security Council, the DHS Inspector General, and DHS and FEMA all identified a variety of failures and some strengths in the preparations for, response to, and initial recovery from Hurricane Katrina. After reviewing these reports, DHS concluded that the 2004 Plan required revision. In May 2006, DHS released immediate modifications to the 2004 Plan pending a more comprehensive review. In June 2006, Congress passed the Emergency Supplemental Appropriations Act for Defense, the Global War on Terror, and Hurricane Recovery, 2006. In the conference report accompanying this act, the conferees recommended that FEMA apply $3 million of its Preparedness, Mitigation, Response, and Recovery appropriation to immediately review and revise the 2004 Plan as well as its companion document, the National Incident Management System, which provides standard command and management structures that apply to response activities. On October 4, 2006, the Post-Katrina Act was enacted. This act, among other things, made certain organizational changes within DHS to consolidate emergency preparedness and emergency response functions within FEMA, required that DHS maintain FEMA as a distinct entity within the department, and designated the FEMA Administrator—the new title of the official who will lead the agency—as the principal advisor to the President, the Homeland Security Council, and the Secretary for all matters relating to emergency management. Most of the organizational changes, such as the transfer of various functions from DHS’s Directorate of Preparedness to FEMA, became effective as of March 31, 2007.
Others, such as the increase in the organizational autonomy for FEMA and the establishment of the National Integration Center, became effective upon enactment. The Post-Katrina Act specified that the FEMA Administrator, acting through the Center, “shall ensure ongoing management and maintenance of the…National Response Plan,” including periodic review and revision. The Post-Katrina Act also directed the Secretary to establish a National Advisory Council (NAC) by December 2006 to, among other things, incorporate state, local, and tribal government and private sector input in the development and revision of the 2004 Plan. As established by the Post-Katrina Act, the NAC is intended to be an ongoing advisory council that draws upon individuals with a broad body of expertise and geographic and substantive diversity. The Act requires the NAC to advise the Administrator on a variety of emergency management issues across the national preparedness spectrum, including the 2004 Plan. In January 2008, DHS issued the 2008 NRF, the product of the revision of the 2004 Plan. The NRF became effective in March 2008 and retained the basic structure of the 2004 Plan. For example, like the 2004 Plan, the NRF’s core document describes the doctrine that guides national response actions and the roles and responsibilities of officials and entities involved in response efforts. Further, the NRF also includes Emergency Support and Incident Annexes. However, in contrast to the 2004 Plan, FEMA plans to include four partner guides to the NRF that describe key roles and actions for local, tribal, state, federal and private sector entities involved in response activities. 
DHS Did Not Collaborate with Non-Federal Stakeholders As Fully As Planned or Required in Developing the NRF

While DHS included non-federal stakeholders at the initial and final stages in the process of revising the December 2004 Plan, it did not collaborate with them as fully as planned in its revision work plan or as required by the Post-Katrina Act. DHS based the work plan, which was approved by a White House Homeland Security Council–chaired policy committee, on a section in the 2004 Plan that provided procedural guidance for managing revisions of the document. DHS managed the initial stages of the revision process according to the work plan. However, DHS deviated from its work plan after the first draft was completed in April 2007. Instead of widely disseminating the first draft to all stakeholders, including non-federal stakeholders, for comment and modification, DHS retained the draft to make modifications that it felt were necessary and conducted an internal, federal review of the draft for a 5-month period. DHS delayed the release of the April 2007 draft and provided limited communication to state and local stakeholders on the status of the review until after releasing the draft for public comment in September 2007. In addition, DHS did not manage the revision process in accordance with the Post-Katrina Act’s provision that DHS establish FEMA’s NAC by December 2006 and incorporate the NAC’s non-federal input into the revision.

DHS Created a Work Plan to Revise the 2004 National Response Plan, Specifying Revision Issues, Entities and Tasks, and a Time Line for Completing the Revision

Hurricane Katrina hit the Gulf Coast in August 2005, and the problems revealed by the nation’s response prompted DHS to revise the 2004 Plan. In May 2006, DHS issued an official Notice of Change to the 2004 Plan to incorporate lessons learned from the response to hurricanes Katrina, Wilma, and Rita as well as to incorporate organizational changes within DHS.
This Notice of Change—which was distributed to all signatories of the 2004 Plan, DHS headquarters and regional offices, and state emergency management and homeland security offices—noted that DHS intended to initiate a comprehensive stakeholder review of the 2004 Plan in the fall of 2006. Accordingly, DHS developed a work plan to manage the revision of the 2004 Plan that established (1) the issues that were to be the focus of the revision process, (2) the entities to be created to implement the process and the tasks involved, and (3) a timeline for completing the revision process and issuing the final document. DHS based its work plan for revising the 2004 Plan on guidance found in the Plan itself. Anticipating that modifications or updates would arise when needed, the 2004 Plan included a section specifying how DHS would conduct interim changes and full revisions, listing the time frames and circumstances—within the first year and every 4 years, or more frequently if the Secretary deems necessary, to incorporate new presidential directives, legislative changes, and procedural changes based on lessons learned from exercises and actual events. The Domestic Readiness Group, an interagency group that coordinates preparedness and response policy and is chaired by staff of the White House Homeland Security Council, approved DHS’s work plan in September 2006. For the revision process, the Domestic Readiness Group was to provide strategic policy coordination, be a mechanism for vetting the revision at the federal level, and was to resolve conflicting policy issues. The work plan contained an initial list of 14 revision issues. According to FEMA officials, they compiled these issues by reviewing Hurricane Katrina after-action and lessons-learned reports from the White House, Congress, GAO, and the DHS Inspector General and identifying common issues that were raised in multiple reports. 
According to the work plan, DHS was to conduct meetings with selected stakeholders to review the initial list and identify other issues to be considered during the revision process. The result of these meetings was to be a finalized list of revision issues that would serve as the starting point for revising the 2004 Plan. Based on the 2004 Plan, DHS created three entities to facilitate the revision process: the Steering Committee, the Writing Team, and 12 Work Groups. DHS provided a copy of the approved work plan to all participants. The Steering Committee was to conduct the day-to-day management and oversight of the 2004 Plan revision process, which included managing the Work Groups and overseeing the Writing Team. The work plan assigned overall management of the 2004 Plan rewrite to the Writing Team, which was to assign issues to the Work Groups and track the Work Groups’ progress on resolving the assigned issues. The Work Groups, which were chaired by designated co-leaders, were to examine the issues received from the Writing Team and determine if existing language in the 2004 Plan adequately addressed the issues. If the Work Groups determined that current language in the 2004 Plan did not adequately address the issue, they were required to provide recommendations to the Writing Team on how the issues should be addressed. Figure 2 shows the relationship between the entities involved in the revision process. The revision schedule in the work plan was to begin in December 2006 with a goal to complete the revision process by June 2007. As a first step in the plan, the Writing Team was to provide the Work Groups with writing assignments. Once the Work Groups completed their writing assignments, the Writing Team was to review their recommendations and submit a draft of the revised NRF to the Steering Committee for its review and approval. 
The Steering Committee was to release the first draft of the revised NRF for stakeholder comment by the end of January 2007 with an approximate 30-day review period. According to the work plan, the Steering Committee, Writing Team, and Work Groups would review comments on this first draft, make any needed modifications, and release a second draft at the end of March 2007 for the final of two 30-day comment periods. Per the work plan, these two comment periods would ensure wide dissemination of the product to all stakeholders, including federal agencies, state and local governments, and major professional associations. The work plan schedule also included a 2-month internal, federal review process to take place beginning in May 2007, after which DHS would provide the final draft for approval to the Domestic Readiness Group and the signatories of the 2004 Plan, with the final issuance of the revised 2004 Plan targeted for June 2007. See figure 3 for the proposed timeline for the revision process.

DHS Included Non-Federal Stakeholders at the Beginning of the Revision Process

DHS included non-federal stakeholders in the early stages of the 2004 Plan revision process in accordance with the work plan. For example, in October 2006, DHS hosted a meeting with approximately 90 non-federal stakeholders where DHS sought feedback on the 14 revision issues from participants using structured breakout groups. At this meeting, FEMA reported that non-federal stakeholders identified the need for enhancements to the 2004 Plan to further describe coordination processes with the private sector and volunteer organizations. DHS held a similar meeting with federal stakeholders in November 2006. According to DHS, it modified the scope of some of the 14 revision issues and added three additional issues. (See app. II for a listing of the 17 revision issues.) DHS assigned non-federal stakeholders to serve as members of the Steering Committee and the Work Groups.
Although the work plan called for the engagement of all levels of stakeholders in the revision process and described the Steering Committee and Work Groups, it did not specify the composition of the Work Groups but stated that one non-federal stakeholder would serve on the Steering Committee. In the spirit of the plan, DHS selected non-federal officials to serve on the Steering Committee and Work Groups. Of the 32 members on the Steering Committee, six members, or 19 percent, were non-federal officials, including representatives from state and local government emergency management associations as well as a local fire department and police associations. According to a FEMA official, the Steering Committee, led by FEMA and DHS co-chairs, generally met on a weekly basis via teleconferences throughout the revision process. Of the approximately 710 members who served on the 12 Work Groups, 224 officials, or 32 percent, were non-federal officials, including 3 of the 27 Work Group co-leaders. These non-federal officials included representatives from state and local emergency management agencies and tribal governments as well as officials from the fire, law enforcement, and public health sectors. See figure 4 for the composition of the 12 Work Groups’ members by level of government, nongovernmental organization, and private sector. See appendix III for a listing of the 12 Work Groups and a table showing the occupational demographics of the non-federal stakeholders who served on the Work Groups. The Writing Team, which consisted of 11 federal officials and private contractors for administrative support, did not include any non-federal stakeholders. DHS stated that it invited one non-federal stakeholder to serve on the team but was not successful in securing that person’s participation. The 12 Work Groups met in January and February 2007.
During that time, and in accordance with the work plan, the Work Groups met to address the issues assigned to them by the Writing Team. Most Work Groups addressed their issues by submitting recommended language changes to the 2004 Plan, which generally consisted of inserting new language or clarifying existing language. The Work Groups supported a recommended language change by providing the rationale for such a change. For example, the Writing Team tasked the Roles and Responsibilities Work Group with clarifying and strengthening the role of state governments in the 2004 Plan. One recommended language change suggested by this Work Group was to describe the state government’s role in the coordination of resources through the Emergency Management Assistance Compact, an interstate mutual aid compact that provides a legal structure through which affected states may request assistance from other states during a disaster. All Work Group recommendations were due to the Writing Team by the middle of February 2007. Although the work plan provided for the Work Groups’ continued involvement after submitting their recommendations, this did not occur.

DHS Departed from the Work Plan by Conducting an Internal Federal Review Rather Than Providing a Draft to Non-Federal Stakeholders for Comment

On March 13, 2007, DHS officials e-mailed stakeholders that the release of the first revision draft for the first 30-day comment period was being delayed. According to the message, DHS still planned to release the draft within the next several weeks and issue a final document by June 1, 2007. The message noted that once an updated timeline was approved, DHS would share the dates with the stakeholders. According to FEMA officials, the first draft of the revised 2004 Plan was completed in April 2007 and incorporated many of the Work Groups’ recommendations.
However, rather than sending this first draft to stakeholders for comment, DHS conducted its internal, federal review of the draft document for approximately 5 months, until September 2007. FEMA officials said that DHS did not release this April 2007 draft for comment because the draft required further modifications that DHS considered necessary. An April 11, 2007, notice subsequently posted on DHS’s Web site described the status of the process and its plans to further revise the draft for comment. “As the NRP revision process unfolded, it became apparent that some important issues were more complex than we originally thought and require national-level policy decisions. We also came to the realization that creating a more user-friendly document that clearly addressed the roles and responsibilities of stakeholders and incident management structures would require substantial format changes to the NRP… An updated timeline has not been determined but we will share one with you quickly.” FEMA officials said that the length of the review and approval process, about 3 months longer than planned, was unpredictable and took longer than they had expected. DHS did not modify or update the work plan to reflect this deviation from the approved revision process or propose how the revision process would now be completed. Certain non-federal stakeholders we interviewed who served on the Steering Committee and as co-leaders on the Work Groups reported receiving occasional or no communication from DHS on the decision not to release the first draft for comment or on how the revision process would be completed during this internal, federal review. FEMA’s Deputy Administrator acknowledged that the federal government should have done a better job of communicating the status of the draft and the revision process to non-federal stakeholders while the document was undergoing the internal, federal review.
During this internal, federal review, DHS and FEMA officials continued to revise the April 2007 draft. For example, FEMA officials said that they added a chapter to explain the need for all levels of government to plan for preparedness and response actions and additional language to clarify the role of state and local governments during disaster response. At this point in the process, around August 2007, DHS’s Office of the Deputy Secretary decided to release a revised draft just to the Steering Committee and the Domestic Readiness Group for comment. Writing Team officials assumed that the Deputy Secretary would make the final decision on whether to incorporate the comments received while staff from his office would be responsible for completing any further edits. A draft of this document, dated July 2007, was leaked to the press in August 2007. During a September 11, 2007, hearing before the House Transportation and Infrastructure Committee, officials representing state and local emergency management associations expressed their concerns that the July 2007 leaked draft had changed significantly from the April 2007 draft. 
The government affairs committee chair of the International Association of Emergency Managers testified, “The document we saw bore no resemblance to what we had discussed so extensively with FEMA and other stakeholders in the December 2006 through February 2007 timeline.” Additionally, the National Emergency Management Association representative, who served on the Steering Committee, expressed his concern that his association had been effectively shut out of the process, testifying that the collaborative process in rewriting the 2004 Plan “broke down…with no stakeholder input, working group involvement, or steering committee visibility.”

After the Internal, Federal Review, DHS Provided All Stakeholders an Opportunity to Comment before Final Publication and Considered All Comments in Finalizing the New Framework

After the approximately 5-month internal, federal review period, DHS released a draft of the newly renamed National Response Framework for public comment on September 10, 2007. However, as we stated earlier, the original work plan called for DHS to provide stakeholders with two 30-day public comment periods before the internal, federal review; after the review, DHS was to publish the revised document without further comment by stakeholders. The public comment period starting on September 10 allowed both federal and non-federal stakeholders to provide their reactions to the changes made during the internal, federal review process. FEMA officials said they conducted this unplanned public comment period to address the work plan’s requirement that the draft NRF be widely disseminated for all stakeholders to review. FEMA provided a 40-day public comment period for the NRF core document. FEMA received 3,318 comments on the core NRF. The Writing Team led the adjudication—review, analysis, and resolution—of the comments received during the public comment period.
The Writing Team examined each comment, made an initial disposition recommendation—accepted, modified, rejected, or noted—and forwarded that recommendation to the FEMA leadership and the Domestic Readiness Group for review. In addition, FEMA posted a spreadsheet on www.regulations.gov that included, among other things, the comments made by non-federal stakeholders and the final disposition FEMA assigned to each of those comments. This allowed these stakeholders to see how FEMA did or did not incorporate their comments into the final NRF document. The Work Groups and Steering Committee, both of which contained non-federal stakeholders, were not involved in adjudicating the public comments, although this was called for by the work plan. A FEMA official said that the agency tried to recruit a non-federal stakeholder to serve on the Writing Team, but that its efforts were unsuccessful.

DHS’s Establishment of the National Advisory Council Did Not Meet Post-Katrina Act Deadlines, Which Also Limited Collaboration with Non-Federal Stakeholders

The Post-Katrina Act required the DHS Secretary to establish a National Advisory Council (NAC) by December 2006 to advise the FEMA Administrator on all aspects of emergency management. Among its specific responsibilities, the NAC was to incorporate input from state, local, and tribal governments as well as the private sector in the revision of the 2004 Plan. The Act stated that the membership of the NAC should represent a geographic and substantive cross-section of officials, emergency managers, and emergency response providers, such as law enforcement, fire service, health scientists, and elected officials. However, DHS neither amended its approved September 2006 work plan for revising the 2004 Plan to incorporate the NAC nor established the NAC in time for the Council to incorporate non-federal stakeholder input into the revision of the 2004 Plan, as directed by the October 2006 Post-Katrina Act.
According to a FEMA official, DHS did not amend the work plan to incorporate the NAC because of the uncertainty surrounding the time it would take to establish the NAC. The official said FEMA expected that establishing the NAC would take more time than the Post-Katrina Act allowed because FEMA wanted to ensure that the NAC’s membership complied with the requirements contained in the Post-Katrina Act while also providing adequate time to announce the NAC’s creation, solicit applications for membership, and review and select applicants for membership. FEMA announced the membership of the NAC in June 2007, 6 months after the Post-Katrina Act deadline, and the NAC did not hold its inaugural meeting until October 22, 2007, the last day of the public comment period for the base NRF. According to the FEMA Administrator, it was more important for the agency to invest the time needed to review hundreds of applications and create a high quality body of advisors than to rush the process to meet the 60-day statutory deadline for establishing the NAC. As a result, the NAC’s only involvement in the NRF revision process occurred when FEMA provided it with a copy of a draft in December 2007, 2 months after the public comment period closed. According to the NAC chairman, the NAC gathered and consolidated comments from individual members and provided these comments to the FEMA Administrator approximately one month before FEMA published the NRF in January 2008. The chairman noted that these comments were from individual members and did not reflect the official comments of the NAC as a whole. For the next NRF revision, the chairman stated that he expected the NAC to be actively involved with FEMA throughout the entire revision process. 
For example, he suggested that the NAC could have a role in the adjudication of public comments by representing non-federal stakeholders during the adjudication process to ensure FEMA is aware of issues that are critically important to state and local governments. The NAC is currently exploring its role in reviewing and implementing the 2008 NRF. For example, at the NAC’s February 2008 meeting the NAC Chairman approved a standing committee on the NRF that may focus on actions that can help FEMA implement and train stakeholders on the NRF. The NAC filed a charter on February 6, 2007, but the charter reflects only the NAC’s broad array of statutory responsibilities and does not detail any specific responsibilities the NAC would undertake relative to the NRF revision process. See figure 5 for a comparison of DHS’s actual revision process with its proposed process. The late establishment of the NAC also hindered FEMA from fully collaborating with non-federal stakeholders who were involved in the revision process established by the approved work plan. In particular, two non-federal Steering Committee members stated that after the August 2007 leak of the draft NRF, FEMA stopped sharing drafts with non-federal officials. FEMA officials said that the reason for this decision was that FEMA had yet to establish the NAC, its official advisory committee. FEMA officials said that the absence of an official advisory committee raised fairness concerns about which members of the non-federal community should be allowed to provide input before the public comment period. As a result, FEMA stopped sharing pre-decisional drafts with non-federal members of the Steering Committee because FEMA did not plan to provide the same opportunity to other non-federal stakeholders until the public comment period.
FEMA and the Post-Katrina Act Have Recognized the Importance of Including Non-Federal Stakeholders in Developing National Response Doctrine, but FEMA Lacks Guidance and Procedures for Future NRF Revisions

While FEMA has recognized the importance of partnering with non-federal stakeholders to achieve the nation’s emergency management goals, both in congressional testimonies and in its January 2008 strategic plan, FEMA has not yet developed guidance and procedures for how future revisions of the NRF will be managed or how the newly established National Advisory Council will be integrated into the revision process in accordance with the Post-Katrina Act. Standards for Internal Control in the Federal Government state that management guidance, policies, and procedures are an integral part of any agency’s planning for, and achieving, effective results. Developing such policies and procedures for how the NRF will be revised in the future and how FEMA will integrate the NAC and other non-federal stakeholders in the process is essential for helping to ensure that FEMA attains its goal of partnering with non-federal stakeholders to help achieve the nation’s emergency management goals.

FEMA and the Post-Katrina Act Stress Partnership and Communication with Non-Federal Stakeholders in Achieving the Nation’s Emergency Management Goals

FEMA has recognized the importance of including the input of non-federal stakeholders to help achieve the nation’s emergency management goals. For example, in November 2006, the FEMA Administrator outlined his vision for a “New FEMA,” asserting FEMA’s dedication to partnering with all states and the private sector because of FEMA’s reliance on its partners to accomplish the national emergency response objectives. More recently, in congressional testimonies the FEMA Administrator has reaffirmed the need for FEMA to partner with both federal and non-federal stakeholders.
In addition, one objective in FEMA’s Strategic Plan for 2008-2013 is to engage public and private stakeholders in developing and communicating clear national doctrine and policy. To achieve this objective, the Strategic Plan identifies the need to engage stakeholders early and often in the process of developing national doctrine. This is in accordance with internal control standards for the federal government, which state that information should be communicated to those who need it, in a form and within a time frame that enable them to carry out the responsibilities through which an agency achieves its objectives. For example, management should ensure there are adequate means of communicating with and obtaining information from external stakeholders who may have a significant impact on the agency’s achieving its goals. In October 2005, we also reported that frequent communication among collaborating organizations and stakeholders is a means to facilitate working across boundaries, prevent misunderstanding, and achieve agency objectives. Frequent communication is one of a number of practices that enhance and sustain collaboration. Recognizing the importance of collaboration, the Post-Katrina Act requires that the FEMA Administrator partner with non-federal stakeholders from state, local, and tribal governments, the private sector, and nongovernmental organizations to build a national system of emergency management that can effectively and efficiently utilize the full measure of the nation’s resources to respond to all disasters, including catastrophic incidents and acts of terrorism. Specifically, the Post-Katrina Act directs the FEMA Administrator, through the National Integration Center, to periodically review and revise the National Response Plan and any successor to such plan and, as discussed above, to establish the NAC to incorporate non-federal stakeholder input in the revision and development of the Plan, among other things.
The Post-Katrina Act further directs the FEMA Administrator to appoint council members who represent a geographic and substantive cross section of officials, emergency managers, and emergency response providers from the non-federal community. The FEMA Administrator’s statements, the agency’s latest strategic plan, and the Post-Katrina Act also reflect a key precept related to government performance and results—that stakeholders can have a significant effect in determining whether a federal agency’s program or action will succeed or fail, and as such, stakeholders need to be involved in major planning efforts conducted by the agency. Such involvement is important to help agencies ensure that their efforts and resources are targeted at the highest priorities and is particularly important in instances where federal agencies face a complex political environment, such as emergency management, in which FEMA’s successes depend on the actions of non-federal partners at the state and local levels.

FEMA Has Not Yet Developed Guidance and Procedures for Managing Future Revisions or Integrating the National Advisory Council into the Revision Process

While FEMA officials and the National Response Framework acknowledge that the NRF will need to be revised in the future, FEMA has not developed guidance or policies on how it will manage future revisions or described how the NAC will be incorporated into the next NRF revision process. FEMA officials said that the agency has not yet developed guidance and procedures for any future NRF revisions because of the need to focus federal resources on creating training materials to assist all stakeholders in implementing the current NRF in anticipation of the pending 2008 hurricane season. As mentioned earlier in this report, the 2004 Plan included a section specifying the circumstances, such as lessons learned from exercises and actual events, and time frames under which it would need to be reviewed and revised.
This section is in accordance with the federal internal control standard of monitoring operations to assess the quality of performance over time and ensure that the findings of reviews and evaluations are resolved. While the 2008 NRF states that it merits periodic review and revision, it does not contain such language regarding the circumstances and time frames for its review and revision. In addition, FEMA officials said that the process established for the last revision (the 2006-approved work plan) would not be applicable for any future revisions because it did not consider the role of the NAC. The NAC has also not yet determined how it would like to be involved in the next NRF revision process. The NAC’s charter, approved in February 2007, does not provide specific procedures on how it is to be involved and, according to the chairman, the NAC’s NRF subcommittee expects to focus its efforts on helping FEMA train non-federal stakeholders. Having such guidance and procedures in place is an important internal control, and we have identified this need for other agencies in circumstances similar to FEMA’s management of future NRF revisions. As we discussed earlier in this report, control activities—such as guidance, policies, and procedures—are an integral part of an agency’s planning for and achieving effective results. In addition, while internal controls should be flexible to meet an agency’s needs, they should also be clearly documented, readily available, and properly maintained. We have also previously reported on the need to include state and local jurisdictions in the development of national response plans because they are key stakeholders and would be on the front lines if an incident occurs.
In April 2008, we reported on the need for the Department of Defense’s Northern Command to collaborate and communicate with non-federal stakeholders and establish a process to guide such collaboration in accessing information on state emergency response plans and capabilities, noting that the absence of effective collaboration could impede intergovernmental planning for catastrophic incidents and overall coordination. Specifically, we reported that federal officials involved the states only minimally in the development of the Department of Defense’s major homeland defense and civil support plans and that defense officials were generally not familiar with state emergency response plans and capabilities and had not established a process for gaining access to this information. We also reported that each agency’s roles and responsibilities for planning for homeland defense and civil support during a catastrophic disaster were not clearly defined. We recommended, among other things, that the Department of Defense develop a thorough process to guide its coordination with the states. The department generally agreed with the recommendation and stated that it was coordinating with DHS to develop synchronized plans of integrated federal, state, and local operational capabilities to effect a coordinated national response. It is essential for both the Department of Defense and DHS to have such guidance in place, as both DHS’s National Response Framework and the Northern Command’s Concept of Operations emphasize coordination with non-federal stakeholders in order to prevent, prepare for, respond to, and recover from catastrophic natural and manmade disasters. In August 2007, we reported on the administration’s approach to preparing for a pandemic influenza by issuing, among other things, a National Strategy for Pandemic Influenza (Strategy) in November 2005, and a National Strategy for Pandemic Influenza Implementation Plan (Plan) in May 2006.
We reported, among other things, that state and local jurisdictions were not directly involved in developing the Strategy and Plan. Neither the Strategy nor the Plan described the involvement of key stakeholders, such as state, local, and tribal entities, in their development, even though these stakeholders would be on the front lines in a pandemic and the Plan identifies actions they should complete. Officials told us that while the drafters of the Plan were generally aware of their concerns, state, local, and tribal entities were not directly involved in reviewing and commenting on the Plan. We concluded that opportunities existed to improve the usefulness of the Plan because it was viewed as an evolving document and was intended to be updated on a regular basis to reflect ongoing policy decisions as well as improvements in domestic preparedness. However, time frames or mechanisms for updating the Plan were undefined. We recommended that the White House Homeland Security Council establish a specific process and time frame for updating the Plan and that the update process involve key non-federal stakeholders and incorporate lessons learned from exercises and other sources, but the Homeland Security Council did not provide comments on this recommendation. Without similar policies and procedures documenting the circumstances and time frames under which it would review and revise the NRF and its process for collaborating with non-federal stakeholders, FEMA cannot ensure that future revision processes will be conducted in accordance with management’s directives.

Conclusions

All disasters occur locally, and the initial post-disaster response is local. However, large-scale disasters usually exceed local response capabilities. Effective preparation and response for major and catastrophic disasters require well-planned and well-coordinated actions among all those who would have a role in the response to such disasters.
The 2008 NRF is a guide for the myriad of entities and personnel involved in response efforts at all levels. The NRF recognizes the need for collaboration among these stakeholders to collectively respond to and recover from all disasters, particularly catastrophic disasters such as Hurricane Katrina, regardless of their cause. To help ensure that the NRF meets the needs of all stakeholders who have a role in its effective implementation, it is essential that DHS fully collaborate with non-federal stakeholders in its development and revision. DHS initially involved non-federal stakeholders in the revision of the 2004 Plan but omitted a key step in its work plan by not obtaining and incorporating their comments on the first full draft. Instead, DHS undertook a closed, internal federal review of the draft that lasted about 5 months with little communication with the non-federal partners. The result was a breach of trust with DHS’s non-federal partners in the drafting process. The Post-Katrina Act gives responsibility for maintaining and updating the NRF to FEMA and charges the Administrator’s National Advisory Council with incorporating non-federal stakeholder input into the NRF’s development and revision. Established too late to fulfill this role in the creation of the current NRF, the NAC is now functioning, and it is important that there be compatible policies and procedures for how the NAC will fulfill its statutory charge. Contrary to effective government internal control and management principles, FEMA has not yet developed policies and procedures for guiding future revisions of the NRF, including specifying the conditions and time frames under which FEMA would review and revise the NRF and how FEMA will involve the NAC and collaborate with other non-federal stakeholders. 
Especially in view of a new administration, non-federal stakeholder participation and ownership is essential in any revision of the NRF, and the lessons learned from the process for revising the 2004 Plan will apply in the future to FEMA’s and DHS’s efforts to develop and revise other national plans and policies that make up the national preparedness system. While the NRF is published by DHS, it belongs to the nation’s emergency response community, which is collectively responsible for effectively implementing the NRF’s provisions should another catastrophic disaster like Hurricane Katrina occur.

Recommendation for Executive Action

We recommend that the FEMA Administrator develop and disseminate policies and procedures that describe (1) the circumstances and time frames under which the next NRF revision will occur and (2) how FEMA will conduct the next NRF revision, including how its National Advisory Council and other non-federal stakeholders—state, local, and tribal governments; the private sector; and nongovernmental organizations—will be integrated into the revision process and the methods for communicating with these stakeholders.

Agency Comments

We requested comments on a draft of this report from DHS and FEMA. They concurred with our recommendation and had no other comments. We are sending copies of this report to the Secretary of Homeland Security, the FEMA Administrator, and interested congressional committees. We will also provide copies to others on request. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix IV for a list of key contributors to this report.
Appendix I: Scope and Methodology

This report addresses the following questions: (1) To what extent did the Department of Homeland Security (DHS) collaborate with non-federal stakeholders in revising and updating the December 2004 National Response Plan into the January 2008 National Response Framework (NRF)? (2) To what extent has FEMA developed policies and procedures for managing future revisions of the NRF? To address these questions, we interviewed DHS, FEMA, and non-federal stakeholders who were directly involved in the revision and update of the 2004 Plan into the 2008 NRF, and we reviewed DHS and FEMA documents on the revision process. Because there were over 700 federal and non-federal officials who participated in the Steering Committee and Work Groups, we interviewed those who held key positions. The FEMA officials and non-federal stakeholders we interviewed held key positions in the revision process, such as the FEMA Administrator and Deputy Administrator and the two FEMA co-chairs of the Steering Committee. The non-federal stakeholders we interviewed included four of the five non-federal officials who served as Steering Committee members and all three of the non-federal officials who served as co-leaders of Work Groups; these non-federal stakeholders also held positions in state, county, and city governments and nongovernmental organizations. To determine the extent to which DHS collaborated with non-federal stakeholders, we first determined the revision process that DHS had planned to follow to revise the 2004 Plan. We reviewed DHS’s September 2006 revision work plan that had been approved by the Domestic Readiness Group of the White House’s Homeland Security Council and interviewed FEMA and non-federal officials who served in key positions in the revision process. We also reviewed applicable statutes, primarily the October 2006 Post-Katrina Emergency Management Reform Act, for statutory requirements related to the revision process.
To determine what happened during the revision process and the extent to which DHS involved non-federal stakeholders in that process, we interviewed FEMA officials and non-federal stakeholders who served in key positions in the revision process and the chairman of FEMA’s National Advisory Council (NAC). Further, we reviewed DHS documentation citing the roles provided to non-federal stakeholders in the revision process and explaining how the actual revision process was conducted, FEMA documentation on the process and time frames related to the NAC’s establishment, NAC documentation regarding its role in the revision process, and congressional testimony from non-federal stakeholders on how DHS conducted and included them in the revision process. To determine the extent to which FEMA had policies and procedures in place for future revisions of the NRF, we interviewed FEMA officials. The non-federal officials we interviewed represented state and local levels of government, emergency management associations, and other non-federal entities. While the statements and views of the stakeholders we interviewed are not generalizable to the some 230 non-federal stakeholders involved in the revision process, we chose to speak to them because of their assigned key roles. There is some uncertainty in our determination of the total number of non-federal members in the 12 Work Groups, and thus the total number of Work Group members, due to duplication or the lack of adequate information identifying a member as federal in the data provided by FEMA. However, because DHS’s inclusion of non-federal members in the revision process is the focus of this report, we took steps to correctly determine the number and composition of the 224 non-federal members. Based on our analysis of FEMA’s data for federal members, we believe the total of 486 federal members is a reasonable approximation, and therefore, the grand total of 710 Work Group members is also a reasonable approximation.
Appendix II: The 17 Key Revision Issues That DHS Identified for the 2004 National Response Plan

In the 2004 National Response Plan revision work plan approved by the Domestic Readiness Group, a White House Homeland Security Council–chaired policy committee, in September 2006, DHS identified 14 key issues that it wanted the revision process to address. According to FEMA officials, these issues were compiled by reviewing Hurricane Katrina after-action and lessons-learned reports from the White House, Congress, GAO, and DHS’s Inspector General and identifying common issues that were raised in multiple reports. The work plan directed DHS to conduct meetings with stakeholders to review the initial list and identify other issues to be considered during the revision process. These issues were to serve as the starting point from which the 2004 Plan revision would be conducted. DHS held meetings with non-federal and federal stakeholders in October and November 2006, respectively. After these meetings, DHS added three revision issues to the initial list contained in the approved work plan. The 17 key revision issues are listed below—the 3 issues added after the stakeholder meetings are indicated with a note. The revision issues are categorized by whether they were to be addressed in the 2004 Plan base document or its annexes. The 2004 Plan comprised four major components: the Base Plan, Emergency Support Function Annexes, Support Annexes, and Incident Annexes. The Base Plan provided an overview of the structure and processes comprising a national approach to domestic response actions. The 15 Emergency Support Function Annexes detailed the missions, policies, structures, and responsibilities of federal agencies for coordinating resource and programmatic support, such as mass care and shelter, to states, tribes, and other federal agencies or other jurisdictions and entities.
The nine Support Annexes provided guidance and described the functional processes and administrative requirements necessary to ensure the 2004 Plan’s efficient and effective implementation. The seven Incident Annexes addressed contingency or hazard situations requiring specialized application of the 2004 Plan, such as biological, catastrophic, and nuclear/radiological incidents.

The key revision issues identified for the 2004 National Response Plan base document were to

clarify the roles and responsibilities of key structures and positions;
strengthen the role of states and the private sector;
integrate National Incident Management System concepts, principles, terminology, systems, and organizational processes into the revised National Response Plan;
review the Joint Field Office structure and operations, to include Unified Command; and
incorporate proactive planning for incidents that render state and local governments incapable of an effective response.

The key revision issues identified for the annexes to the 2004 National Response Plan base document were to

examine all existing National Response Plan annexes and propose new annexes;
strengthen the External Affairs and Public Affairs Annexes;
review logistics management issues;
examine evacuation and sheltering issues;
ensure the integration of all search and rescue assets;
review the scope of public safety and security missions;
incorporate companion animal issues;
improve the process for identifying and accepting donated goods and the integration of volunteers;
clarify international support mechanisms;
ensure consistency with the National Emergency Communication Strategy;
refine the Catastrophic Incident Supplement, to include the review of a possible increased Department of Defense responsibility; and
review federal incident management plans and determine their appropriate linkage to the National Response Plan.
Appendix III: The 12 Work Groups DHS Established during the Revision Process and Their Composition

The 2004 National Response Plan revision work plan approved by the Domestic Readiness Group in September 2006 directed DHS to establish Work Groups to rewrite portions of the 2004 Plan. While the work plan did not specify the number of Work Groups that should be established, DHS formed 12 Work Groups that were co-led by federal officials or by a combination of federal and non-federal officials. The 12 Work Groups were Catastrophic Planning; Communications; Companion Animals; Evacuations and Sheltering (co-led by a non-federal stakeholder); Functions; Incident Management and Coordination; International Support; National Incident Management System; Roles and Responsibilities (co-led by a non-federal stakeholder); Special Needs; Training and Implementation; and Volunteer and Donation Management (co-led by a non-federal stakeholder). Of the 710 members who served on the 12 Work Groups, 224 officials, or 32 percent, were non-federal. These non-federal stakeholders included representatives from state, tribal, and local governments as well as the private sector and nongovernmental organizations. Further, the non-federal stakeholders came from various occupational sectors. See table 1 for a description of these 224 non-federal stakeholders.

Appendix IV: GAO Contact and Staff Acknowledgments

The following teams and individuals made key contributions to this report: Pille Anvelt, Patrick Bernard, Sam Hinojosa, Christopher Keisling, David Lysy, Sally Williamson, and Candice Wright, Homeland Security & Justice Team; Michele Fejfar, Applied Research & Methods; and Christine Davis, Jan Montgomery, and Janet Temko, General Counsel.
Hurricane Katrina illustrated that effective preparation for and response to a catastrophe require a joint effort among federal, state, and local governments. The Department of Homeland Security (DHS), through the Federal Emergency Management Agency (FEMA), is responsible for leading that joint effort. In January 2008, DHS released the National Response Framework (NRF), a revision of the 2004 National Response Plan (2004 Plan), the national all-hazards response plan. In response to the explanatory statement to the Consolidated Appropriations Act of 2008 and as discussed with congressional committees, this report evaluates the extent to which (1) DHS collaborated with non-federal stakeholders in revising and updating the 2004 Plan into the 2008 NRF and (2) FEMA has developed policies and procedures for managing future NRF revisions. To accomplish these objectives, GAO reviewed DHS and FEMA documents related to the revision process, analyzed the relevant statutes, and interviewed federal and non-federal officials who held key positions in the revision process. While DHS included non-federal stakeholders--state, local, and tribal governments, nongovernmental organizations, and the private sector--in the initial and final stages of revising the 2004 Plan into the NRF, it did not collaborate with these stakeholders as fully as it originally planned or as required by the October 2006 Post-Katrina Emergency Management Reform Act (Post-Katrina Act). As the revision process began in 2006, DHS involved both federal and non-federal stakeholders by soliciting and incorporating their input in determining the key revision issues and in developing the first draft, completed in April 2007. After this first draft was completed, however, DHS deviated from its revision work plan by conducting a closed, internal federal review of the draft rather than releasing it for stakeholder comment, because DHS determined that the draft required further modifications.
DHS limited communication with non-federal stakeholders until it released a draft for public comment 5 months later, on September 10, 2007. The following day, non-federal stakeholders testified at a congressional hearing that DHS had shut them out during that 5-month period. In addition, the Post-Katrina Act required that DHS establish a National Advisory Council (NAC) for the FEMA Administrator by December 2006 to, among other things, incorporate non-federal stakeholders' input in the revision process. However, FEMA stated that selecting quality NAC members required additional time, and FEMA did not announce the NAC's membership until June 2007. The NAC did not provide comments on a revision draft until one month before DHS publicly released the final NRF in January 2008. FEMA anticipates that the NRF will be revised in the future; however, FEMA does not have policies or procedures in place to guide this process or to ensure a collaborative partnership with stakeholders. FEMA has emphasized the importance of partnering with relevant stakeholders to effectively prepare for and respond to major and catastrophic disasters, and the Congress, through the Post-Katrina Act, requires such partnership. In addition, the Standards for Internal Control in the Federal Government calls for policies and procedures that establish regular communication with stakeholders and monitor performance over time as essential for achieving desired program goals. Furthermore, previous GAO work on the Department of Defense's civil support plans and the administration's national pandemic influenza implementation plan has shown the need for participation of state and local jurisdictions in emergency planning. Especially in view of a new administration, the experience of the previous revision process illustrates the importance of collaborating with stakeholders in revising a plan that relies on them for its successful implementation.
While the NRF is published by DHS, it belongs to the nation's emergency response community. Developing such policies and procedures is essential for ensuring that FEMA attains the Post-Katrina Act's goal of partnering with non-federal stakeholders in building the nation's emergency management system, including the periodic review and revision of the NRF.
Background

“Information reseller” is an umbrella term used to describe a wide variety of businesses that collect and aggregate personal information from multiple sources and make it available to their customers. The industry has grown considerably over the past two decades, in large part due to advances in computer technology and electronic storage. Courthouses and other government offices previously stored personal information in paper-based public records that were relatively difficult to obtain, usually requiring a personal visit to inspect the records. Nonpublic information, such as personal information contained in product registrations or insurance applications, was also generally inaccessible. In recent years, however, the electronic storage of public and private records, along with increased computer processing speeds and decreased data storage costs, has fostered information reseller businesses that collect, organize, and sell vast amounts of personal information on virtually all American consumers. The information reseller industry is large and complex, and these businesses vary in many ways. What constitutes an information reseller is not always clearly defined, and little data exist on the total number of firms that offer information products. FTC and other federal agencies do not keep comprehensive lists of companies that resell personal information, and experts say that characterizing the precise size and nature of the information reseller industry can be difficult because it is evolving and lacks a clear definition. Although no comprehensive data exist, industry representatives say there are at least hundreds of information resellers in total, including some companies that provide services over the Internet.
We include in our definition of information resellers the three nationwide credit bureaus—Equifax, Experian, and TransUnion—which primarily collect and sell information about the creditworthiness of individuals, as well as other resellers such as ChoicePoint, Acxiom, and LexisNexis, which sell information for a variety of purposes, including marketing. Other companies that sell information products include eFunds, which provides depository institutions with information on deposit account histories; Thomson West and Regulatory DataCorp, which help companies mitigate fraud and other risks; and ISO, which provides insurers with insurance claims histories and fraud prevention products. Information resellers sell their products to a broad spectrum of customers, including private companies, individuals, law enforcement bureaus, and other government agencies. Although major information resellers generally offer their products only to customers who have successfully completed a credentialing process, some resellers offer certain products, such as compilations of telephone directory information, to the public at large. All of these businesses differ in nature, and they do not all focus exclusively on aggregating and reselling personal information. For example, Acxiom primarily provides customized computer services, and its information products represent a relatively small portion of the company’s overall activities. Information resellers obtain their information from many different sources (see fig. 1). Generally, three types of information are collected: public records, publicly available information, and nonpublic information. Public records are a primary source of information about consumers; they are available to anyone and can be obtained from governmental entities.
What constitutes public records is dependent upon state and federal laws, but generally these include birth and death records, property records, tax lien records, voter registrations, licensing records, and court records (including criminal records, bankruptcy filings, civil case files, and legal judgments). Publicly available information is information not found in public records but nevertheless publicly available through other sources. These sources include telephone directories, business directories, print publications such as classified ads or magazines, Internet sites, and other sources accessible by the general public. Nonpublic information is derived from proprietary or nonpublic sources, such as credit header data, product warranty registrations, lists of magazine or catalog subscribers, and other application information provided to private businesses directly by consumers. Information resellers hold or have access to databases containing a large variety of information about individuals. Although each reseller varies in the specific personal information it maintains, it can include names, aliases, Social Security numbers, addresses, telephone numbers, motor vehicle records, family members, neighbors, insurance claims, deposit account histories, criminal records, employment histories, credit histories, bankruptcy records, professional licenses, household incomes, home values, automobile values, occupations, ethnicities, and hobbies. The various products offered by different types of information resellers are used for a wide range of purposes, including credit and background checks, fraud prevention, and marketing. Resellers often sell their data to each other—for example, the credit bureaus sell credit header data to other resellers for use in identity verification and fraud prevention products. Resellers might also purchase publicly available information from one another, rather than gathering the information themselves. 
The databases maintained and products offered by information resellers vary. Credit bureaus maintain an individual file on most Americans containing financial information related to that person’s creditworthiness. Most other resellers do not typically maintain complete files on individuals, but rather collect and maintain information in a variety of databases, and then provide their customers with a single consolidated source for a broad array of personal information.

Financial Institutions Use Information Resellers for Eligibility Determinations, Fraud Prevention, PATRIOT Act Compliance, and Marketing

Financial institutions in the banking, credit card, securities, and insurance industries use personal data purchased from information resellers primarily to help make eligibility determinations, comply with legal requirements, prevent fraud, and market their products. Credit reports from the three nationwide credit bureaus help lenders determine eligibility for and the cost of credit, and reports on insurance claims histories from specialty CRAs help insurance companies make premium decisions for new applicants and existing customers. To meet certain legal requirements and detect and prevent fraud, financial institutions we studied also use reseller products to locate individuals or confirm their identity. In addition, certain reseller products containing demographic data and information on individuals’ lifestyle interests and hobbies are used to help market financial products to existing or potential customers with certain characteristics.

Consumer Reports Sold by Credit Bureaus and Other CRAs Are Used to Make Credit and Insurance Eligibility Decisions

Banks, credit card companies, and other lenders rely on credit reports sold by the three nationwide credit bureaus—Equifax, Experian, and TransUnion—when deciding whether to offer credit to an individual, at what rate, and on what terms.
Banks use credit reports to help assess the credit risk of new customers before opening a new deposit account or providing a mortgage or other loan. Credit card companies use credit reports to determine whether to grant a credit card to an applicant, to set the terms of that card, and to adjust the account terms of current cardholders whose creditworthiness may have changed. In addition to lenders, insurance companies often use scores generated from credit report information to help determine premiums for the policies they underwrite. Credit bureaus receive the information in credit reports from the financial institutions themselves, among other sources. Credit reports consist of a “credit header”—identifying information such as name, current and previous addresses, Social Security number, and telephone number—and a credit history, or other payment history, designed to provide information on the individual’s creditworthiness. The credit history might contain information on an individual’s current and past credit accounts, including amounts borrowed and owed, credit limits, relevant dates, and payment histories, including any record of late payments. Credit reports also may include public record information on tax liens, bankruptcies, and other court judgments related to the payment of debts. Credit bureaus also sell credit scores—numerical representations of predicted creditworthiness based on information in credit reports—which are often used instead of full credit reports. For example, all three credit bureaus sell FICO® credit scores, which use factors such as payment history, amount owed, and length of credit history to help financial institutions predict the likelihood that a person will repay a loan. Some financial institutions also use specialty CRAs, which maintain specific types of files on consumers, to help make eligibility decisions.
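As a purely illustrative sketch of how a score of this kind combines weighted factors, consider the following; the weights, factor names, and scaling below are invented for illustration and bear no relation to the proprietary FICO model or any actual bureau product.

```python
# Hypothetical weighted-scoring sketch; weights and scaling are invented,
# not the actual FICO formula. Each factor is normalized to 0.0-1.0,
# where higher values indicate lower risk.
FACTOR_WEIGHTS = {
    "payment_history": 0.35,  # share of on-time payments
    "amounts_owed": 0.30,     # 1.0 means low utilization of credit limits
    "history_length": 0.20,   # normalized length of credit history
    "other_factors": 0.15,    # catch-all for remaining inputs
}

def illustrative_score(factors: dict, lo: int = 300, hi: int = 850) -> int:
    """Map weighted factor values onto a familiar 300-850 score range."""
    weighted = sum(FACTOR_WEIGHTS[k] * factors[k] for k in FACTOR_WEIGHTS)
    return round(lo + weighted * (hi - lo))

score = illustrative_score({"payment_history": 1.0, "amounts_owed": 0.9,
                            "history_length": 0.8, "other_factors": 0.7})
print(score)
```

The point of the sketch is only that a score compresses many report attributes into a single number on a fixed scale, which is why lenders can use it in place of a full credit report.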
Insurance companies commonly use products from ChoicePoint and ISO, which compile data from insurance companies on the claims that individuals have made against their homeowner’s or automobile insurance policies. Most insurance companies provide these CRAs with claim and loss information about their customers, including names, driver’s license information, type of loss, date of loss, and the amount the insurance company paid to settle the claim. The CRAs aggregate this information from multiple insurance companies to create either full reports or risk scores designed to help assess the likelihood that an individual will file a claim. Insurance companies purchase the reports, or in some cases scores, associated with individuals applying for insurance and the property being insured to help decide whether to provide coverage and at what rate. Insurance companies also use this information to help determine whether to extend coverage and set premiums for existing policy holders. (See app. II for a sample insurance claims history report.) Insurance industry representatives told us aggregated claims data provided by specialty CRAs are extremely useful in making coverage and rate determinations. They noted, for example, that past losses are the best indicator of future driving risk and thus are useful to firms that underwrite auto insurance. Banks and credit unions frequently assess applicants for new checking and other deposit accounts using products offered by resellers such as ChexSystems, a specialty CRA that is a subsidiary of eFunds. ChexSystems compiles information from banks and credit unions on accounts that have been closed due to account misconduct such as overdrafts, insufficient funds activity, returned checks, bank fraud, and check forgery. The company also aggregates available driver’s license information from state departments of motor vehicles and receives information from check-printing companies on check order histories, which can help identify fraud.
Banks we spoke with said that the name and identifying information of a customer seeking to open a new deposit account is typically run through the ChexSystems database. The reports provided back to the financial institution by ChexSystems typically include identifying information, as well as information useful in assessing an applicant’s risk, such as the applicant’s history of check orders and the source and details of any account misconduct. (See app. II for a sample deposit account history report.)

Financial Institutions Use Information Resellers to Comply with the PATRIOT Act, Prevent Fraud, Mitigate Risk, and Locate Individuals

Financial institutions use data purchased from information resellers to comply with legal requirements; detect, prevent, and investigate fraud; identify risks associated with prospective clients; and locate debtors or shareholders.

Complying with PATRIOT Act Requirements

Financial institutions we spoke with frequently use products provided by information resellers to comply with PATRIOT Act requirements. Congress intended these provisions to help prevent terrorists and other criminals from using the U.S. financial system to fund terrorism and launder money. The act requires financial institutions to develop procedures to verify the identity of new customers. Many resellers offer products that verify and validate a new customer’s identity by comparing information the customer provided to the financial institution with information aggregated from public and private sources. Some financial institutions, particularly those that offer services by telephone, mail, or the Internet, often confirm customers’ identities using these reseller products. Other companies may verify their customers’ identity from a driver’s license, passport, or other paper document, but use information resellers for additional verification.
Financial institutions must also screen their customers to ensure they are not on the Department of the Treasury’s Office of Foreign Assets Control (OFAC) Specially Designated Nationals and Blocked Persons List. The list includes individuals and entities that financial institutions are generally prohibited from conducting transactions with because they have been identified as potential terrorists, money launderers, international narcotics traffickers, or other criminals. Many information resellers offer products to financial institutions that screen new customers against the OFAC list; often this screening is packaged with identity verification in a single product. (See app. II for a sample identity verification and OFAC screening report.) The OFAC list is a publicly available government document, but financial institutions told us they use resellers for their screening because it allows them to do so more quickly and helps distinguish between common names on the list that might result in false matches. Some financial institutions use resellers to screen new customers against the OFAC list, while others periodically screen all of their existing customers. Some companies told us they do most of their OFAC screening internally, but sometimes use a reseller to gather additional information confirming whether a potential match is indeed an individual that is on the OFAC list. To verify a customer’s identity or conduct an OFAC screening, a financial institution typically uses a Web-based portal to provide an information reseller with basic information about the individual being screened—such as the person’s name, Social Security number, address, driver’s license number, phone number, and date of birth. The reseller then checks the information against its own records, and typically provides a “pass” response if the information matches, or a “fail” response if, for example, the date of birth does not match the name. 
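The pass/fail flow described above can be sketched in a few lines; the field names, matching rules, and sample records here are hypothetical illustrations, not any reseller’s actual product logic, and real products apply far more sophisticated fuzzy matching.

```python
# Simplified sketch of a reseller-style identity verification check and
# OFAC-list screen. All records and matching rules are invented examples.

def verify_identity(submitted: dict, reseller_record: dict) -> str:
    """Compare customer-supplied fields against a reseller's aggregated
    record and return a simple pass/fail response."""
    checked_fields = ("name", "ssn", "date_of_birth", "address")
    for field in checked_fields:
        if submitted.get(field) != reseller_record.get(field):
            # e.g., the date of birth does not match the name on file
            return "fail"
    return "pass"

def screen_ofac(name: str, ofac_names: set) -> bool:
    """Flag a potential match against a (hypothetical) OFAC name list."""
    return name.lower() in ofac_names

record = {"name": "Jane Doe", "ssn": "123-45-6789",
          "date_of_birth": "1970-01-01", "address": "1 Main St"}
print(verify_identity(record, record))                                     # pass
print(verify_identity({**record, "date_of_birth": "1980-02-02"}, record))  # fail
print(screen_ofac("Jane Doe", {"john smith"}))                             # False
```

In practice, as the report notes, the value resellers add over this naive exact match is precisely in disambiguating common names and enriching a potential hit with additional identifying data.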
Resellers’ screening products generally draw on credit header data purchased from the credit bureaus, along with publicly available data such as address and telephone records and driver’s license records from state agencies. Customer verification databases also include information that may indicate suspicious activity, such as prison or campground addresses, disconnected telephone numbers, and Social Security numbers of deceased individuals.

Preventing and Detecting Fraud

The financial institutions we reviewed use information reseller tools to assist their fraud prevention and detection efforts. For example, banks and credit card companies sometimes use information reseller products to authenticate the identity of existing customers who call to update or receive account information or to order a replacement credit card. Authentication products usually draw on information similar to that used for verification products, most commonly credit header data and public records. Some resellers offer products that also allow the financial institution to access the customer’s credit history with the customer’s permission, which provides additional personal information that can be used to verify identity. For example, a customer might be asked the year an automobile loan was originated or the credit limit on a credit card. Fraud departments of financial institutions in our review also use more detailed products from information resellers to investigate suspected identity theft or account fraud, such as the use of a stolen credit card number. (See app. II for a sample fraud investigation report.) In these cases, a company’s fraud department often purchases from information resellers detailed background information on a suspect’s current and prior residences, vehicles, relatives, aliases, criminal records (in certain states), and other information that can be useful in directing an investigation.
Examples of the uses of fraud products offered by resellers include

obtaining detailed personal information about people associated with potential fraud, or about their relatives and associates;
detecting links between individuals who may be co-conspirators in fraud;
identifying multiple insurance claims made by the same person;
identifying individuals who are associated with multiple addresses, telephone numbers, or vehicles in ways that indicate potential fraud;
obtaining contact information for key individuals, such as witnesses to car accidents identified in police reports; or
identifying instances where insurance policy applicants have failed to disclose certain required information.

Reducing Risk and Locating Individuals

Financial institutions also sometimes use reseller products to help identify potential reputational risk or other risks associated with new customers or business partners. For example, securities firms told us they screen individuals such as prospective wealth management clients or merger partners to check for a criminal record, disciplinary action by securities regulators, negative news media coverage, and known affiliation with terrorism, drug trafficking, or organized crime. Financial institutions we spoke with also often use information resellers to locate individuals. For example, lenders use reseller products to find customers who have defaulted on debts, and some mutual fund companies use these products to locate lost shareholders. The information provided by products used for this purpose is derived largely from credit header data, telephone records, and public records data, and may include an individual’s aliases, addresses, telephone numbers, Social Security number, and motor vehicle records, as well as the names of neighbors and associates. For example, one financial institution told us its debt collectors use a ChoicePoint product called DEBTOR Discovery to obtain such information to help locate delinquent debtors.
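The kind of link detection described in the list above—flagging identifiers shared across otherwise unrelated individuals—amounts to building an inverted index from identifier to people. The sketch below illustrates the idea with invented sample records; it is not any reseller’s actual algorithm, which would involve far larger data sets and fuzzier matching.

```python
# Hypothetical sketch: flag addresses or phone numbers shared by more than
# one person, a simplified form of fraud link analysis. Sample data invented.
from collections import defaultdict

records = [
    {"person": "A. Smith", "address": "1 Main St", "phone": "555-0100"},
    {"person": "B. Jones", "address": "1 Main St", "phone": "555-0101"},
    {"person": "C. Brown", "address": "9 Oak Ave", "phone": "555-0100"},
]

def shared_identifiers(records, keys=("address", "phone")):
    """Return identifier values linked to more than one person."""
    index = defaultdict(set)  # (field, value) -> set of person names
    for rec in records:
        for key in keys:
            index[(key, rec[key])].add(rec["person"])
    return {ident: people for ident, people in index.items() if len(people) > 1}

links = shared_identifiers(records)
for (key, value), people in sorted(links.items()):
    print(f"{key} {value!r} shared by: {sorted(people)}")
```

Here the shared address links A. Smith and B. Jones, and the shared phone number links A. Smith and C. Brown; an investigator would treat such links as leads, not proof.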
Some Financial Institutions Use Information Resellers for Marketing

Some information resellers offer certain products that help financial institutions market their financial products and services to new or existing customers with specific characteristics. Databases held by resellers offering marketing products include a variety of information on individuals and households, such as household size, number and ages of children, estimated household income, homeownership status, demographic data, and lifestyle interests and activities. These databases derive their information from public records as well as nonpublic sources such as self-reported marketing surveys, product warranty cards, and lists of magazine subscribers, which may be used to provide financial institutions and other companies with lists of consumers meeting certain criteria. For example, a bank marketing a college savings account might request the names and addresses of all households in certain ZIP codes that have children under the age of 18 and household incomes of $100,000 or more. Financial institutions we studied also use certain reseller products to gather additional information on their existing customers to market additional products and services. For example, we spoke with an insurance company that used an information reseller to learn which of its existing customers owned boats, so those customers could be targeted for boat insurance. Similarly, one bank we spoke with used an information reseller to help market a sailing credit card to current customers who lived near bodies of water. Many companies that solicit new credit card accounts and insurance policies use the nationwide credit bureaus for “prescreening” to identify potential customers for the products they offer. A lender or insurance company establishes criteria, such as a minimum credit score, and then purchases from a credit bureau a list of people in the bureau’s database who meet those criteria.
In some cases, the financial institution already has a list of potential customers that it provides to the credit bureau to identify individuals on the list who meet the criteria. Financial institutions sometimes also use a second information reseller to help them obtain from a credit bureau a list that includes only consumers meeting specific demographic or lifestyle criteria. For example, in marketing a home equity line of credit, a lender may use a second information reseller to work with a credit bureau to identify creditworthy individuals who are also homeowners and live in certain geographic areas, to whom the lender will then make a firm offer of credit. Financial institutions sometimes use data from information resellers for models—developed by either the institution or the reseller—that seek to predict which consumers are likely to be interested in a new product and unlikely to present a credit risk. For example, a firm we spoke with that was marketing credit cards to college students used reseller data to determine the characteristics of college students that indicate they will be successful credit card borrowers.

Federal Privacy and Information Security Laws Apply to Many Information Reseller Products, Depending on Their Use and Source

The Fair Credit Reporting Act (FCRA) and the Gramm-Leach-Bliley Act (GLBA) are the primary federal laws governing the privacy and security of personal data collected and shared by information resellers. FCRA limits resellers’ use and distribution of personal data and allows consumers to access the data held on them, but it applies only to information collected or used primarily to make eligibility determinations. Unless FCRA applies to a product and its database, resellers typically provide only limited opportunities for consumers to access, correct, or restrict sharing of the personal data held on them.
GLBA’s privacy provisions restrict the sharing of nonpublic personal information collected by or acquired from financial institutions, including resellers covered by GLBA’s definition of financial institution (GLBA financial institutions). Further, GLBA’s safeguarding provision requires resellers that are GLBA financial institutions to safeguard this information.

Several Federal Privacy and Security Laws Apply to Personal Data Held by Information Resellers

No single federal law governs the use or disclosure of all personal information by private sector companies. Similarly, there are no federal laws designed specifically to address all of the products sold and data maintained by information resellers. Instead, a variety of different laws govern the use, sharing, and protection of personal information that is maintained for specific purposes or by specific types of entities. The two primary federal laws that protect personal information maintained by private sector companies are FCRA and GLBA. FCRA protects the security and confidentiality of personal information that is collected or used to help make decisions about individuals’ eligibility for, among other things, credit, insurance, or employment, while GLBA is designed to protect personal financial information that individuals provide to or that is maintained by financial institutions. In addition to FCRA and GLBA, other federal laws that directly or indirectly address privacy and data security may also cover some information reseller products. The Driver’s Privacy Protection Act of 1994 regulates the use and disclosure by state motor vehicle departments of personal information from motor vehicle records. Personal motor vehicle records may be purchased and sold only for certain purposes—such as insurance claims investigations and other anti-fraud activities—unless a state motor vehicle agency has received express consent from the individual indicating otherwise.
In addition, the Federal Trade Commission Act (FTC Act), enacted in 1914 and amended on numerous occasions, gives FTC the authority to prohibit and act against unfair or deceptive acts or practices. The failure by a commercial entity, such as an information reseller, to reasonably protect personal information could be a violation of the FTC Act if the company’s actions constitute an unfair or deceptive act or practice. Finally, some federal banking regulators have authority to oversee their institutions’ third-party service providers to ensure the safety and soundness of financial institutions. For example, if a vendor such as an information reseller did not employ reasonable safeguards to maintain a bank’s records, federal banking regulators could examine the vendor to identify and remedy the risks.

FCRA Applies Only to Consumer Information Used to Determine Eligibility

The Fair Credit Reporting Act (FCRA), enacted in 1970, protects the confidentiality and accuracy of personal information used to make certain types of decisions about consumers. Specifically, FCRA applies to companies that furnish, contribute to, or use “consumer reports”—reports containing information about an individual’s personal and credit characteristics used to help determine eligibility for such things as credit, insurance, employment, licenses, and certain other benefits. Businesses that evaluate consumer information or assemble such reports for third parties are known as consumer reporting agencies, or CRAs. Consumer reports covered by FCRA comprise a significant portion of consumer data transactions in the United States. For example, according to an industry association that represents CRAs, the three nationwide credit bureaus sell over 2.5 billion credit reports each year on average. FCRA places certain restrictions and obligations on CRAs that issue these reports.
For example, the law restricts the use of consumer reports to certain permissible purposes, such as approving credit, imposes certain disclosure requirements, and requires that CRAs take steps to ensure that information in these reports is not misused. It also provides consumers with certain rights in relation to their credit reports, such as the right to dispute the accuracy or completeness of items in the reports. Congress has amended FCRA a number of times, most recently with the Fair and Accurate Credit Transactions Act of 2003 (FACT Act), which sought to promote more-accurate credit reports and expand consumers’ access to their credit information. Information resellers are subject to FCRA’s requirements only with regard to information used to compile consumer reports—that is, reports used to help determine eligibility for certain purposes, including credit, insurance, or employment. Thus, FCRA applies to databases used to compile credit reports sold by the three nationwide credit bureaus, and its provisions apply both to the credit bureaus themselves and to other information resellers that purchase and resell credit reports for use by others. FCRA also applies to databases used to generate specialty consumer reports—which consist of such things as tenant history, check writing history, employment history, medical information, or insurance claims—that are used to help make eligibility determinations. For example, according to ChoicePoint, FCRA applies to the data used in most of its WorkPlace Solutions products, which employers use to make hiring decisions. Similarly, according to LexisNexis, FCRA applies to its Electronic Bankruptcy Notifier product data, which financial institutions use to determine whether to offer customers credit or other financial services. Overall, 8 of the 10 information resellers we spoke with said that at least some of their products are consumer reports as defined by FCRA.
They said their contracts prohibit their customers from using their non-FCRA products for purposes related to making eligibility determinations. According to the information resellers included in our review, FCRA does not cover many databases used to create other products they offer because, as defined by the law, the information was not collected for making eligibility determinations and the products are not intended to be used for making eligibility determinations. For example, some of the information resellers we spoke with did not treat data in some products used to identify and prevent fraud as subject to FCRA. Similarly, resellers do not typically consider databases used solely for marketing purposes to be covered by FCRA. Because the definition of a consumer report under FCRA depends on the purpose for which the information is collected and on the reports’ intended and actual use, an information reseller apparently may have two essentially identical databases with only one of them subject to FCRA. FCRA also restricts financial institutions and other companies that use consumer reports from using them for purposes other than those permitted in the law. Financial institutions must also notify consumers if they take an adverse action—such as denying an applicant a credit card— based on information in a consumer report. Under FCRA, companies that furnish information to CRAs also must take steps to ensure the accuracy of information they report. Further, users of consumer reports must properly dispose of consumer reports they maintain. The law also limits financial institutions and other entities from sharing certain credit information with their affiliates for marketing purposes. Final regulations to implement this statutory limitation have not yet been promulgated. 
FCRA Provides Access, Correction, and Opt-Out Rights for Consumer Reports

FCRA is the primary federal law that provides rights to consumers to view, correct, or opt out of the sharing of their personal information, including data held by information resellers. Under FCRA, as recently amended by the FACT Act, consumers have the right to

- obtain all of the information about themselves contained in the files of a CRA upon request, including their credit history;
- receive one free copy of their credit file from nationwide CRAs and nationwide specialty CRAs once a year or under certain other circumstances;
- dispute information that is incomplete or inaccurate, and have their claims investigated and any errors deleted or corrected, as provided by the law; and
- opt out of allowing CRAs to provide their personal information to third parties for prescreened marketing offers.

Most of FCRA’s access, correction, and opt-out rights apply not just to the three nationwide credit bureaus—Experian, TransUnion, and Equifax—but also to other CRAs, including nationwide specialty CRAs that provide reports on such things as insurance claims and tenant histories. The law imposes slightly different requirements on these entities with respect to free annual reports. For example, FCRA’s implementing regulation requires Experian, TransUnion, and Equifax to create a centralized source for accepting consumer requests for free credit reports, which must include a single dedicated Web site, a toll-free telephone number, and mail directed to a single postal address where consumers can order credit reports from all three nationwide CRAs. Nationwide specialty CRAs are individually required to maintain a toll-free number and a streamlined process for accepting and processing consumer requests for file disclosures.
Other CRAs must provide consumers with a copy of their report upon request (although in most cases they may charge a reasonable fee for it), and they must allow consumers to dispute information they believe to be inaccurate. In practice, consumers may find it difficult in some cases to effectively access and correct information held by nationwide specialty CRAs because there may be hundreds of such CRAs and no master list of them exists. For example, job seekers who want to confirm the accuracy of information about themselves in background-screening products would need to request their consumer reports from the dozens of companies that offer such products. Consumers generally do not have the legal right to access or correct information about them contained in non-FCRA databases, such as those used for marketing purposes or, in some cases, fraud detection. The information resellers we studied varied in the extent to which they voluntarily provide consumers with additional opportunities to view, correct, and opt out of the sharing of information beyond what the law requires. The three nationwide credit bureaus allowed consumers to view only information that is subject to FCRA. However, three other information resellers we spoke with allowed consumers to order summary reports of some data maintained about them that was not subject to FCRA. These reports varied in length and detail but typically contained consumer data obtained from public records, publicly available information, and credit header information. Consumers did not typically have the right to see data maintained about them related to marketing, such as information on their household income, interests, or hobbies, which was often obtained from warranty cards or self-reported survey questionnaires. Information resellers told us that consumers who request correction of inaccurate data not covered by FCRA are typically referred to the government or private entity that was the source of the data.
Many resellers told us that because their databases are so frequently updated, simply correcting their own records would not be effective; any correction would soon be overwritten by new erroneous data from the original source. However, one reseller told us it has procedures that prevent such corrections from being overwritten. Some resellers offered limited opportunities for consumers to opt out of their databases even for data not covered by FCRA, but they typically allow this only for data used for marketing purposes. The five resellers we spoke with that maintain personal data used for marketing allowed consumers to request that their information not be shared with third parties. None of the resellers we spoke with offered all consumers the ability to opt out of identity verification or fraud products. They noted that it would undermine the effectiveness of the databases if, for example, criminals could remove themselves from lists of fraudsters. Some resellers do allow opt-out opportunities to certain individuals, such as judges or identity-theft victims, who may face potential harm from having their information included in reseller databases. Industry representatives, consumer advocates, and others offer differing views on whether the access, correction, and opt-out rights provided under FCRA should be expanded. Many consumer advocates and others have argued that these rights should not be limited to consumer information used for eligibility purposes, but should explicitly extend as well to databases not currently considered by resellers to be subject to FCRA, such as those used for some anti-fraud products. Proponents of this view argue that basic privacy principles dictate that consumers should have the right to know what information is being collected and maintained about them. In addition, they argue that errors in these databases have the potential to harm consumers.
For example, an individual could be denied a volunteer opportunity or falsely pursued as a crime suspect due to erroneous information in a reseller database not covered under FCRA. In contrast, some information resellers, financial services firms, and law enforcement representatives have argued that providing individuals expanded access, correction, and opt-out rights is unnecessary and could harm fraud prevention and criminal investigations by providing individuals with the opportunity to see and manipulate the information that exists about them. They also note that expanding these rights could create new regulatory burdens. For example, firms maintaining databases for marketing purposes could face substantial costs and complications developing and implementing processes for consumers to see, challenge, and correct the data held on them. Information resellers noted that providing access and correction rights for personal information in marketing databases makes little sense because the accuracy of this information is much less important than for information used to make crucial eligibility decisions.

GLBA Applies to Information Resellers That Are Financial Institutions or Receive Information from Financial Institutions

The Gramm-Leach-Bliley Act (GLBA), enacted in 1999, limits, with certain exceptions, the sharing of consumer information by financial institutions and requires them to protect the security and confidentiality of customer information. Further, GLBA limits how those who receive such information may reuse and redisclose it. GLBA’s key provisions with regard to information resellers, therefore, cover the privacy, reuse, redisclosure, and safeguarding of information.
GLBA Privacy Provisions

GLBA’s privacy provisions generally limit financial institutions from sharing nonpublic personal information with nonaffiliated companies without first providing certain notice and, where appropriate, opt-out rights to their own customers and other consumers with whom they interact. GLBA distinguishes between a financial institution’s “customers” and other individuals with whom the institution has more limited dealings, whom the law refers to as “consumers.” Specifically, a consumer is an individual who obtains a financial product or service from a financial institution. A customer, on the other hand, is a consumer who has an ongoing relationship with a financial institution. For example, someone who engages in an isolated transaction with a financial institution, such as making an ATM withdrawal, is a consumer, whereas someone who has a deposit account with a bank would be a customer. While some GLBA requirements, such as the privacy requirements, apply broadly to cover consumer information in many cases, other provisions of GLBA apply only to customer information. For example, GLBA’s safeguarding requirements oblige financial institutions to protect only customer information. GLBA requires financial institutions to provide their customers with a notice at the start of the customer relationship and annually thereafter for the duration of that relationship. The notice must describe the company’s sharing practices and give customers, and in some cases consumers, the right to opt out of some sharing. GLBA exempts companies from notice and opt-out requirements under certain circumstances. For example, financial institutions and CRAs may share personal information for credit-reporting purposes without providing opt-out opportunities, and financial institutions and others may also share this information to protect against or prevent actual or potential fraud and unauthorized transactions.
Thus, financial institutions are not required to provide their customers with opt-out rights before reporting their information to credit bureaus or sharing their information with information resellers for identity verification and fraud purposes. Under another GLBA exception, financial institutions are also not required to provide consumers with an opportunity to opt out of the sharing of information with companies that perform services for the financial institution. GLBA’s privacy provisions apply to information resellers only if (1) the reseller is a GLBA “financial institution” or (2) the reseller receives nonpublic personal information from such a financial institution (see fig. 2). The determination of whether a company is a financial institution under GLBA is complex and, for an information reseller, depends on whether the company’s activities are included in implementing regulations issued by FTC. GLBA defines “financial institutions” as entities that are in the business of engaging in certain financial activities. Such activities include, among other things, traditional banking services, activities that are financial in nature on the FRB list of permissible activities for financial holding companies in effect as of the date of GLBA’s enactment, and new permissible activities. While new financial activities may be identified, those activities are not automatically included in FTC’s definition. FTC defines “financial institutions” as businesses that are “significantly engaged” in financial activities. For example, FRB’s list of “financial activities” includes not only the activity of extending credit, but also related activities such as credit bureau services. Thus, the three nationwide credit bureaus are considered financial institutions subject to GLBA. FTC staff told us that the determination of whether a specific information reseller is a financial institution subject to GLBA depends on the specific activities of the company.
They said they determine whether GLBA applies to an entity on a case-by-case basis and that it is difficult to generalize what types of information resellers are GLBA financial institutions. For example, CRAs other than the three nationwide credit bureaus may not necessarily be subject to GLBA if, for example, their activities do not fall under FRB’s definition of credit bureau services or they do not otherwise engage in any financial activity included in the 1999 FRB list. Only four resellers with whom we spoke—the three nationwide credit bureaus and a specialty CRA that collects deposit account information—told us they consider themselves financial institutions subject to GLBA’s privacy and safeguarding provisions. Moreover, we were told that these provisions do not apply to the entire company but rather only to those activities of the company that are deemed financial in nature. For example, one credit bureau told us that its credit reporting activities fall under GLBA, but that its marketing products, which are not deemed financial in nature, do not fall under GLBA. GLBA not only limits how financial institutions share nonpublic personal information with other companies, but it also restricts what those companies subsequently do with the information. Under GLBA’s “reuse and redisclosure” provision and FTC’s implementing rule, companies that receive information from a financial institution are restricted in how they further share or use that information. If a company receives information under a GLBA exception, then the reseller can only reuse and redisclose the information for activities that fall under the exception under which the information was received. 
Alternatively, if a company receives information from a financial institution in a way not covered by an exception—where an individual has been provided with a GLBA notice and has chosen not to opt out of sharing—then the information may be reused and redisclosed in any way the original financial institution would have been permitted to use or disclose it. As noted earlier, the nationwide credit bureaus sell credit header data—identifying information at the top of a credit report—to other information resellers for use in fraud prevention products. Representatives of two of the credit bureaus and their industry association told us that because credit header data contains information from financial institutions, it is subject to GLBA’s reuse and redisclosure provisions. As a result, the credit bureaus can only sell credit header data under the same GLBA exception under which they received it. Credit bureau representatives said they receive the information from financial institutions under both the consumer reporting and fraud prevention exceptions, and then sell it under the fraud prevention exception. Also, some old credit header data may not be subject to GLBA at all. Prior to GLBA’s enactment in 1999, credit header information sold by credit bureaus—which included names, addresses, aliases, and Social Security numbers—could be used or resold by a third party for any purpose, as long as the information was not used to make eligibility determinations. GLBA placed restrictions on the sale of such nonpublic personal information maintained by GLBA financial institutions. Further, as noted earlier, reuse and redisclosure of the information is also restricted by GLBA. The law’s privacy restrictions generally became fully effective on July 1, 2001. A nationwide credit bureau told us that the restrictions did not apply retroactively to credit header data that credit bureaus already held at the time of GLBA’s enactment in 1999.
The nationwide credit bureau said that just prior to GLBA’s enactment, it created a new database containing “pre-GLBA” credit header data and transferred those data to a separate affiliated company. The company told us that because it gathered these data prior to GLBA’s enactment, the data are not subject to GLBA’s privacy and safeguarding provisions.

GLBA Safeguarding Provisions

The safeguarding provisions of GLBA require financial institutions to take steps to ensure the security and confidentiality of their customers’ nonpublic personal information. Specifically, the agency regulations provide that financial institutions must develop comprehensive written policies and procedures to ensure the security and confidentiality of customer records and information, protect against any anticipated threats or hazards to the security or integrity of such records, and protect against unauthorized access to or use of such records or information that could result in substantial harm or inconvenience to any customer. Although the privacy provisions of GLBA apply broadly to financial institutions’ consumers, GLBA’s safeguarding requirements only establish obligations on financial institutions to protect their customer information. Only information resellers defined as financial institutions under the law are required to implement these safeguards. Several of the information resellers we spoke with noted that although GLBA does not apply to all of their products, they have policies and procedures to protect all of their information in a way consistent with GLBA’s safeguarding requirements. Unlike GLBA’s notice and opt-out requirements (privacy requirements), the law’s safeguarding provisions do not directly extend to third-party companies that receive personal information from financial institutions.
However, the federal agencies’ rules implementing GLBA’s safeguarding provisions require financial institutions to monitor the activities of their service providers and to require them by contract to implement and maintain appropriate safeguards for customer information. Many commercial entities—including many information resellers—are not subject to GLBA and therefore are not explicitly required by a federal statute to have in place policies and procedures to safeguard individuals’ personal data. This raises concerns given that identity theft has emerged as a serious problem and that breaches of sensitive personal data have occurred at a variety of companies that are not financial institutions. For example, in 2005, BJ’s Wholesale Club, which is not considered a GLBA financial institution, settled FTC charges that it engaged in an unfair or deceptive act or practice in violation of the FTC Act by failing to take appropriate security measures to protect the sensitive information of thousands of its customers. FTC alleged that the company’s failure to secure sensitive information was an unfair practice because it caused substantial injury not reasonably avoidable by consumers and not outweighed by offsetting benefits to consumers or competition. Some policymakers, consumer advocates, and industry representatives have advocated explicit statutory requirements that would expand more broadly the number and types of companies that must safeguard their data. Had there been a statutory requirement for BJ’s Wholesale Club to safeguard sensitive information, FTC would have had authority to file a complaint based on the company’s failure to safeguard information. Expanding the class of entities subject to safeguarding laws would impose explicit data security provisions on a larger group of organizations that are maintaining sensitive personal information.
FTC has testified that should Congress enact new data security requirements, FTC’s safeguards rule should serve as a model for an effective enforcement standard because it provides sufficient flexibility to apply to a wide range of companies rather than mandate specific technical requirements that may not be appropriate for all entities. To be most effective, new data security provisions would need to apply both to customer and noncustomer data because the nature of information reseller businesses is such that they hold large amounts of sensitive personal information on individuals who are not their customers.

No Federal Statute Requires Notification of Data Breaches

Currently, there is no federal statute requiring information resellers or most other companies to disclose breaches of sensitive personal information, although at least 32 states have enacted some form of breach notification law. Policymakers and consumer advocates have raised concerns that federal law does not always require companies to reveal instances of the theft or loss of sensitive data. These concerns have been triggered in part by increased public awareness of the problem of identity theft and by a large number of data breaches at a wide variety of public and private sector entities, including major financial services firms, information resellers, universities, and government agencies. In 2005, ChoicePoint acknowledged that the personal records it held on approximately 162,000 consumers had been compromised. As part of a settlement with the company in January 2006, FTC alleged that ChoicePoint did not have reasonable procedures to screen prospective subscribers to its data products, and provided consumers’ sensitive personal information to subscribers whose applications should have raised obvious suspicions.
A December 2005 report by the Congressional Research Service noted that personal data security breaches were occurring with increasing regularity, and listed 97 recent breaches, five of which had occurred at information resellers. Data breaches are not limited to private sector entities, as evidenced by the theft discovered in May 2006 of electronic data of the Department of Veterans Affairs containing identifying information for millions of veterans. Congress has held several hearings related to data breaches, and a number of bills have been introduced that would require companies to notify individuals when such breaches occur. The bills vary in many ways, including differences in who must be notified, the level of risk that triggers a notice, the nature of the notification, exceptions to the requirement, and the extent to which federal law preempts state law. Breach notification requirements have two primary benefits. First, they provide companies or other entities with incentives to follow good security practices so as to avoid the legal liability or public relations risks that may result from a publicized breach of customer data. Second, consumers who are informed of a breach of their personal data can take actions to mitigate potential risk, such as reviewing the accuracy of their credit reports or credit card statements. However, FTC and others have noted that any federal requirements should ensure that customers receive notices only when they are at risk of identity theft or other related harm. To require notices when consumers are not at true risk could create an undue burden on businesses that may be required to provide notices for minor and insignificant breaches. It could also overwhelm consumers with frequent notifications about breaches that have no impact on them, reducing the chance they will pay attention when a meaningful breach occurs. 
At the same time, consumer and privacy groups and other parties have warned against imposing too weak a trigger for notification, and expressed concerns that a federal breach notification law could actually weaken consumers’ security if it were to preempt stronger state laws.

FTC Has Primary Responsibility for Enforcing Information Resellers’ Compliance with Privacy and Information Security Laws

The Federal Trade Commission is the federal agency with primary responsibility for enforcing applicable privacy and information security laws for information resellers. Since 1972, FTC has initiated numerous formal enforcement actions against information resellers for providing consumer report information without adequately ensuring that their customers had a permissible purpose for obtaining the data. FTC has civil penalty authority for violations of FCRA and, in limited situations, the FTC Act, but it does not have such authority for GLBA, which may inhibit its ability to most effectively enforce that law’s privacy and security provisions.

FTC Has Primary Federal Enforcement Authority over Information Resellers

FTC enforces the privacy and security provisions of FCRA and GLBA with respect to information resellers. FCRA provided FTC with enforcement authority over nearly all companies not supervised by a federal banking regulator. Similarly, GLBA provided FTC with rule-making and enforcement authority over all financial institutions and other entities not under the jurisdiction of the federal banking regulators, NCUA, SEC, the Commodity Futures Trading Commission, or state insurance regulators. In addition, the FTC Act provides FTC with the authority to investigate and take administrative and civil enforcement actions against most commercial entities, including information resellers, that engage in unfair or deceptive acts or practices in or affecting commerce.
According to FTC officials, an information reseller could violate the FTC Act if it mishandled personal information in a way that rose to the level of an unfair or deceptive act or practice. State regulators also play a role in enforcing data privacy and security laws. FCRA provides enforcement authority to a state’s chief law enforcement officer, or any other designated officer or agency, although federal agencies have the right to intervene in any state-initiated action. In addition, GLBA allows states to enforce their own information security and privacy laws, including those that provide greater protections than GLBA, as long as the state laws are not inconsistent with requirements under the federal law. Several states, including Connecticut, North Dakota, and Vermont, have enacted restrictions on the sharing of financial information that are stricter than GLBA. States can also enforce their own laws related to unfair or deceptive acts or practices to the extent the laws do not conflict with federal law.

FTC Has Investigated and Initiated Formal Enforcement Actions against Information Resellers for FCRA and FTC Act Violations

Since 1972, FTC has initiated numerous formal enforcement actions against at least 20 information resellers for violating FCRA and, in some cases, the FTC Act. All of these companies were CRAs, and they included the three nationwide credit bureaus as well as a variety of types of specialty CRAs. In most of these cases, FTC charged that the companies provided consumer report information without adequately ensuring that their customers had a permissible purpose for obtaining the data. In many cases, FTC alleged the companies sold consumer reports to users they had no reason to believe intended to use the information legally, or did not require the users to identify themselves and certify in writing the purposes for which they wished to use the reports.
In addition, some companies’ reports allegedly included significant inaccuracies or obsolete information; some companies also failed to reinvestigate disputed information within a reasonable period of time. Among the most significant of these FTC enforcement actions against information resellers are the following:

In 1995, FTC settled charges with Equifax Credit Information Services, the credit bureau subsidiary of Equifax Inc., for alleged violations of FCRA. FTC alleged that the company furnished consumer reports to individuals without a permissible purpose, included derogatory information in consumer reports that should have been excluded after it was disputed by the consumer, and failed to take steps to reduce inaccuracies in reports and reinvestigate disputed information. The consent agreement required Equifax to take steps to improve the accuracy of its consumer reports and limit the furnishing of such reports to those with a permissible purpose under FCRA.

In 2000, FTC ordered the TransUnion Corporation, a nationwide credit bureau, to stop selling consumer reports in the form of target marketing lists to marketers who lack an authorized purpose under FCRA for receiving them. The company had been selling mailing lists of the names and addresses of consumers meeting certain credit-related criteria (such as having certain types of loans). FTC found that the lists were consumer reports and that the lists therefore could not be sold for target marketing purposes.

In January 2006, FTC settled charges against ChoicePoint that its security and record-handling procedures violated federal laws with respect to consumers’ privacy. FTC had alleged the company violated FCRA by providing sensitive personal information to customers despite obvious indications that the information would not be used for a permissible purpose.
For example, ChoicePoint allegedly approved as customers individuals who subscribed to data products for multiple businesses using fax machines in public commercial locations. FTC also charged that the company violated the FTC Act by making false and misleading statements in its privacy policy, which said it provided consumer reports only to businesses that complete a rigorous credentialing process. Under the terms of the settlement, ChoicePoint agreed to pay $10 million in civil penalties—the largest civil penalty in FTC history—and to provide $5 million in consumer redress. ChoicePoint did not admit to a violation of law in settling the charges. A company representative told us it has taken steps since the breach to enhance its customer screening process and to assist affected consumers.

FTC Cannot Levy Civil Penalties for GLBA Information Privacy and Security Violations

FTC is the primary federal agency monitoring information resellers’ compliance with privacy and security laws, but it is a law enforcement agency rather than a supervisory one. Unlike federal financial institution regulators, which oversee a relatively narrow class of entities, FTC has jurisdiction over a large and diverse group of entities and enforces a wide variety of statutes related to antitrust, financial regulation, consumer protection, and other issues. FTC’s mission and resource allocations focus on conducting investigations and, unlike federal financial regulators, FTC does not routinely monitor or examine the companies over which it has jurisdiction. If FTC has reason to believe that violations of laws under its jurisdiction have taken place, it may initiate a law enforcement action. Under its statutory authority, it can ask or compel companies to produce documents, testimony, and other materials. In administrative proceedings, FTC may issue cease and desist orders for unfair or deceptive acts or practices.
Further, FTC generally may seek from the United States district courts a wide range of remedies, including injunctions, damages to compensate consumers for their actual losses, and disgorgement of ill-gotten funds. Depending on the law it is enforcing, FTC may also seek to obtain civil penalties—monetary fines levied for a violation of a civil statute or regulation. Although FTC has civil penalty authority for violations of FCRA and, in limited situations, the FTC Act, GLBA’s privacy and safeguarding provisions do not give it such authority. Currently, FTC may seek an injunction to stop a company from violating these provisions and may seek redress—damages to compensate consumers for losses—or disgorgement. However, determining the appropriate amount of consumer compensation requires having information on who and how many consumers were affected and the harm, in monetary terms, that they suffered. This can be extremely difficult in the case of security and privacy violations, such as data breaches. Such breaches may lead to identity theft, but FTC staff told us that they may not be able to identify exactly which individuals were victimized and to what extent they were harmed—particularly in cases where the potential identity theft could occur years in the future. FTC could benefit from having the authority to impose civil penalties for violations of GLBA’s privacy and safeguarding provisions because such penalties may be more practical enforcement tools for violations involving breaches of mass consumer data. FTC has testified that such authority is often the most appropriate remedy in such cases, and staff told us it could more effectively deter companies from violating provisions of GLBA. Unlike FTC, other regulators have civil penalty authority to enforce violations of GLBA. For example, OCC told us it can enforce GLBA privacy and safeguard provisions with civil money penalties against any insured depository institution or institution-affiliated party.
Agencies Differ in Their Oversight of the Privacy and Security of Personal Information at Financial Institutions

In enforcing privacy and security requirements, federal regulators do not distinguish between the data that regulated entities obtain from information resellers and other personal information these entities maintain. Federal banking regulators have overseen compliance with the privacy and security provisions of GLBA and FCRA by issuing rules and guidance, conducting examinations, and taking formal and informal enforcement actions when needed. Securities and insurance regulators enforce GLBA information privacy and security requirements in a similar fashion, but FTC is responsible for FCRA enforcement among these firms. FTC is also responsible for GLBA and FCRA enforcement for financial services firms not supervised by another regulator and has initiated several enforcement actions, though it does not conduct routine examinations. Credit union, securities, and insurance regulators told us that unlike most of the banking regulators, they do not have full authority to examine their entities’ third-party service providers, including information resellers.

Financial Institutions and Their Regulators Said They Do Not Distinguish between Data from Information Resellers and Other Sources

The information privacy and security provisions of GLBA and FCRA provide several federal and state agencies with authority to enforce the laws’ provisions for financial institutions. As shown in figure 3, GLBA assigns enforcement responsibility for the financial institutions they oversee to federal banking and securities regulators and state insurance regulators, and gives FTC jurisdiction over all other financial institutions. FCRA similarly assigns the federal banking regulators authority over the institutions they oversee and gives FTC jurisdiction over other entities.
FCRA assigns FTC enforcement responsibility for securities and insurance companies; securities and insurance regulators have no statutory responsibility to enforce FCRA. Financial regulators told us that in their oversight of companies’ compliance with privacy laws, they generally do not distinguish between data obtained from information resellers and data obtained from other sources. The nonpublic personal information maintained by financial institutions includes both data they collect directly from their customers and data purchased from information resellers, such as credit reports or marketing lists. Banking and securities regulators told us their efforts to oversee the privacy and security of nonpublic personal information do not focus in particular on data that came from information resellers but rather look holistically at a financial institution’s information security and compliance with applicable laws. For example, OCC and FRB officials said their examiners enforce the privacy and safeguarding requirements of GLBA and FCRA regardless of whether the source of the data is an information reseller, a customer, or another source. GLBA’s safeguarding requirements apply only to nonpublic personal information that financial institutions maintain on their customers and not to information they maintain about other consumers (noncustomers). However, representatives of financial institutions we interviewed said that as a matter of policy, they generally apply the same information safeguards to both customer and consumer information. They said that their information safeguards focus on the sensitivity of the information rather than whether the person is a customer. For example, files containing Social Security numbers would have more stringent safeguards than those containing only names and addresses.
Officials of a global investment banking and brokerage firm told us that although their firm maintains separate databases on customers and consumers targeted for marketing, both databases use the higher security standard required for customer information. Another company with similar practices noted that it applies the higher standard to all information rather than setting up many different safeguarding policies and procedures. Other companies noted that public relations and reputational risk concerns motivate them to maintain high safeguards to prevent any consumer information from being lost or stolen. Similarly, federal banking regulators told us that failing to safeguard consumer information may not be a violation of GLBA but is still taken very seriously because it represents a threat to a bank’s safety and soundness, poses reputational risks, and reflects a weakness in a bank’s corporate governance.

Federal Banking Agencies Provide Guidance and Examine Regulated Banking Organizations for GLBA and FCRA Compliance

The banking regulators responsible for GLBA and FCRA enforcement have issued regulations and other guidance on information privacy and security requirements. The individual banking regulators examine the financial institutions under their jurisdiction for compliance with GLBA and FCRA information privacy and safeguarding requirements and have taken enforcement actions for violations.

Regulations and Other Guidance

The banking agencies acting jointly and individually, and in coordination with FTC, have issued regulations and other guidance for financial institutions to follow in implementing the privacy and safeguarding requirements of GLBA. In 2000, following the law’s passage, the banking agencies—OCC, FRB, OTS, FDIC, and NCUA—issued rules for compliance with the law’s information privacy requirements. These rules helped financial institutions implement GLBA’s notice and opt-out requirements.
For example, they provided examples of types of information regulated by GLBA. In 2001, the agencies jointly issued guidelines establishing standards for GLBA’s safeguarding requirements to assist financial institutions in establishing administrative, technical, and physical safeguards for customer information as required by law. In addition to the guidelines that implement GLBA safeguarding requirements, these regulators have in some cases issued guidance to provide further assistance to their institutions. For example, the banking agencies issued a guide on small entities’ compliance with GLBA’s privacy provision to help companies identify and comply with the requirements. The banking agencies also have issued additional written interagency guidance for financial institutions relating to notification of their customers in the event of unauthorized access to their information where misuse of the information has occurred or is reasonably possible. The banking regulators have also issued rules and regulations for their institutions to implement certain provisions of the Fair and Accurate Credit Transactions Act of 2003 (FACT Act), which amends FCRA. For example, in 2004, in coordination with FTC, these agencies issued a final rule to implement the FACT Act requirement that persons, including financial institutions, properly dispose of consumer report information and records. Some provisions—such as restrictions on how financial institutions can share data with their affiliates for marketing purposes—have yet to be finalized by the banking or other agencies. Through the Federal Financial Institutions Examination Council (FFIEC)—a formal interagency body comprising representatives from OCC, OTS, FRB, FDIC, and NCUA that coordinates examination standards and procedures for their institutions—the banking agencies have also issued guidance to help bank examiners oversee the integrity of information technology at their institutions.
For example, FFIEC developed the FFIEC IT Examination Handbook, which is composed of 12 booklets designed to help examiners and organizations determine the level of security risks at financial institutions and evaluate the adequacy of the organizations’ risk management. Representatives of banking regulators say their examiners rely on these booklets in addition to the GLBA and FCRA guidance when examining the integrity of an institution’s information privacy and security procedures. Some of these booklets help examiners oversee financial institutions’ use of information resellers and other third-party technology service providers by addressing topics such as banks’ outsourcing of technology services or banks’ supervision of their technology service providers. Financial institution regulators told us their examiners use these booklets to oversee the soundness of their institutions’ technology services and to address information security issues posed by third-party technology service providers such as information resellers.

Examinations and Enforcement Actions

Banking regulators regularly examine regulated banks, thrifts, and credit unions for compliance with GLBA and FCRA requirements. Each regulatory agency told us that its safety and soundness, compliance, and information technology examinations include checks on whether its institutions are in compliance with GLBA’s and FCRA’s provisions related to the privacy and security of personal information. For example, OCC examination procedures tell examiners to review banks’ monitoring systems and procedures to detect actual and attempted attacks on or intrusions into customer information systems. However, the scope of the regulators’ reviews with regard to privacy and security matters can vary depending on the degree of risk associated with the institution examined.
According to the banking agencies, their examinations of institutions’ GLBA and FCRA compliance have discovered few material deficiencies and violations requiring formal enforcement actions. Instead, they have mostly found various weaknesses that they characterized as technical in nature and that required informal corrective action. FDIC officials said that between 2002 and 2005, the agency took 12 formal enforcement actions for GLBA violations and no formal enforcement actions under FCRA. They noted that FDIC has also taken informal enforcement actions to correct an institution’s overall compliance management system, which covers all of the consumer protection statutes and regulations in the examination scope. According to OCC officials, between October 1, 2000, and September 30, 2005, the agency took 18 formal enforcement actions under GLBA and no formal enforcement actions under FCRA. OCC’s actions in these cases resulted in outcomes such as cease and desist orders and civil money penalties levied against violators. The agency also informally required banks to take corrective action in several instances, such as requiring a bank to notify customers whose accounts may have been compromised, or requiring a bank to correct and reissue its initial privacy notice. According to OCC staff, OCC’s examinations for compliance with GLBA’s privacy requirements most commonly found that banks’ initial privacy notices were not clear and conspicuous, and its examinations for compliance with GLBA’s safeguarding requirements most commonly found cases of inadequate customer information programs, risk assessment processes, testing, and reports to the board. FRB officials said the agency has taken 12 formal enforcement actions in the past 5 years for violations of GLBA’s information-safeguarding standards and no formal actions for FCRA violations.
They said FRB has taken several informal enforcement actions, including three related to violations of Regulation P, which implements GLBA’s privacy requirements, and five informal actions for violations of FCRA. According to FRB staff, FRB’s examinations for compliance with the interagency information security standards have found cases of inadequate customer information security programs, board oversight, and risk assessments, as well as cases of incomplete assessment of physical access controls and safeguarding of the transmission of customer data. The most commonly found problem in FRB’s examinations for compliance with Regulation P was banks’ failure to provide clear and conspicuous initial notices of their privacy policies and procedures. With regard to FCRA compliance, the violations cited most frequently were the failure to provide notices of adverse actions based on information contained in consumer reports or obtained from third parties.

Securities Regulators Oversee GLBA Compliance of Securities Firms

SEC, NASD, and NYSE Regulation oversee securities industry participants’ compliance with GLBA’s privacy and information safeguarding requirements. Similar to the banking agencies, they have issued rules and other guidance, conducted examinations of firms’ compliance with federal securities laws and regulations, and, if appropriate, taken enforcement actions.

Regulations and Other Guidance

In June 2000, SEC adopted Regulation S-P, which implements GLBA’s Title V information privacy and safeguarding requirements among the broker-dealers, investment companies, and SEC-registered investment advisers subject to SEC’s jurisdiction. Regulation S-P contains rules of general applicability that are substantively similar to the rules adopted by the banking agencies. In addition to providing general guidance, Regulation S-P contains numerous examples specific to the securities industry to provide more meaningful guidance to help firms implement its requirements.
For example, the rule provides detailed guidance on the provision covering privacy and opt-out notices when a customer opens a brokerage account. It also contains a section regarding procedures to safeguard information, including the disposal of consumer report information. Since Regulation S-P was adopted, SEC staff have issued additional written guidance in the form of Staff Responses to Questions about Regulation S-P. According to SEC staff, companies also receive feedback on Regulation S-P compliance during the examination process, as well as during telephone inquiries made to SEC offices. However, unlike the federal banking agencies, SEC has issued no additional written guidance on institutions notifying customers in the event of unauthorized access to customer information. SEC staff said they are considering possible measures that would address information security programs in more detail, including the issue of how to respond to security breaches.

Examinations and Enforcement Actions

SEC has examined registered firms for Regulation S-P compliance. SEC staff said compliance with Regulation S-P was a focus area in SEC examinations during the first 1 to 1½ years after July 2001, when it became effective. During this period, Regulation S-P compliance was reviewed in 858 broker-dealer examinations, of which 105 resulted in findings. Also, during this period, Regulation S-P compliance was reviewed in 1,174 investment adviser examinations, of which 128 resulted in findings, and 218 investment company examinations, of which 17 resulted in findings. SEC staff said that more recently SEC has adopted a risk-based approach to determine the depth of a review of compliance with Regulation S-P. Under this approach, an initial review of compliance with Regulation S-P is done to determine if a closer look is warranted.
During the past 2½ years, compliance with Regulation S-P was reviewed in 1,891 investment adviser examinations, of which 301 resulted in findings, and 257 investment company examinations, of which 20 resulted in findings. SEC staff said they had not broken out separate Regulation S-P examination findings of broker-dealer examinations for this period and could not provide those numbers. They said the most common deficiencies were failure to provide privacy notices, no or inadequate privacy policy, and no or inadequate policies and procedures for safeguarding customer information. SEC staff said they had not found any deficiencies during their exams that warranted formal enforcement actions. They told us they have dealt with Regulation S-P compliance more as a supervisory matter and required registrants to resolve deficiencies without taking formal actions. SEC staff also said that SEC is now conducting a special review coordinated with NYSE Regulation looking at how broker-dealers are outsourcing certain functions that involve customer information. They said they are concerned with how registrants are managing the outsourcing process, including, among other things, due diligence in contractor selection, monitoring contractor performance, and disaster recovery/business continuity planning.

NASD and NYSE Regulation Oversee Compliance of Member Broker-Dealers

NASD and NYSE Regulation also oversee Regulation S-P compliance among member broker-dealers. According to NASD officials, NASD took a two-pronged approach to ensure that its members understand their obligations under Regulation S-P and comply with its requirements. First, NASD issued guidance to its members regarding requirements of the regulation. For example, when Regulation S-P was adopted, NASD issued guidance to facilitate compliance by providing a notice designed to inform and educate its members about Regulation S-P.
In the summer of 2001, NASD issued an article setting forth questions and answers regarding Regulation S-P and reminding members of the mandatory compliance deadline. In July 2005, NASD issued another notice reminding members of their obligations relating to the protection of customer information. Second, according to NASD officials, NASD conducts routine examinations—approximately 2,500 per year—to check compliance with NASD rules and the federal securities laws, including Regulation S-P. Examiners check compliance with Regulation S-P using a risk-based approach in which examiners review certain information such as supervisory review procedures to assess the controls that exist at a firm. Depending on its findings, NASD determines whether to inspect in more detail the firm’s Regulation S-P policies and procedures to ensure they are reasonably designed to achieve compliance with Regulation S-P, including its safeguarding and privacy requirements. Regulation S-P compliance was reviewed in 4,760 NASD examinations of broker-dealers between October 1, 2000, and September 30, 2005. These examinations resulted in 502 informal actions and two formal actions—called Letters of Acceptance, Waiver, and Consent—for Regulation S-P violations. According to NASD, in one formal action, it censured and fined the respondents a total of $250,000 for various violations related to their failure to establish supervisory procedures and devote sufficient resources to supervision, including Regulation S-P compliance. In the other action, according to NASD, it censured and fined the firm and a principal associated person $28,500 and suspended the person for 30 days for failing to provide privacy notices to its customers and for several other non-privacy-related violations. Similarly, NYSE Regulation issued guidance on Regulation S-P to its member firms and sent its members an information memo reminding them of Regulation S-P requirements shortly before they became mandatory. 
NYSE Regulation’s Sales Practice Review Unit conducts examinations of member firms’ compliance with Regulation S-P and other privacy requirements on a 1-, 2-, or 4-year cycle, or when the member firm is otherwise deemed to be at a certain level of risk.

State Insurance Regulators Require Insurers to Comply with Information Privacy and Security Provisions, but Enforcement May Be Limited

GLBA designates state insurance regulators as the authorities responsible for enforcement of its information privacy and safeguarding provisions among insurance companies. The individual states are responsible for enforcing GLBA with respect to insurance companies licensed in the state, and they may issue regulations. The National Association of Insurance Commissioners (NAIC) has issued model rules to guide states in developing programs to enforce GLBA requirements and has sponsored a multistate review of insurance companies’ performance in this regard.

NAIC Has Developed Model GLBA Privacy and Safeguarding Rules, but Not All States Have Adopted GLBA Regulations

NAIC has developed two model rules for states to use in developing regulations or laws to implement the GLBA information privacy and safeguarding provisions among the insurance companies they regulate. The first model rule, the Privacy of Consumer Financial and Health Information Regulation, issued in 2000, includes notice and opt-out requirements relating to insurance entities and can be used by states as a model for state laws and regulations. An August 2005 NAIC analysis showed that all states and the District of Columbia had adopted insurance laws or regulations to implement GLBA’s requirements related to the privacy of financial information.
The second model rule, the Standards for Safeguarding Customer Information Model Regulation, issued in 2002, establishes standards for developing and implementing administrative, technical, and physical safeguards to protect the security, confidentiality, and integrity of customer information. In contrast to the privacy model, an October 2005 NAIC analysis showed that 17 states had yet to adopt a law or regulation setting standards for safeguarding customer information. In April 2002, GAO reported that insurance customer information and records in states that had not established safeguards may not be subject to the consistent level of legal protection envisioned by GLBA’s privacy provisions.

Individual State Insurance Regulators Have Not Consistently Examined for Privacy and Security Compliance

Individual state insurance regulators have procedures for examining companies for compliance with information privacy and safeguarding requirements but do not routinely do so. According to an NAIC official, NAIC’s Market Conduct Examiners Handbook contains detailed examination procedures for reviewing information privacy requirements, and its Financial Examiners Handbook has a segment devoted to security of computer-based systems. He said the individual state regulators can examine for compliance with privacy requirements as part of their comprehensive examinations of companies, but that states are focusing less on conducting comprehensive examinations and more on targeted examinations. Because there have been few complaints regarding privacy matters, however, he said the states are probably doing few targeted examinations of compliance with privacy requirements. To forestall possible multiple, overlapping, and inconsistent examinations by numerous states, NAIC in 2005 sponsored a multistate review to gather information on insurance companies’ compliance with GLBA privacy and safeguarding provisions.
The review team, led by the District of Columbia’s Department of Insurance, Securities and Banking (DISB), with the participation of 19 states, covered more than 100 of the largest insurance groups, representing about 800 insurance companies operating in the United States. The review team administered a survey questionnaire, reviewed each insurer’s responses to the questionnaire, and subsequently held conferences with representatives of the insurer. The review resulted in 22 findings related to the risk assessment process, including failure to work toward a formalized assessment process to identify risks of internal and external threats and hazards to the safeguarding, confidentiality, and integrity of information; 18 findings related to GLBA’s requirements for information storage, transmission, and integrity; 16 findings related to the delivery of privacy notices (although 12 of those findings related to the provision of the initial notice rather than recurring findings); and no findings related to GLBA procedures for providing opt-out notifications or procedures for collecting opt-out elections. These findings were similar to those of other financial regulators’ examinations of GLBA compliance. However, unlike the other regulators, state insurance regulators do not have comparable examination programs to follow up to ensure that such findings are corrected and do not become more numerous. The DISB qualified the scope of its survey by noting that it did not include (1) a review of the insurer’s efforts with respect to remediation activities, (2) a detailed analysis of the effectiveness of the insurer’s plans to correct privacy problems or to protect the business against the consequences associated with any privacy-related occurrences, or (3) a determination of steps the insurer must take to become privacy compliant or maintain privacy compliance. 
Although this survey was not a substitute for regulatory examination of insurers’ compliance with GLBA, it could serve as a basis for further examination of such compliance. Other financial regulators have gathered preliminary information that they then use as a basis for further examinations of regulated entities. For example, in 2003, SEC followed up on reports of abusive practices in mutual fund trading by requesting information from various mutual fund companies on these trading practices, and this served as a basis for further examinations of individual companies. According to NAIC officials, the DISB survey results were never reviewed by state insurance regulators as part of their examinations of insurance companies. NAIC officials said the survey results were reviewed by NAIC’s Market Analysis Working Group and referred back to DISB to determine what, if any, additional follow-up was necessary. DISB staff told us that most state insurance regulators, as well as DISB, do not have staff with adequate expertise to actually examine insurers’ information privacy and safeguarding programs. They said the states would have to contract with vendors to obtain this expertise.

FTC Enforces GLBA and FCRA Compliance of Financial Institutions within Its Jurisdiction

As discussed earlier, FTC enforces GLBA for financial institutions not otherwise assigned to the enforcement authority of another regulator, and enforces FCRA for the same entities and others, including securities firms and insurance companies. FTC has issued rules implementing GLBA and FCRA information privacy and safeguarding requirements and developed other materials that provide detailed guidance for companies to implement the requirements. FTC issued two rules—referred to as the Privacy Rule and the Safeguards Rule—to implement GLBA’s requirements for financial institutions not covered by similar regulations issued by the financial institution regulators.
These rules provide examples to clarify matters such as what constitutes a customer relationship and what types of information are covered under the law’s sharing restrictions. FTC has also issued rules to implement the FACT Act amendments to FCRA, although some rules have not yet been issued in final form. FTC provides additional guidance to financial institutions on how to comply with GLBA and FCRA in the form of business alerts, fact sheets, frequently asked questions, and a compliance guide for small businesses. For example, FTC has issued alerts on safeguarding customers’ personal information, disposing of consumer report information, and insurers’ use of consumer reports. Between 2003 and 2005, FTC took enforcement actions against at least seven financial service providers for violations of GLBA information privacy and safeguarding requirements, resulting in settlement agreements with an Internet mortgage lender accused of false advertising and failure to protect sensitive consumer information; a credit card telemarketer that allegedly failed to notify consumers of its privacy practices and obtained information from consumers under false pretenses; two or more mortgage lenders charged with failing to protect consumers’ personal information; and three nonprofit debt management organizations accused of failing to notify consumers how their personal information would be used, and other violations.

NCUA, Securities, and Insurance Regulators Do Not Have Full Authority to Examine Third-Party Vendors, Including Information Resellers

As part of their bank examinations, FRB, FDIC, OCC, and OTS have authority to examine third-party service providers, such as some information resellers with which banks may do business. Technology service provider examinations are done under the auspices of FFIEC and coordinated with other regulators.
Some vendors may be examined routinely; for example, officials of one information reseller providing services to banks told us that it is subject to periodic examinations under the auspices of FFIEC. In other cases, a service provider may be examined only once for a particular purpose. For example, OCC and FDIC examiners visited Acxiom, which provides a number of banks with information services, such as analyzing and enhancing customer information for marketing purposes. The examiners’ visit focused on a security breach in which a client was granted access to information files obtained from other clients. According to Acxiom officials, this was a one-time review of the breach that occurred in its computer services operations and did not result in the company being added to a list of technology service providers that banking regulators routinely review. Unlike the banking regulators, NCUA does not have authority to examine the third-party service providers of credit unions, including information resellers. In 2003, we reported that credit unions increasingly rely on third-party vendors to support technology-related functions such as Internet banking, transaction processing, and fund transfers. With greater reliance on third-party vendors, credit unions subject themselves to operational and reputational risks if they do not manage these vendors appropriately. While NCUA has issued guidance regarding the due diligence credit unions should apply to third-party vendors, the agency has no enforcement powers to ensure full and accurate disclosure. As such, in 2003 we suggested that Congress consider providing NCUA with legislative authority to examine third-party vendors, and NCUA has also requested such authority from Congress. However, an NCUA official told us that few of these vendors are information resellers because credit unions typically do not use them to a great extent. 
He said that credit unions generally use methods other than resellers to comply with PATRIOT Act customer identification requirements, and credit unions’ bylaws typically forbid sharing customers’ personal financial information for marketing purposes. Similarly, federal securities regulators and representatives of state insurance regulators told us they generally do not have authority to examine or review the third-party service providers of the firms they oversee, including information resellers. According to SEC staff, the agency can examine the third-party vendor only if the firm also is an SEC-registered entity over which the agency has examination authority. However, they said that, to date, SEC has not seen sufficient problems with third-party vendors to justify requesting the authority to examine them at this time. They noted that in their examinations, they hold entities accountable for ensuring that personal information is appropriately safeguarded whether the information is managed in-house or by a vendor. Similarly, NASD officials said that although they do not have jurisdiction to oversee third-party vendors, their examiners review member firms’ procedures for monitoring contractors, including whether such contracts contain clauses ensuring the privacy and security of customer information. In July 2005, NASD issued a Notice to Members reminding them that when they outsource certain activities as part of their business structure, they must conduct a due diligence analysis to ensure that the third-party service provider can adequately perform the outsourced functions and comply with federal securities laws and NASD rules. Similarly, NYSE Regulation examinations review third-party contracts to ensure that they contain confidentiality clauses prohibiting the contractor from using or disclosing customer information for any use other than the purposes for which the information was provided to the contractor.
NYSE Regulation has proposed a rule governing its members’ use of contractors, which, if adopted, will require member firms to follow certain steps in selecting and overseeing contractors, such as applying prescribed due diligence standards and the record-keeping requirements of the securities laws. State insurance regulators generally do not have authority to examine information resellers and other third-party service providers. NAIC officials told us that state insurance regulators can only examine information resellers or other companies if they are registered as rating organizations—companies that collect and analyze statistical information to assist insurance companies in their rate-making process. For example, NAIC said state insurance regulators can examine ISO—one of the resellers included in our review—because it is registered with states as a rating organization.

Conclusions

Advances in information technology and the computerization of records have spawned the growth of information reseller businesses, which regularly collect, process, and sell personal information about nearly all Americans. The information maintained by resellers commonly includes sensitive personal information, such as purchasing habits, estimated incomes, and Social Security numbers. The expansion in the past few decades in the sale of personal information has raised concerns about both personal privacy and data security. Many consumers may not be aware how much of their personal information is maintained and how frequently it is disseminated. In addition, identity theft has emerged as a serious problem, and data security breaches have occurred at some major resellers. At the same time, however, information resellers also provide some important benefits to both individuals and businesses. Financial institutions rely heavily on these resellers for a variety of vital purposes, including credit reporting (which reduces the cost of credit), PATRIOT Act compliance, and fraud detection.
As Congress weighs various legislative options, it will need to consider the appropriate balance between protecting consumers’ privacy and security interests and the benefits conferred by the current regime that allows a relatively free flow of information between companies. No federal law explicitly requires all information resellers to safeguard all of the sensitive personal information they may hold. As we have discussed, FCRA applies only to consumer information used or intended to be used to help determine eligibility, and GLBA’s safeguarding requirements apply only to customer data held by GLBA-defined financial institutions. Much of the personal information maintained by information resellers that does not fall under FCRA or GLBA is not necessarily required by federal law to be safeguarded, even when the information is sensitive and subject to misuse by identity thieves. Given financial institutions’ widespread reliance on information resellers to comply with legal requirements, detect fraud, and market their products, the possibility for misuse of this sensitive personal information is heightened. Requiring information resellers to safeguard all of the sensitive personal information they hold would help ensure that explicit data security requirements apply more comprehensively to a class of companies that maintains large amounts of such data. Further, although the scope of this report focused on information resellers, this work has made clear to us that a wide range of retailers and other entities also maintain sensitive personal information on consumers. As Congress considers requiring information resellers to better ensure that all of the sensitive personal information they maintain is safeguarded, it may also wish to consider the potential costs and benefits of expanding more broadly the class of entities explicitly required to safeguard sensitive personal information. 
Any new safeguarding requirements would likely be more effectively implemented and least burdensome if, as with FTC’s Safeguards Rule, they provided sufficient flexibility to account for the widely varying size and nature of businesses that hold sensitive personal information. The proliferation of sensitive personal information in the marketplace and increasing numbers of high-profile data breaches have motivated many states to enact data security laws with breach notification requirements. No federal statute currently requires breach notification, but such legislation could have certain benefits. Companies would have incentives to improve data safeguarding to reduce the reputational risk of a publicized breach, and consumers would know to take potential action against a risk of identity theft or other related harm. Congress has held many hearings related to data breaches, and several bills have been introduced that would require breach notification. We support congressional actions to require information resellers, and other companies, to notify individuals when breaches of sensitive information occur. In previous work, we have also identified key benefits and challenges of notifying the public about security breaches that occur at federal agencies. To be cost effective and reduce unnecessary burden on consumers, agencies, and industry, it would be important for Congress to identify a threshold for notification that would allow individuals to take steps to protect themselves where the risk of identity theft or other related harm exists, while ensuring they are only notified in cases where the level of risk warrants such action. Objective criteria for when notification is required and appropriate enforcement mechanisms are also important considerations. Congress should also consider whether and when a federal breach notification law would preempt state laws. 
FTC has taken many significant enforcement actions against information resellers and other companies that have violated federal privacy laws, and it is important that the agency have the appropriate enforcement remedies. Unlike FCRA, GLBA does not provide FTC with civil penalty authority, and agency staff have expressed concerns that the remedies FTC has available under GLBA—such as disgorgement and consumer redress—are impractical enforcement tools for violations involving breaches of mass consumer data. Providing FTC with the authority to seek civil penalties for violations of GLBA could help the agency more effectively enforce that law’s safeguarding provisions. Federal financial regulators generally appear to provide suitable oversight of their regulated entities’ compliance with privacy and information security laws governing consumer information. The regulators do not typically distinguish between data that entities receive from resellers and other sources, but this seems reasonable given that the sensitivity, rather than the source, of the data is the most important factor in examining data security practices. However, state insurance regulators do not have examination programs comparable to those of other financial regulators to ensure consistent GLBA compliance. This may be a source of concern given the recent multistate survey that identified deficiencies in GLBA compliance at insurance companies.

Matters for Congressional Consideration

Safeguarding provisions of FCRA and GLBA do not apply to all sensitive personal information held by information resellers. To ensure that such data are protected on a more consistent basis, Congress should consider requiring information resellers to safeguard all sensitive personal information they hold. As Congress considers how best to protect data maintained by information resellers, it should also consider whether to expand more broadly the class of entities explicitly required to safeguard sensitive personal information.
If Congress were to choose to expand safeguarding requirements, it should consider providing the implementing agencies with sufficient flexibility to account for the wide range in the size and nature of entities that hold sensitive personal information. To ensure that the Federal Trade Commission has the tools it needs to most effectively act against data privacy and security violations, Congress should consider providing the agency with civil penalty authority for its enforcement of the Gramm-Leach-Bliley Act’s privacy and safeguarding provisions.

Recommendation for Executive Action

We recommend that state insurance regulators, individually and in concert with the National Association of Insurance Commissioners, take additional measures to ensure appropriate enforcement of insurance companies’ compliance with the privacy and safeguarding provisions of the Gramm-Leach-Bliley Act. As a first step, state insurance regulators and NAIC should follow up appropriately on deficiencies related to compliance with these provisions that were identified in the recent nationwide survey as part of a broader targeted examination of GLBA privacy and safeguarding requirements.

Agency Comments

We provided a draft of this report to FDIC, FRB, FTC, NAIC, NASD, NCUA, NYSE Regulation, OCC, OTS, and SEC for comment. These agencies provided technical comments, which we incorporated, as appropriate. In addition, FTC provided a written response, which is reprinted in appendix III. In its response, FTC noted that it has previously recommended that Congress consider legislative actions to increase the protection afforded personal sensitive data, including extending GLBA safeguarding principles to other entities that maintain sensitive information. FTC also noted that it concurs with our finding that a civil penalty often is the most appropriate and effective remedy in cases under GLBA privacy and safeguarding provisions.
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will provide copies to other interested congressional committees, as well as the Chairman of the Board of Governors of the Federal Reserve System, the Acting Chairman of the Federal Deposit Insurance Corporation, the Chairman of the Federal Trade Commission, the President of the National Association of Insurance Commissioners, the Chairman and Chief Executive Officer of NASD, the Chairman of the National Credit Union Administration, the Chief Executive Officer of New York Stock Exchange Regulation, the Comptroller of the Currency, the Director of the Office of Thrift Supervision, and the Chairman of the Securities and Exchange Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
Appendix I: Scope and Methodology

Our report objectives were to examine (1) how financial institutions use data products supplied by information resellers, the types of information contained in these products, and the sources of the information; (2) how federal laws governing the privacy and security of personal data apply to information resellers, and what rights and opportunities exist for individuals to view and correct data held by resellers; (3) how federal financial institution regulators and the Federal Trade Commission (FTC) oversee information resellers’ compliance with federal privacy and information security laws; and (4) how federal financial institution regulators, state insurance regulators, and FTC oversee financial institutions’ compliance with federal privacy and information security laws governing consumer information, including information supplied by information resellers. For the purposes of this report, we defined “information resellers” broadly to refer to businesses that collect and aggregate personal information from multiple sources and make it available to their customers. The three nationwide credit bureaus were included in this definition. Our audit work focused primarily on larger information resellers and did not cover smaller Internet-based resellers because these companies were rarely or never used by financial institutions from which we collected information. Our scope was limited to resellers’ use and sale of personal information about individuals; it did not include other information that resellers may provide, such as data on commercial enterprises. Our review of financial institutions covered the banking, securities, property and casualty insurance, and consumer lending and finance industries, but excluded life insurance and health insurance companies because they use health data that are covered by federal laws that were outside the scope of our work.
In addition, we included financial institutions’ use of reseller information for purposes related to customers and other consumers, but excluded their use of reseller products for screening their own employees or making business decisions such as where to locate a facility. To address all of the objectives, we interviewed or received written responses from 10 information resellers—Acxiom, eFunds, ChoicePoint, Equifax, Experian, LexisNexis, ISO, Regulatory DataCorp, Thomson West, and TransUnion. We also reviewed marketing materials, sample contracts, sample reports, and other items from these companies that provided detailed information on the data contained in their products. These companies were selected because, according to the financial institutions, trade associations, and industry experts we spoke with, they constitute most of the largest and most significant information resellers offering services to the financial industry sector, and collectively they represent a variety of different products. The information resellers we included and the products they offer do not necessarily represent the full scope of the industry. We also spoke with representatives of the Consumer Data Industry Association and the Direct Marketing Association, trade associations that represent portions of the information reseller industry. To determine how financial institutions use data products supplied by information resellers and the types and sources of the data, we also interviewed or received written responses, and collected and analyzed documents, from knowledgeable representatives at financial institutions in the banking, securities, property and casualty insurance, and consumer lending and finance industries. We gathered information from Bank of America, Citigroup, and JPMorgan Chase, which are the three largest U.S.
bank holding companies by asset size, as well as Goldman Sachs, Morgan Stanley, and Merrill Lynch, which are the three largest global securities firms by revenue. We also interviewed representatives at American International Group, State Farm, and Allstate, which are the three largest U.S. insurance companies and include the two largest property/casualty insurers. We also interviewed representatives at GE Consumer Finance, one of the world’s 10 largest consumer finance companies, and four other financial institutions—American Express, Wells Fargo Financial, Security Finance, and Check into Cash—which together offer a variety of consumer lending products, including automobile financing, credit cards, and payday loans. We also interviewed officials at trade associations representing these financial services industries, including the American Bankers Association, Independent Community Bankers of America, Securities Industry Association, Investment Company Institute, American Insurance Association, and American Financial Services Association. These financial institutions from which we gathered information conduct a significant portion of the transactions in the financial services sector. For example, they collectively own 9 of the 50 largest commercial depository institutions, holding about 20 percent of total domestic deposits, as well as 8 of the 10 largest credit card issuers. The insurance companies we spoke with represent about a quarter of the U.S. property and casualty insurer market share. In most cases, we selected these financial institutions by determining the largest companies in each of the four industries, based on data from reputable sources. In two cases, we spoke with firms because they were recommended by representatives of their trade association. Our findings on how financial institutions use information resellers are not representative of the entire financial services industry. 
However, we believe they accurately represent institutions’ use of resellers because our findings from discussions with these companies and their representatives were corroborated by discussions with information resellers, regulators, legal experts, and privacy and consumer advocacy groups. To identify how federal privacy and data security laws and regulations apply to information resellers and individuals’ rights and opportunities to view and correct reseller data, we reviewed and analyzed relevant federal laws, regulations, and guidance. We also met with staff of the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Federal Trade Commission, National Credit Union Administration, Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision, and Securities and Exchange Commission, as well as the National Association of Insurance Commissioners (NAIC), NASD (formerly known as the National Association of Securities Dealers), New York Stock Exchange Regulation (NYSE Regulation), and the District of Columbia’s Department of Insurance, Securities and Banking (DISB). In addition, we interviewed three legal experts in the area of privacy law who work in academia or represent financial institutions and information resellers. We also interviewed and collected documents from information resellers, financial institutions, federal regulators, and a variety of privacy and consumer advocacy groups, to gather views on the applicability of laws to information resellers and the adequacy of existing laws. To describe how regulators oversee information resellers’ and financial institutions’ compliance with federal privacy and data security laws, we met with the federal agencies, financial institutions, information resellers, and other parties listed above.
We also reviewed federal agencies’ guidance, examination procedures, settlement agreements, and other documents, as well as relevant reports and documents from NAIC, NASD, and NYSE Regulation. To help illustrate regulators’ examination activities in this area, we also met with OCC staff who conduct examinations at three national banks and reviewed their examination workpapers. We also gathered data from regulators about the number and nature of examination findings, where applicable. To describe the efforts of state insurance regulators to oversee insurance companies’ compliance with the Gramm-Leach-Bliley Act (GLBA), we also reviewed the DISB survey report of insurance companies’ implementation of GLBA policies and procedures. DISB used the survey responses to determine findings for each company on the level of compliance with GLBA and related NAIC model rule provisions. The DISB review defined a “finding” as an occurrence of a perceived gap between a company’s privacy practices and procedures and the guidelines outlined in one of the model acts or regulations of NAIC. The findings were derived from responses to the survey questions. The companies DISB surveyed comprised major companies, including property and casualty insurance groups with 2002 gross written premiums of approximately $250 million or more; life insurance groups with 2002 gross written premiums of approximately $200 million or more; and health insurance groups with 2002 gross written premiums of approximately $500 million or more. This initial list contained 129 insurance groups. 
After the initial list was compiled, 26 groups were exempted from the survey examination for one of three reasons: (1) there was a prior, ongoing, or upcoming examination of the group that included (or would include) a comprehensive review of the group’s privacy policy (23 groups); (2) the group engaged primarily or solely in reinsurance (2 groups); or (3) the state insurance regulator for the company’s state of domicile requested that the group be exempted (1 group). The survey questionnaire included 93 questions asking for detailed documentary and testimonial evidence of companies’ level of compliance with GLBA and related NAIC model rule provisions. We conducted our review from June 2005 through May 2006 in accordance with generally accepted government auditing standards.

Appendix II: Sample Information Reseller Reports

This appendix provides examples of reports from different types of products sold by information resellers. These sample reports, which are reprinted with permission, contain fictitious data and have also been redacted to reduce possible coincidental references to actual people or places.

Sample Insurance Claims History Report

This sample insurance claims history report from ChoicePoint provides insurers with insurance claims histories on individuals applying for coverage.

Sample Deposit Account History Report

ChexSystems, a subsidiary of eFunds, offers a product that assesses risks associated with individuals applying to open new deposit accounts. The report includes information on an applicant’s account history, including accounts closed for reasons such as overdrafts, returned checks, and check forgery. The report may include a numeric score representing the individual’s estimated risk.

Sample Identity Verification and OFAC Screening Report

ISO, a company that provides information services to insurance companies, offers this product for screening new customers and verifying their identities.
It provides a “pass” or “fail” response to indicate whether information provided by the applicant matches information maintained by the company.

Sample Fraud Investigation Report

Below are selected excerpts from a sample report of ChoicePoint’s AutoTrack XP product, which helps users such as corporate fraud investigators and law enforcement agencies conduct investigations, locate individuals and assets, and verify physical addresses.

Appendix III: Comments from the Federal Trade Commission

Appendix IV: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contact named above, Jason Bromberg, Assistant Director; Katherine Bittinger; David Bobruff; Randy Fasnacht; Evan Gilman; Marc Molino; David Pittman; Linda Rego; and David Tarosky made key contributions to this report.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The growth of information resellers--companies that collect and resell publicly available and private information on individuals--has raised privacy and security concerns about this industry. These companies collectively maintain large amounts of detailed personal information on nearly all American consumers, and some have experienced security breaches in recent years. GAO was asked to examine (1) financial institutions' use of resellers; (2) federal privacy and security laws applicable to resellers; (3) federal regulators' oversight of resellers; and (4) regulators' oversight of financial institution compliance with privacy and data security laws. To address these objectives, GAO analyzed documents and interviewed representatives from 10 information resellers, 14 financial institutions, 11 regulators, industry and consumer groups, and others. Financial institutions such as banks, credit card companies, securities firms, and insurance companies use personal data obtained from information resellers to help make eligibility determinations, comply with legal requirements, prevent fraud, and market their products. For example, lenders rely on credit reports sold by the three nationwide credit bureaus to help decide whether to offer credit and on what terms. Some companies also use reseller products to comply with PATRIOT Act rules, to investigate fraud, and to identify customers with specific characteristics for marketing purposes. GAO found that the applicability of the primary federal privacy and data security laws--the Fair Credit Reporting Act (FCRA) and Gramm-Leach-Bliley Act (GLBA)--to information resellers is limited. FCRA applies to information collected or used to help determine eligibility for such things as credit or insurance, while GLBA only applies to information obtained by or from a GLBA-defined financial institution. 
Although these laws include data security provisions, consumers could benefit from the expansion of such requirements to all sensitive personal information held by resellers. The Federal Trade Commission (FTC) is the primary federal agency responsible for enforcing information resellers' compliance with FCRA's and GLBA's privacy and security provisions. Since 1972, the agency has initiated formal enforcement actions against more than 20 resellers, including the three nationwide credit bureaus, for violating FCRA. However, FTC does not have civil penalty authority under the privacy and safeguarding provisions of GLBA, which may reduce its ability to enforce that law most effectively against certain violations, such as breaches of mass consumer data. In overseeing compliance with privacy and data security laws, federal banking and securities regulators have issued guidance, conducted examinations, and taken formal and informal enforcement actions. A recent national survey sponsored by the National Association of Insurance Commissioners (NAIC) identified some noncompliance with GLBA by insurance companies, but state regulators have not laid out clear plans with NAIC for following up to ensure these issues are adequately addressed.
Background

Begun in 1965 as a part of the effort to fight poverty, Head Start is the centerpiece of federal early childhood programs. Head Start’s primary goal is to improve the social competence of children in low-income families, that is, their everyday effectiveness in dealing with both their present environment and later responsibilities in school and life. Social competence takes into account the interrelatedness of cognitive and intellectual development, physical and mental health, nutritional needs, and other factors. To support its social competence goal, Head Start has delivered a wide range of services to over 15 million children nationwide since its inception. These services consist of education and medical, dental, nutrition, mental health, and social services. Another essential part of every program is parental involvement in parent education, program planning, and operating activities. Head Start services are provided at the local level by public and private nonprofit agencies that receive their funding directly from HHS. These include public and private school systems, community action agencies, government agencies, and Indian tribes. In fiscal year 1996, grants were awarded to about 1,400 local agencies, called grantees. Head Start grantees are typically required to obtain additional funding from nonfederal sources to cover 20 percent of the cost of their programs. The Head Start program works with various community sources to provide services. For example, some programs coordinate with public health agencies to obtain health services, while other programs contract with local physicians. Although all programs operate under a single set of performance standards, local programs have a great deal of discretion in how they meet their goals, resulting in great variability among programs. Although the program is authorized to serve children at any age before the age of compulsory school attendance, most children enter the program at age 4.
The law requires Head Start to target children from poor families, and regulations require that 90 percent of the children enrolled in each program be low income. By law, certain amounts are set aside for specific subpopulations of children, including those with disabilities and Native American and migrant children. In addition to providing services to children and families, Head Start also sees one of its roles as a national laboratory for child development. Consequently, Head Start uses much of its discretionary research funding for demonstrations and studies of program innovations. Although overall funding has grown over the years, the amount of funds allocated to research, demonstration, and evaluation has represented about 2 percent or less of the Head Start budget. In fiscal year 1996, Head Start’s research, demonstration, and evaluation budget totaled $12 million (see app. II).

Head Start Has Changed Over the Years

Today’s Head Start is a much different program than it was 30 years ago. Although the program’s goals have changed little since its inception, Head Start changed considerably during its first decade. Begun as a summer program, Head Start became largely a full-year program by the early 1970s. In addition, in the early to mid-1970s, the program launched improvement initiatives, including promulgation of performance standards and teacher credentialing. Programs also had the option of providing home-based services. In the 1990s, the program continues to change. In 1990, the Congress passed the Head Start Expansion and Quality Improvement Act, which reauthorized Head Start and set aside funds for programs to use to enhance and strengthen the quality of services. In 1994, the Congress established a new program—called Early Head Start—to serve low-income families with infants and toddlers. The program provides continuous, intensive, and comprehensive child development and family support services to low-income families with children under age 3.
In addition to changes to Head Start over the years, other changes affecting the program relate to the children and families Head Start serves and the amount appropriated to support the program. Head Start’s service population has become increasingly multicultural and multilingual and is confronted with difficult social problems such as domestic violence and drug abuse. Moreover, the number of children served by the program has grown dramatically—from 349,000 children in 1976 to about 750,000 in 1995. The amount appropriated for the program, which totaled $3.5 billion in 1995, has paralleled the growth in the number served (see fig. 1).

Research on the Early Years of Head Start

In the decade after Head Start’s inception, many studies of the program’s impact were conducted. One of the first major studies was conducted for the Office of Economic Opportunity by the Westinghouse Corporation in 1969. This study found that summer Head Start programs produced no lasting gains in participants’ cognitive or affective development and that full-year programs produced only marginal gains by grades one, two, and three. Several researchers criticized this study because of its methodology. Subsequently, many other studies investigated Head Start’s impact. In 1981, HHS contracted with CSR, Inc., to synthesize the findings of Head Start impact studies. CSR concluded that Head Start participants showed significant immediate gains in cognitive test scores, socioemotional test scores, and health status. Cognitive and socioemotional test scores of former Head Start students, however, did not remain superior in the long run to those of disadvantaged children who did not attend Head Start, according to CSR. In addition, on the basis of a small subset of studies, CSR reported that Head Start participants were less likely to be retained in grade and less likely to be placed in special education.
Because these research studies were conducted during Head Start’s infancy, their findings provide little information on the effectiveness of the current program. For instance, most of the programs included in the Westinghouse study were summer programs. Almost all programs today are full-year programs. Similarly, the great majority of studies in CSR’s synthesis examined programs of the late 1960s and early 1970s and therefore would not have reflected many significant program changes that took place in the early to mid-1970s.

Interest in Impact Research Has Increased

Interest in Head Start’s impact has grown with increased congressional and public concern for substantiating federal program performance. Traditionally, federal agencies have used the amount of money directed toward their programs, the level of staff deployed, or even the number of tasks completed as some of the measures of program performance. At a time when the value of many federal programs is undergoing intense public scrutiny, however, an agency that reports only these measures has not answered the defining question of whether these programs have produced real results. Because today’s environment is results oriented, the Congress, executive branch, and the public are beginning to hold agencies accountable for outcomes, that is, program results as measured by the differences programs make. The Congress’ determination to hold agencies accountable for their performance lay at the heart of two landmark reforms of the 1990s: the Chief Financial Officers Act of 1990 and GPRA. With these two laws, the Congress imposed a new and more businesslike framework for management and accountability on federal agencies. In addition, GPRA created requirements for agencies to generate the information congressional and executive branch decisionmakers need in considering measures to improve government performance and reduce costs.
Body of Research on Current Head Start Program Insufficient to Draw Conclusions About Impact

The body of research on current Head Start is insufficient to draw conclusions about the impact of the national program. Drawing such conclusions from a body of research would require either (1) a sufficient number of reasonably well-designed individual studies whose findings could appropriately be combined to provide information about the impact of the national program or (2) at least one large-scale evaluation using a nationally representative sample. Findings from the individual studies we identified, however, could not be appropriately combined and generalized to estimate program impact at the national level. In addition, no single study used a nationally representative sample that would permit findings to be generalized to the national program.

Findings Could Not Be Combined to Produce National Estimates of Impact

The body of studies was inadequate to assess program impact by combining the findings of studies using similar outcome measures. The total number of studies found on Head Start impact was too small to permit generalizing findings to the national program. Most of these studies targeted cognitive outcomes, leaving other outcome areas, such as health and nutrition, scarcely examined. In addition, all the studies suffered to some extent from methodological problems that weakened our confidence in the findings of the individual studies.

Number of Studies Too Small

Although the body of literature on Head Start is extensive, the number of impact studies was insufficient to allow us to draw conclusions about the impact of the national Head Start program. Such an aggregation of findings should be based on a large number of studies. The larger the number of studies, the greater the chance that the variability in Head Start programs would be represented in the studies.
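The idea of combining individual study findings into one estimate can be made concrete with a standard inverse-variance (fixed-effect) pooling calculation. The sketch below uses entirely hypothetical effect sizes and variances, not data from the studies reviewed in this report; it simply illustrates that a pooled estimate inherits its precision from the number and size of the studies behind it, which is why a small body of studies cannot support a national estimate.

```python
import math

def pool_effects(effects, variances):
    """Combine per-study effect estimates with inverse-variance weights,
    the standard fixed-effect meta-analytic pooling. Returns the pooled
    effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical standardized mean differences from four illustrative studies.
effects = [0.30, 0.45, 0.10, 0.25]
variances = [0.02, 0.05, 0.03, 0.04]

estimate, se = pool_effects(effects, variances)
print(f"pooled effect = {estimate:.3f}, standard error = {se:.3f}")
```

With only a handful of studies, the pooled standard error stays wide, and nothing guarantees that the four hypothetical sites resemble the national mix of programs, which is the report's point about combining a small, unrepresentative body of research.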
Conversely, the smaller the number of studies, the greater the risk that the aggregate findings from these studies may not apply to Head Start in general. Most of the approximately 600 articles and manuscripts about Head Start that we identified could not be used to answer questions about impact for various reasons. Much of this literature consisted of program descriptions, anecdotal reports, and position papers. Of those articles that were research studies, some (for example, case studies) were not suitable for drawing general conclusions about impact. Some studies examined change in outcome measures before and after Head Start but did not control for other plausible explanations for the change, for example, maturation. Other studies using a comparison group to control for competing explanations of change did not provide statistical information about the confidence that the differences found were not chance occurrences. Only 22 of the more than 200 manuscripts we reviewed met our criteria for inclusion in our analysis. (See app. I for a detailed description of inclusion criteria.) Of these, 16 investigated impact by comparing Head Start participants with an unserved comparison group; three analyzed gains on normed tests. Only three studies included comparisons of Head Start with some other type of preschool or day care program. These studies represent work by a variety of researchers, including college students, college faculty, and contractors. Appendix III contains more detailed information on each study.

No Outcome Area or Population Adequately Researched

Although Head Start provides services in several outcome areas, such as health, nutrition, education, and the like, most of the studies we found focused on educational/cognitive outcomes, and few made distinctions on the basis of differing populations served by Head Start.
For example, most of the studies examined the impact of Head Start on grade retention and other indicators of academic achievement, such as standardized reading and math scores. Of the 22 studies included in our review, 16 included one or more outcomes in the cognitive area. Conversely, only five studies investigated health- or nutrition-related outcomes, and only five examined family impacts. Similarly, few studies analyzed impact by subpopulations. Because Head Start is a multicultural program, serving children and families of varying races, ethnic backgrounds, and socioeconomic levels, research that targets these subpopulations may uncover differential effects.

Studies Suffered From Methodological Weaknesses

All of the studies had some methodological problems. Although research in field settings can rarely conform to rigorous scientific procedures, in general, researchers place more confidence in findings of studies that control for competing explanations for their results and that use large samples. One of the more serious of the methodological problems was noncomparability of comparison groups. The most reliable way to determine program impact is to compare a group of Head Start participants with an equivalent group of nonparticipants. The preferred method for establishing that the groups are equivalent at the outset is to randomly assign participants to either the Head Start group or the comparison group. Only one of the studies we reviewed used random assignment to form the Head Start and non-Head Start comparison groups. Most of these studies formed a comparison group by selecting children who were similar to the Head Start participants on some characteristic thought to be important to the outcome under study. In most cases, researchers matched participants on one or more demographic variables, usually including some variable related to socioeconomic level.
In other cases, researchers did not match treatment and comparison groups but tried to compensate statistically for any inequality between the groups. Neither of these methods compensates completely for lack of random assignment to group. Some of the studies used no comparison group; instead, they compared performance of Head Start participants with test norms. This approach to evaluating program performance indicates the performance of Head Start participants relative to the norming group. Because the norming group may be unlike the Head Start group, however, conclusions about program impact are unclear. Finally, many of the studies also suffered from small samples, especially those investigating intermediate and long-term effects. Some studies began with relatively small samples; others, which began with larger samples, ended up with smaller samples as the study progressed because of missing data and attrition. Small samples present problems in research because they adversely affect statistical procedures used in analyses. Some procedures cannot appropriately be used with small samples; others are rendered less able to detect differences, resulting in an underestimation of program effects.

No National Program Evaluation Found

No completed, large-scale evaluation of any outcome of Head Start that used a nationally representative sample was found in our review. One characteristic of Head Start is program variability, not only in the kind of services delivered, but also in the quality of services. Making summary statements about program impact requires that the sample of programs studied represent all programs nationwide. Although one evaluation had a study design that would have allowed findings to be generalized to the national program, this study was never completed. In the late 1970s, HHS contracted for a national evaluation of the educational services component of basic Head Start.
The design called for a longitudinal study that would follow children and their parents from preschool through the fourth grade. The evaluation was to compare the Basic Educational Skills Program, regular Head Start, and a non-Head Start control group. Thirty Head Start programs were to be randomly selected, and Head Start-eligible children from these communities were to be randomly assigned to Head Start or the control group. Many methodological problems as well as funding problems occurred, however, during the implementation of this study, and it was abandoned. The 1990 act that reauthorized funding for Head Start directed the Secretary of HHS to conduct “. . . a longitudinal study of the effects that the participation in Head Start programs has on the development of participants and their families and the manner in which such effects are achieved.” The study, as described in the act, was to examine a wide range of Head Start outcomes, including social, physical, and academic development, and follow participants at least through high school. The description also stipulated that, “To the maximum extent feasible, the study . . . shall provide for comparisons with appropriate groups composed of individuals who do not participate in Head Start programs.” The act authorized the appropriation of funds to carry out this study for fiscal years 1991 through 1996. According to HHS, however, funds were never appropriated for the study, and it was not conducted.

Research Planned by HHS Focuses on Program Improvement, Not Impact

Head Start’s planned research will provide little information about the impact of regular Head Start programs because it focuses on descriptive studies; studies of program variations, involving new and innovative service delivery strategies and demonstration projects; and studies of program quality.
Although these types of studies are useful in evaluating programs, they do not provide the impact information needed in today’s results-oriented environment and encouraged by GPRA.

HHS Focuses Research on Program Improvement

The primary focus of research, according to Head Start Bureau officials, is to improve the program by exploring ways to maximize and sustain Head Start benefits. Thus, HHS studies evaluate which practices seem to work best for the varying populations Head Start serves and ways to sustain program benefits. Some of these studies are descriptive, providing information on service delivery and the characteristics of populations receiving services. For example, HHS is currently conducting a descriptive study of the characteristics of families served by the Head Start Migrant Program. Other descriptive studies have been conducted on health services and bilingual/multicultural programs. HHS also funds studies designed to answer questions about the effectiveness of new or innovative service delivery strategies and demonstrations and how effectiveness may relate to characteristics of the population served. Such studies typically involve special program efforts and demonstration projects conducted on a trial basis at a few Head Start sites that focus on practices or services not typically found in regular Head Start programs. For example, both Early Head Start and the Comprehensive Child Development Program target infants and children younger than those normally served by Head Start. Similarly, the Family Service Center demonstrations place more emphasis on family services and provide assistance in a variety of areas such as illiteracy, substance abuse, and unemployment. In addition, HHS funds research to explore program quality and to develop instruments to assess program performance.
In 1995-96, HHS funded several Quality Research Centers and a Performance Measure Center to develop and identify instruments for measuring the quality of Head Start programs and to collect performance measure data on a nationally representative sample of Head Start programs. The major purpose of this effort, according to HHS officials, is to determine which program characteristics relate to meeting program goals. Some of the performance measure assessments use instruments for which national norms are available, however, and HHS will be able to compare participant performance to national norms for these measures. Identifying performance measures is an important step in building a research and evaluation base for Head Start. Because the program’s goals are so broad and difficult to assess, precisely defining expected outcomes and identifying appropriate instruments should produce a more valid, useful body of research. But identifying standard performance measures is also valuable because it provides a set of common measures upon which a body of research could be built, including impact research. Although descriptive studies, studies of new or innovative programs and demonstrations, and studies of program quality provide information useful both to HHS and the Congress, they do not provide full information on the impact of regular Head Start. Even the performance measures study already discussed will not provide clear-cut impact information because no comparison group is being used. Over time, this type of study will provide some useful information about program outcomes; however, such a study can neither attribute effect nor estimate the precise effect size with the level of confidence found in comparison group studies.

Research Planned by HHS Will Provide Little Information on Program Impact

Research planned by HHS will provide little program impact information on regular Head Start programs.
HHS officials expressed concerns about using their research dollars for impact research rather than program improvement. The effectiveness of Head Start has been proven by early research, according to these officials, who also pointed to difficulties in conducting impact studies. In addition, because Head Start is such a varied program, averaging across local programs to produce national estimates of effect is not appropriate, they said. Finally, HHS maintains that Head Start is unique because of the comprehensiveness of services it offers and the population it serves; therefore, comparing Head Start with other service programs would be inappropriate, HHS officials believe. Most of the research that HHS cited as evidence of Head Start’s impact is outdated, however, and, as previously mentioned, insufficient research has been done in the past 20 years to support drawing conclusions about the current program. Furthermore, it appears that impact studies on Head Start could be done and would provide valuable results-oriented information. In addition, although the great variation among programs could make producing national estimates of impact methodologically more challenging, variation alone should not prevent developing such estimates. Moreover, comparisons with other service programs, if designed to answer questions about specific program outcomes, would provide useful information for assessing program impacts.

HHS Believes Effectiveness of Head Start Is Already Proven, So Further Impact Research Is Not Warranted

HHS maintains that early research has proven the effectiveness of early childhood education, including Head Start, so impact research is not the most effective use of limited research funds. Findings from early studies, however, do not conclusively establish the impact of the current Head Start program because today’s program differs from that of the late 1960s and early 1970s.
Although program changes might be assumed to increase positive impact, this assumption is largely unsubstantiated. In addition, program impact may be affected by changes in the population served; Head Start families today face different problems than those in the past because of an increase in substance abuse, violence, and homelessness. Furthermore, an increased availability of social services may have lessened the impact of Head Start because families may get services from other sources if not from Head Start. The net effect of these changes on program impact is unknown. Later studies offered to support Head Start’s impact do not provide enough evidence to conclude that current Head Start is effective. Findings in literature reviews cited by Head Start proponents to support its effectiveness often involve only a few Head Start programs. For example, HHS cited a review in a recent Packard Foundation report that reported positive cognitive results of early childhood programs. This review, however, had only five studies involving Head Start participation in 1976 or later, and two of the five studies combined Head Start and other public preschools in the analyses. Authors of other studies of high-quality preschool programs have sometimes warned against applying their findings to Head Start. For instance, researchers in the Consortium for Longitudinal Studies, which produced a major study reporting positive long-term effects of preschool, explicitly stated that caution should be used in generalizing their findings to Head Start and that the programs were “. . . examples of what Head Start could be rather than what it has been.”

HHS Believes Conducting Impact Studies Would Be Difficult

HHS believes conducting impact research would present methodological difficulties. Two types of research designs are commonly used in conducting impact studies: experimental and quasi-experimental.
HHS officials mentioned difficulties with both types of designs in studying Head Start’s impact. In addition, finding enough unserved children to form comparison groups would be a problem with either kind of research design, they said. True experimental designs, also called randomized trials, are comparison group studies that randomly assign study participants to either a treatment or control group. In the case of Head Start, these studies would require recruiting more eligible children than the program can serve. From these recruits, some children would be randomly assigned to Head Start; the rest, the unserved children, would constitute the control group. HHS officials cited ethical considerations of assigning children to an unserved control group as one of the difficulties in conducting randomized trials. Randomized trials, however, could be appropriately applied to Head Start research. In fact, the evaluation of the Early Head Start project, now under way, has randomly assigned potential participants to Early Head Start or a control group that has not received Early Head Start services. Alternatively, a research design that delays, rather than withholds, services could be used. This would involve selecting a study group and randomly assigning some children to Head Start the first year, while the remainder would serve as a control group. The control group would receive services the following year. Another strategy that could be used to study specific parts of the program would be to use an alternative treatment design. In this case, some randomly assigned participants would receive the full Head Start program, while others would receive partial services. For example, if the study interest is in school readiness and cognitive issues, the control group might receive only nutritional and health services. Most researchers believe that randomized trials yield the most certain information about program impact. 
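The core of the randomized designs described above is the random split of eligible applicants into served and unserved (or later-served) groups. The sketch below is a generic illustration with hypothetical children and slot counts, not a description of any actual Head Start evaluation procedure.

```python
import random

def assign_groups(applicants, n_served, seed=1):
    """Randomly split eligible applicants into a treatment group (enrolled
    in the program) and a control group (unserved). In a delayed-treatment
    variant, the control group would simply enroll the following year
    rather than go unserved."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    pool = list(applicants)
    rng.shuffle(pool)
    return pool[:n_served], pool[n_served:]

# Hypothetical recruitment: 100 eligible children for 60 program slots.
applicants = [f"child_{i:03d}" for i in range(100)]
treatment, control = assign_groups(applicants, n_served=60)
print(len(treatment), len(control))  # prints: 60 40
```

Because chance alone decides group membership, the two groups are equivalent in expectation on both measured and unmeasured characteristics, which is why randomized trials support the strongest impact conclusions.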
Random assignment is an accepted practice in virtually every area of research, including medicine, economics, and social sciences. In some cases, the treatment of study interest is simply withheld from the control group. In other cases, for example, when researchers suspect that withholding treatment would have a profoundly negative impact, treatment may be delayed for a while or some lesser, alternative treatment offered. While acknowledging the difficulties of random assignment, some early childhood researchers we spoke with suggested that Head Start conduct randomized trials to study regular Head Start programs because this type of study provides the most conclusive information on program impact. A common alternative to randomized trials, quasi-experimental designs, uses a naturally occurring, unserved comparison group. In the case of Head Start, some researchers have tried to identify other children in the community who are like Head Start participants in ways thought to be important (usually socioeconomic level) but who are not enrolled in Head Start. This group becomes the comparison (control) group. Quasi-experimental research is less rigorous than research that uses random assignment, and less confidence can be placed in its conclusions. Rarely are pre-existing groups equivalent. Even when statistical adjustments are made to compensate for known nonequivalencies, some questions always remain about the degree to which pre-existing differences in the groups may have contributed to study results. When well planned and well executed, however, such designs can provide some indication of program impact. Because Head Start strives to serve the neediest children, those in quasi-experimental comparison groups would be less likely to be disadvantaged than children in the Head Start group, according to HHS officials. If true, this nonequivalency in groups would bias the outcome in favor of the comparison group, resulting in underestimation of program effects.
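The matching step in a quasi-experimental design can be sketched as greedy one-to-one nearest-neighbor matching on a single variable. Everything here is hypothetical (the children, the `ses` income-to-poverty index, the greedy strategy); the sketch shows both how comparison groups are assembled and why they only balance the variables actually matched on, leaving unmeasured differences uncontrolled.

```python
def match_comparison_group(participants, candidates, var="ses"):
    """Greedy one-to-one matching: for each program participant, pick the
    not-yet-used community child closest on the matching variable. Balances
    only the matched variable, unlike random assignment, which balances
    unmeasured characteristics as well (in expectation)."""
    available = list(candidates)
    matches = {}
    for p in participants:
        best = min(available, key=lambda c: abs(c[var] - p[var]))
        available.remove(best)  # each candidate may be used only once
        matches[p["id"]] = best["id"]
    return matches

# Hypothetical children, described only by an income-to-poverty ratio.
served = [{"id": "hs1", "ses": 0.60}, {"id": "hs2", "ses": 0.85}]
unserved = [{"id": "c1", "ses": 0.90}, {"id": "c2", "ses": 0.55},
            {"id": "c3", "ses": 1.40}]

print(match_comparison_group(served, unserved))  # {'hs1': 'c2', 'hs2': 'c1'}
```

Even a perfect match on `ses` says nothing about, for instance, family motivation to apply, which is exactly the kind of residual nonequivalency the report describes.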
Because investigating the characteristics of Head Start participants was beyond the scope of this study, we do not know to what extent, if any, Head Start children may be more disadvantaged than similar children not attending Head Start. Even assuming that Head Start has identified and is serving the neediest applicants, however, it seems possible that a comparably disadvantaged, unserved group could be identified from the applicants whom the program cannot serve and nonapplicants in a community. Regardless of which design is used, experimental or quasi-experimental, finding enough truly unserved children for a comparison group would be extremely difficult because of the growing number of public preschool programs and the increased availability of child care, according to HHS officials. Statistics on the percentage of children being served by preschools suggest, however, that finding disadvantaged children unserved by preschools is possible. In our report, Early Childhood Programs: Many Poor Children and Strained Resources Challenge Head Start (GAO/HEHS-94-169BR), we found that only 35 percent of poor 3- and 4-year-olds attended preschool in 1990. The Congressional Research Service estimated that in fiscal year 1994, about 30 percent of eligible 3- to 4-year-olds were being served by Head Start. On the basis of these estimates, it appears that some locations do exist where a control group of children not attending preschool could be formed.

HHS Believes National Estimates of Program Impact Are Not Appropriate

Estimating program impact at the national level is not appropriate because of the extreme variability of local programs, HHS officials said. Local Head Start sites have great flexibility, and, even though all programs share common goals, they may operate very differently. Therefore, on the advice of HHS’ research advisory panel, HHS considers a single, large-scale, national study of impact to be methodologically inappropriate.
For this same reason, HHS believes that summing across sites for an aggregate estimate of effect is not justified in cases where sites are not basically operating the same way. Evaluating outcomes at the national program level is an accepted program evaluation procedure, however, even for programs with a great deal of variability. It is the only way to determine with certainty whether the program is making an overall difference in any particular outcome area. Aggregate analysis does not, however, replace the need for lower level analyses, which provide insight into the summary finding. In cases where effects are not uniform across sites, this lower level analysis provides more understanding of which service areas and delivery approaches are working for which subpopulations. Evaluations can be planned to answer both the aggregate and disaggregate questions in a single study.

HHS Believes Comparisons With Other Service Providers Are Not Appropriate

Another way to evaluate Head Start’s impact is to compare its effects with those of some other type of preschool, for instance, state or local preschools. When several programs exist that deliver similar services, studies comparing programs in areas that have common goals can provide useful information. For instance, Head Start and public preschools share the goal of school readiness. A study might be conducted to compare Head Start and public preschool students on the basis of a measure of school readiness. Such a study might compare the performance of program participants, while describing relevant program differences that might affect results, such as level of service in the area studied and program costs. Regarding a comparative study, HHS has maintained that Head Start is unique in the comprehensiveness of the services it offers. Therefore, according to the agency, any comparison of programs would be misleading.
In addition, HHS claims that children served by Head Start are more disadvantaged than children in other types of preschools. The agency also points out that in some places, other public preschools have adopted the Head Start model, making such comparisons essentially Head Start with Head Start. Concerns about differences in populations served by the programs would relate to the rigor of the study design, that is, whether it is experimental or quasi-experimental. When quasi-experimental designs are used, researchers frequently use statistical techniques to adjust for pre-existing differences; but these designs always suffer to some degree from the limitations referred to earlier in our discussion of quasi-experimental designs. Therefore, confidence in the study’s results would vary depending on the study design used. In the case of Head Start-like programs, one might reasonably expect a difference in outcome on the basis of such factors as program administration and context. For example, a preschool program operated by a local school system might have different outcomes in school readiness because of the possible advantage of transitioning its students into kindergarten. Research that compares Head Start with alternative ways of accomplishing a particular goal might provide insight into the most effective and efficient way to provide services to needy children and families.

Conclusions and Recommendations

Increasing demand for shrinking federal resources has raised the concerns of the Congress, the executive branch, and taxpayers about the impact of multibillion dollar federal investments in federal programs such as Head Start. In addition, GPRA requires agencies to be more accountable for substantiating program results.
Although research has been conducted, it does not provide information on whether today's Head Start is making a positive difference in the lives of participants, who live in a society that differs vastly from that of the sixties and early seventies. While we acknowledge the difficulties of conducting impact studies of programs such as Head Start, research could be done that would allow the Congress and HHS officials to know with more certainty whether the $4 billion federal investment in Head Start is making a difference. For this reason, we recommend that the Secretary of HHS include in HHS' research plan an assessment of the impact of regular Head Start programs.

Agency Comments

In commenting on a draft of our report, HHS expressed the belief that the research base on the efficacy of Head Start is more substantial than depicted in our report and that the Department's strategy to extend this base is appropriate to produce findings about both impact and program quality. HHS also indicated plans to evaluate the feasibility of conducting impact studies such as we recommended. The Quality Research Centers are evaluating the feasibility of conducting randomized trials in small-scale evaluations and, on the basis of these experiences, may consider implementing larger scale studies. The full text of HHS' comments appears in appendix IV.

HHS supported the claim that the research base is more substantial than we depict by pointing to the findings from the 1985 synthesis conducted by CSR (cited as "McKey et al., 1985" in HHS' comments) and two more recent studies (the Currie and Thomas study and the Fosburg study). For reasons discussed in this report, we do not agree that findings drawn from studies more than 20 years old adequately support claims about the impact of the current Head Start program. Similarly, the findings from the two more recent studies fail to support conclusions about impact that can be generalized to the national program.
Even though these studies were larger than others we found, both had significant methodological limitations. The Currie and Thomas study analyzed an existing database to reach conclusions about Head Start. This study used an after-the-fact, post-test-only design. Although this design is frequently used when researchers must rely on existing data as their only source, it is vulnerable to serious threats to validity, as discussed earlier in this report. Because of these design limitations, neither positive conclusions about Head Start (that is, that children's test scores show immediate positive effects) nor negative conclusions (that is, that these effects quickly disappear for African American children) can be firmly drawn from the findings of this study.

The second study, the Fosburg study, as HHS pointed out, used a much stronger research design, which randomly assigned children in four Head Start programs to either a Head Start or a non-Head Start control group. The site selection methodology, however, precluded generalizing these findings to all Head Start programs. The four programs were chosen from areas identified as underserved in medical and dental services, and Head Start sites that were not in compliance with Head Start performance standards were excluded from selection. In addition, attrition was a significant problem in this study.

HHS also mentioned that, on the basis of recommendations of leading researchers, the Department is conducting a well-balanced, innovative set of new studies of Head Start. It contends that our report does not acknowledge the major longitudinal studies that HHS has planned or that are being conducted by other agencies. Our report states that HHS' planned research focuses on program improvement, and we agree that such studies are needed. We also support the studies of program impact that HHS has under way in special program areas such as Early Head Start.
Our work, however, focused specifically on HHS' research plans that address the question of the impact of the regular Head Start program. HHS' current research plans do not include such research. Finally, HHS maintained that it is building a substantial system of innovative research, development, and management tools in response to GPRA. The Department emphasized the role the Quality Centers play in these efforts and said that these centers are currently evaluating possible strategies for performing comparison group studies that use a random assignment research design. HHS maintained that we overlooked the importance of studying the quality of Head Start programs in assessing impacts. We fully support HHS' plans to investigate the feasibility of conducting randomized trials because these studies provide the clearest indication of program impact. We also agree that the issue of quality is important in assessing program impact and that findings from studies need to include information on program quality. The ultimate measure of program quality, however, is impact. Until sound impact studies are conducted on the current Head Start program, fundamental questions about program quality will remain.

We are sending copies of this report to the Secretary of Health and Human Services, the Head Start Bureau, appropriate congressional committees, the Executive Director of the National Head Start Association, and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix VI.

Objectives, Scope, and Methodology

Objectives

The Chairman of the Committee on the Budget, House of Representatives, asked us to examine existing research on Head Start programs and to determine what it suggests about the impact of the current Head Start program. Another objective was to determine what types of Head Start research HHS has planned.
Scope

Although the bulk of research on Head Start was conducted in the early years of the program, we focused on studies of Head Start participation in 1976 or later for several reasons. First, HHS instituted quality initiatives and other important program changes in the early to mid-1970s that shaped the current Head Start program, including phasing out summer programs, implementing performance standards, and establishing teacher credentialing procedures. Second, findings from studies of early programs have limited generalizability to more stable programs. The early years of any program are not likely to represent a program in its maturity. This is especially true for Head Start, which was implemented quickly and on a large scale. Finally, earlier studies were thoroughly reviewed by the Head Start synthesis project and were reported in The Impact of Head Start on Children, Families, and Communities in 1985. Studies from the years before 1976 constituted the bulk of the studies included in this synthesis. A short summary of these findings appears in the "Background" section of this report.

After speaking with HHS research personnel, we anticipated that the body of studies usable for a research synthesis might be small. Therefore, in addition to comparison group studies, we included pretest/post-test-only designs in cases in which outcomes were discussed in relation to test norms. Although much less useful than comparison group designs in providing information about program impact, these studies provide a certain degree of valuable information.

Methodology

To report on what existing research says about Head Start's impact, we identified studies meeting our basic selection criteria as outlined in the "Literature Review" section of this appendix. Because the number of studies found in the first phase was so small, we did not screen further for adequacy of information reported.
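The limits of designs without equivalent comparison groups, noted above, can be illustrated with a small simulation. The score distributions, the pre-existing gap, and the true program effect of zero are all hypothetical, invented purely for illustration:

```python
import random

# Hypothetical simulation: a post-test-only comparison with no equivalent
# comparison group can show a "program effect" even when the true effect
# is zero. All numbers here are invented.
random.seed(42)

TRUE_EFFECT = 0.0       # in this simulation, the program changes nothing
SELECTION_GAP = -5.0    # enrolled children start 5 points behind (assumed)

# Post-test scores: a base score, plus the pre-existing gap for the
# enrolled group, plus the (zero) program effect, plus random noise.
enrolled = [60 + SELECTION_GAP + TRUE_EFFECT + random.gauss(0, 3)
            for _ in range(500)]
not_enrolled = [60 + random.gauss(0, 3) for _ in range(500)]

observed_diff = (sum(enrolled) / len(enrolled)
                 - sum(not_enrolled) / len(not_enrolled))
# observed_diff lands near -5: the design attributes the pre-existing
# gap to the program, even though TRUE_EFFECT is 0.
```

With random assignment, the two groups would start out equivalent in expectation, so the observed difference would estimate the true effect rather than the selection gap.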
To determine how HHS uses research, we reviewed HHS' research plans and publications by its research advisory panel. We also spoke with HHS officials who direct Head Start research and with the director of research at the National Head Start Association.

Literature Search

We began our search for studies with two bibliographies contracted for by HHS. The first, An Annotated Bibliography of the Head Start Research Since 1965, was a product of the 1985 Head Start Evaluation, Synthesis and Utilization Project. We also reviewed An Annotated Bibliography of Head Start Research: 1985-1995. This bibliography was produced by Ellsworth Associates, Inc., for the Head Start Bureau as a part of its contract to maintain a library of Head Start and related research. Search strategies used to compile these bibliographies are described in the introductions to the documents.

In addition, we conducted our own search for studies. Our primary source was the database maintained by the Education Resources Information Center. However, we also searched a number of other databases, including MEDLINE, AGRICOLA, Dissertation Abstracts, Government Printing Office, Mental Health Abstracts, PsycINFO, Federal Research in Progress, Social SciSearch, Sociological Abstracts, IAC Business A.R.T.S., British Education Index, Public Affairs Information Service International, and National Technical Information Service.

We also interviewed people knowledgeable about early childhood research. We attended the Head Start Third National Research Conference and spoke with conference participants. We also mailed letters to every conference participant asking for their assistance in locating relevant research. We interviewed personnel in charge of research for the Administration for Children, Youth and Families and the Head Start Bureau and spoke with other researchers whom they recommended.
We also talked with the executive director and the director of research and evaluation of the National Head Start Association and addressed the state and regional presidents of this organization at their annual meeting. In addition, we announced our effort to locate research dealing with Head Start effectiveness on several of the Internet forums sponsored by the American Educational Research Association.

Literature Review

From these sources, we identified over 600 manuscripts that were screened for relevance to our study. We acquired about 200 of these and reviewed them carefully against the following selection criteria: Head Start participation had occurred in 1976 or later; studies had compared outcomes of Head Start participants with children not attending any preschool, or those attending some other type of preschool, or studies had compared Head Start outcomes with test norms; and tests of statistical significance were reported to have been performed on the differences, except in cases in which outcomes were measured using normed instruments. We excluded studies of transition or follow-through programs that provided services beyond the Head Start years and studies that pooled Head Start and other kinds of preschool participants. We considered multiple articles or later follow-ups on the same study to be one study. This final screening yielded 22 impact studies that were evaluated in our review.

We performed our work between April and December 1996 in accordance with generally accepted government auditing standards.

Research, Demonstration, and Evaluation Budgets for the Head Start Program

Summaries of Studies Included in the Review

Evaluation of the Process of Mainstreaming Handicapped Children Into Project Head Start, Phase II, Executive Summary, and Follow-Up Evaluation of the Effects of Mainstreaming Handicapped Children in Head Start
Authors: Applied Management Sciences, Inc. (first study) and Roy Littlejohn Associates, Inc. (second study)
Outcome area studied: Cognitive and health
Overview of study: Children receiving Head Start program services compared with children receiving services from other types of programs and with children receiving no special services
Design: Pretest with post-test 6 months later
Population: 55 randomly selected Head Start centers and 49 non-Head Start programs
Sample: 391 Head Start children, 321 non-Head Start children, and 121 unserved children
Head Start program year(s): 1977-78
Measures/instrumentation: Various development indicators, including physical development, self-help skills, cognitive development, social development, communication skills, classroom social skills, and classroom behavior and social integration
Findings: Developmental gains for Head Start and non-Head Start children identified as physically handicapped, mentally retarded, and health or developmentally impaired were generally not significantly greater than those of unserved children. Developmental gains were significant in physical, self-help, academic, and communications skills for children identified as speech impaired in Head Start and non-Head Start programs relative to unserved children.

A Longitudinal Study to Determine If Head Start Has Lasting Effects on School Achievement
Author: Colleen K. Bee
Outcome area studied: Cognitive
Overview of study: Followed up Head Start participants in kindergarten, first grade, and second grade
Design: Post-test only, comparison group selected from waiting list for each respective year
Population: Head Start participants in Sioux Falls, South Dakota
Sample: 10 girls and 10 boys were selected for each Head Start year, and 10 girls and 10 boys were selected each year for the comparison groups
Head Start program year(s): 1977-78, 1978-79, 1979-80
Measures/instrumentation: Metropolitan Reading Readiness Test, special education placements, and grade retention
Findings: No significant differences were found at the .01 level of confidence on reading readiness scores for any of the years studied. The non-Head Start group was retained in grade less often than the Head Start group in 1977-78 (difference significant at the .01 level). No significant difference was found in special education placements for any of the years studied.

Evaluation of Public Preschool Programs in North Carolina
Authors: Donna M. Bryant, Ellen S. Peisner-Feinberg, and Richard M. Clifford
Outcome area studied: Cognitive and socioemotional
Overview of study: Followed up public preschool graduates in kindergarten
Design: Post-test only, comparison group comprised children from the same kindergarten classes
Population: Public preschool programs in North Carolina
Sample: 97 children participated in Head Start, 99 in community day care, and 120 in no group care
Head Start program year(s): 1992-93
Measures/instrumentation: Reading and math subscales of the Woodcock-Johnson Tests of Achievement (WJ-R), Peabody Picture Vocabulary Test-Revised (PPVT-R), developmental assessment on communication development, and an assessment on social behavior completed by a kindergarten teacher; adapted questionnaire form of the Communication Domain of the Vineland Adaptive Behavior Scale used to provide a measure of children's cognitive development; and Social Skills Questionnaire used to measure teachers' ratings of children's classroom behaviors
Findings: Significant group effects were found for the PPVT-R, with all the groups performing better than children in the non-day care group. For the WJ-R reading scale, the preschool groups showed no effects. Significant main effects were found for preschool group on the WJ-R math scale, with the community day care sample scoring higher than the four other groups. Significant preschool group differences were found on the Vineland Communication Domain, with community day care children rated higher than the other four groups. Social skills of community child care children were rated significantly higher by their kindergarten teachers than those of children who attended the standard or the family-focused classes or children who did not attend group day care, and marginally higher than those of the Head Start children. On the Academic Competence scale of the Social Skills Questionnaire, children who previously attended community child care scored significantly higher than those in the other four groups.
The Impact of Escalating Family Stress on the Effectiveness of Head Start Intervention
Authors: Mary Anne Chalkley and Robert K. Leik
Outcome area studied: Family
Overview of study: Explored the effects that declining conditions among the U.S. poor may have on the potential for intervention programs to make a difference in the lives of those receiving services
Design: Pretest/post-test, followed up Head Start Family Impact Project participants in 1993; comparison group recruited from Head Start-eligible families
Population: Head Start families in Minneapolis, Minnesota
Sample: 130 of the 190 families in the original study
Head Start program year(s): 1986-87, 1989-90
Measures/instrumentation: Mothers reported various measures on their families, themselves, and their children. Children completed the pictorial form of Perceived Competence and Acceptance.
Findings: An examination of the absolute amount of change in the mother's perception of the child was inconclusive on the impact of Head Start.

Developmental Progress of Children Enrolled in Oklahoma Head Start Programs in 1987-1988
Outcome area studied: Cognitive and social
Overview of study: Head Start students were tested in the fall and again in the spring in multiple developmental areas.
Population: Children in 15 Head Start programs in Oklahoma
Head Start program year(s): 1987-88
Measures/instrumentation: Brigance Diagnostic Inventory of Early Development and Head Start Measures Battery
Findings: Gains on the Brigance ranged from 9 to 16 months. Similar claims were made for results on the Head Start Measures Battery, but findings were reported in raw scores with no intrinsic meaning.

Does Head Start Make a Difference?
Authors: Janet Currie and Duncan Thomas
Outcome area studied: Cognitive and health
Overview of study: Examined the impact of Head Start on school performance, cognitive attainment, and various health and nutritional measures
Design: Post-test only, comparison groups comprised participants in other preschool or no preschool
Population: U.S. Head Start participants
Sample: National sample of data for nearly 5,000 children from the National Longitudinal Survey of Youth and the National Longitudinal Survey's Child-Mother file
Head Start program year(s): 1986-90
Measures/instrumentation: Peabody Picture Vocabulary Test and grade retention
Findings: Head Start had positive and persistent effects on test scores and school attainment of white children relative to participation in either other preschool or no preschool after controlling for family and background effects. An increase in test scores was noted for African American children, but these gains were quickly lost, and there appeared to be no positive effects in school attainment. Greater access to preventive health care was reported for white and African American children who attended Head Start or other preschools.

A Comparison of Head Start and Non-Head Start Reading Readiness Scores of Low-Income Kindergarten Children of Guam
Author: Maria D. Esteban
Outcome area studied: Cognitive
Overview of study: Followed up Head Start participants in kindergarten
Design: Post-test only, comparison group comprised low-income kindergarten students who did not attend Head Start
Population: Head Start participants from six public schools on Guam
Sample: 35 male and 35 female Head Start children and 35 male and 35 female non-Head Start children
Head Start program year(s): 1985-86
Measures/instrumentation: Brigance K&I Screen for Kindergarten
Findings: Differences among the four groups were not significant at the p = .05 level. The Head Start to non-Head Start comparison was not significant.
The Effectiveness of Family Health Care in Head Start: The Role of Parental Involvement
Author: Barbara A. Facchini
Outcome area studied: Health
Overview of study: Relationship of the amount of parental involvement in the Head Start program to the amount of health care received by both Head Start-age children and their siblings
Design: Post-test only, comparison group selected from waiting list
Population: West Haven, Connecticut, Head Start program
Sample: 40 Head Start children and 20 waiting-list children for comparison group
Head Start program year(s): 1980-81
Measures/instrumentation: Immunizations, physical examinations, health screenings, and dental examinations
Findings: Immunizations were up to date for about one-half of both the Head Start children and the waiting-list children before the beginning of the Head Start programs. All of the Head Start children were up to date during the Head Start year, but only a few additional waiting-list children were up to date. Head Start children were more likely to receive health screenings and dental examinations (p < .001). Head Start children were more likely to receive physical examinations, but only the difference for children of highly involved parents was significantly different from the waiting-list children (p < .05). No significant difference for immunizations was found between the siblings of Head Start children and siblings of waiting-list children. Head Start siblings were more likely to have received health and dental screenings (p < .05).

The Effects of Head Start Health Services: Executive Summary of the Head Start Health Evaluation
Authors: Linda B. Fosburg and Bernard Brown
Outcome area studied: Health
Overview of study: Longitudinal study of the Head Start health services
Design: Pretest/post-test, longitudinal experimental design, involving random assignment of children to a Head Start and a non-Head Start group
Population: Four large Head Start programs
Sample: 208 children completed both pre- and post-tests, 609 received post-tests only
Head Start program year(s): 1980-81
Measures/instrumentation: Pediatric, dental, anthropometric, hematology, developmental, speech and language, vision, and hearing evaluations and nutritional observation; parent interview addressed the health history of the child, nutritional evaluation of the child, and family background
Findings: Head Start children were more likely to receive preventive and remedial health services than other low-income children in their community. Head Start children were more likely to receive medical and dental examinations, speech evaluation and therapy services, and vision screens or examinations. Head Start children tested at both pretest and post-test were less likely to have speech and language deficiencies at post-test. Nutritional intake evaluation showed exceptionally positive impacts of Head Start's nutrition services on children and their families.
Children Are a Wonderful Investment: A Study in Preschool Education
Authors: Mary Fulbright and others
Outcome area studied: Cognitive and family
Overview of study: To examine the effects of preschool education on children of low-income families in Dallas, Texas
Design: Post-test only, followed up students who had attended Sunnyview Head Start Center during the previous 5 years; comparison group selected from children in the district who did not attend Sunnyview Head Start Center but were matched on demographic characteristics
Population: Dallas Independent School District students who had attended Sunnyview Head Start Center during the previous 5 years
Sample: 83 former Sunnyview parents and 76 comparison group parents; for the grade retention analysis, 43 Sunnyview and 41 comparison students
Head Start program year(s): 1984-89 (estimated)
Measures/instrumentation: Demographic, economic, home environment, and educational experience/expectation information was collected from parents. Grade retention information was gathered from school files.
Findings: Significantly fewer former Sunnyview students had repeated a grade than comparison group students. Sunnyview parents reported significantly more educational items in the home than comparison group parents.

Health Services and Head Start: A Forgotten Formula
Authors: Barbara A. Hale, Victoria Seitz, and Edward Zigler
Outcome area studied: Health
Overview of study: Studied the impact of Head Start's health services on children and their siblings
Design: Post-test only, comparison groups from the waiting list and from a nursery school serving middle-class families
Population: Head Start participants in two adjacent small cities in Connecticut
Sample: 40 Head Start children, 18 children on the Head Start waiting list, 20 children enrolled in a nursery school, and 103 siblings of the nursery school children
Head Start program year(s): 1984-85
Measures/instrumentation: Immunizations, physical examinations, health screenings, and dental examinations
Findings: Head Start children received more age-appropriate health screenings than middle-class children and waiting-list children. Head Start children were more likely to receive dental examinations than middle-class children and waiting-list children. Head Start siblings were less likely than middle-class siblings to receive age-appropriate immunizations and health screenings.

An Analysis of the Effectiveness of Head Start and of the Performance of a Low-Income Population in MCPS
Outcome area studied: Cognitive
Overview of study: Examined the long-term effectiveness of Head Start by comparing the performance of Head Start graduates in elementary and secondary school with that of students who had applied for Head Start but did not attend
Design: Post-test only, followed up three cohorts of Head Start graduates; one cohort, the 1978-79 group, was within the scope of our study; comparison group selected from waiting list for each respective year
Population: Children continuously enrolled in the Montgomery County, Maryland, school system between 1980 and 1984 and currently in the fourth grade
Sample: Head Start group comprised 411 children; the comparison group had 89
Head Start program year(s): 1978-79
Measures/instrumentation: California Achievement Test, Cognitive Abilities Test, special education placements, and grade retention
Findings: The Head Start group had a higher percentage of students who scored above the 80th percentile on one of the subtests of the Cognitive Abilities Test administered in the third grade.

A Comparison of the Academic Achievement of Urban Second Grade Pupils With Different Forms of Public Preschool Experience
Author: Elva Williams Hunt
Outcome area studied: Cognitive
Overview of study: Comparison of the academic achievement of urban second grade students from low-income families with different forms of public preschool experience
Design: Post-test only, followed up Head Start participants in second grade; comparison groups comprised students with public preschool (First Step) or no preschool experience
Population: Three cohorts of second grade students from the Newport News Public Schools
Sample: 74 former Head Start students, 92 former First Step preschool students, and 92 students with no preschool experience
Head Start program year(s): 1980-81, 1981-82, 1982-83
Measures/instrumentation: Standardized test scores and grade retention
Findings: Achievement test scores of the three groups were not significantly different. No conclusion was reached about the performance of Head Start students on the grade retention measure.
A Head Start Program Evaluation in Terms of Family Stress and Affect: A Pilot Study
Authors: Ron Iverson and others
Outcome area studied: Family
Overview of study: Assessment of the effect of a local Minnesota Head Start's family services on family stress levels
Design: Pretest/post-test, comparison groups selected from waiting-list families and from the local population
Population: Families with children enrolled in the Clay-Wilkin Opportunity Council Head Start program
Sample: 149 Head Start families were surveyed at the beginning of the program year and completed a post-test in May and a 1-year follow-up the following May. Twenty-one waiting-list families and 35 randomly selected families with young children from the general population were surveyed as comparison groups.
Head Start program year(s): 1991-92, 1992-93
Measures/instrumentation: Index of Family Stress and Adjustment. Stress was measured by a correlated subscale of 13 of the original 55 stress items, and affect was measured by a correlated subscale of 9 of the original 30 affect items.
Findings: Head Start and waiting-list families were both significantly higher in stress means and lower in affect means than the general population families. Head Start families were both significantly lower in stress and significantly higher in affect than the waiting-list families at Head Start post-test. Head Start gains in both stress and affect measures appeared to reverse at the 1-year follow-up, but the changes were not significant.

Final Report: The Head Start Family Impact Project
Authors: Robert K. Leik and Mary Anne Chalkley
Outcome area studied: Family
Overview of study: Studied family functioning and optimal involvement of parents in Head Start
Design: Pretest/post-test of two treatment groups, regular Head Start and an enriched program, with a comparison group selected from the Head Start waiting list
Population: Head Start participants in Minneapolis, Minnesota
Sample: 51 families in regular Head Start, 30 families in the enriched program, and 21 waiting-list families
Head Start program year(s): 1986-87
Measures/instrumentation: Various measures of family characteristics, mother's evaluation of her child's behavior and competence, and children's feelings of competence and social acceptance
Findings: Head Start families exhibited large and significant changes in family cohesion and adaptability. Mothers in both Head Start groups increased their evaluation of their children's competence. Children in all samples increased their sense of competence and acceptance.

A Longitudinal Study to Determine the Effects of Head Start Participation on Reading Achievement in Grades Kindergarten Through Six in Troy Public Schools
Author: Paula J. Nystrom
Outcome area studied: Cognitive
Overview of study: Review of the impact of the Head Start program on the academic achievement of children
Design: Post-test only, followed up Head Start participants in kindergarten through sixth grade; comparison group comprised children who had not attended Head Start but who were similar to the Head Start group on certain demographic variables
Population: Head Start participants from three schools in Troy, Michigan
Sample: 54 Head Start children and 54 comparison children
Head Start program year(s): 1980-81, 1981-82, 1982-83, 1983-84
Measures/instrumentation: Metropolitan Readiness Test, Gates-MacGinitie Reading Test, Iowa Tests of Basic Skills, Cognitive Abilities Test, special education placements, and grade retention
Findings: Mean scores at kindergarten were higher for the Head Start group than for the comparison group; no significant differences were found at any of the other grade levels. An analysis of change in scores over time showed a significant difference in favor of the Head Start group. No significant differences were found between the Head Start and comparison groups for special education placement and grade retention.

A Comparison of Long Range Effects of Participation in Project Head Start and Impact of Three Differing Delivery Models
Author: Yvonne B. Reedy
Outcome area studied: Cognitive, socioemotional, and family
Overview of study: Investigated possible differences among groups of children receiving Head Start through three different delivery models
Design: Post-test only, followed up Head Start participants after 2 to 4 years in public schools to examine the long-range effects of different delivery models; comparison group comprised children who might have attended Head Start but did not
Population: Head Start participants in rural Pennsylvania
Sample: 18 children for each of the three groups (classroom, mixed model, and home based) and 18 children in the control group
Head Start program year(s): Not specified
Measures/instrumentation: Woodcock-Johnson Psychoeducational Battery - Part II, Tests of Achievement; PPVT-R; Child Behavior Checklist - Parent Rating Scale; Child Behavior Checklist - Teacher Rating Scale; Vineland Adaptive Behavior Scale - Survey Form; and Head Start Follow-up Family Questionnaire
Findings: No differences were found among Head Start and non-Head Start children in reading, math, written language, or receptive language. Levels were in the average range when compared with national norms. Head Start children obtained significantly higher mean scores on the measure of general knowledge. Head Start children had significantly lower mean scores on both subscales and the total scale on the measure of maladaptive behavior. Correlations with teacher reports were significant. On the socialization scale, differences were not significant at p = .05. On the adaptive behavior measures, the non-Head Start children obtained significantly higher means on the communication, daily living skills, and social skills domains, as well as on the total adaptive behavior score.
On the parent questionnaire, non-Head Start parents reported they felt less capable of providing a good learning environment, spent less time working with the child on homework or other learning activities, were less likely to seek information about age-appropriate expectations, were more likely to resort to spanking as a form of discipline, were less able to find community services when needed and to feel their involvement with their child’s education had resulted in any noticeable accomplishments. On the daily living skills, social skills, and total independent living scales, the children in the classroom model obtained lower means than the two groups who received home visits. Parents of children in the classroom model reported they spent smaller amounts of time working with their children at home, and they were less likely to seek out information about age-appropriate information and to feel that their involvement in their children’s education resulted in any noticeable accomplishments. A Study of Duration in Head Start and Its Impact on Second Graders’ Cognitive Skills Author: Joyce Harris Roberts Outcome area studied: Cognitive and socioemotional Overview of study: Assessed the impact of Head Start programming on later school success and the development of social competence in its graduates Design: Post-test only, compared 1-year Head Start participants with 2-year Head Start participants and a non-preschool comparison group of second grade classmates Population: Second grade students in four public schools in a large suburban school district Sample: 30 children with 1 year of Head Start year, 22 children with 2 years of Head Start, and 33 children with no preschool Head Start program year(s): 1978-79, 1979-80 Measures/instrumentation: Locus of Control Scale for Children - Pre-School and Primary, Form A; Self-Concept Inventory; and Cognitive Abilities Test (Primary Level) Findings: No significant difference was found between groups on any measure. 
Changes in Mental Age, Self-Concept, and Creative Thinking in Ethnically Different 3- and 4-Year-Old Head Start Students
Author: Linda L.B. Spigner
Outcome area studied: Cognitive
Overview of study: Studied Head Start participants' progress after 8 months of Head Start participation and conducted home interviews with the children who showed the highest gains and the children who made the least progress
Population: Head Start participants in a north Texas community
Sample: 37 Head Start participants
Head Start program year(s): Exact year not specified
Measures/instrumentation: Bankson Language Screening Test, Developmental Test of Visual-Motor Integration, Peabody Picture Vocabulary Test, Self-Concept Adjective Checklist, and Torrance Tests of Creative Thinking
Findings: An average mental age gain of almost 11 months was significant at the .01 level, as were gains in self-concept and creative thinking.

Learning by Leaps & Bounds
Author: Texas Instruments Foundation, Head Start of Greater Dallas, and Southern Methodist University
Outcome area studied: Cognitive
Overview of study: Followed up Margaret H. Cone Preschool Head Start program participants; cohorts 4, 5, and 6 participated in a new Language Enrichment Activities Program
Design: Post-test only; comparison group comprised classmates who did not attend the Margaret H. Cone Preschool
Population: Six cohorts of children attending the Margaret H. Cone Preschool Head Start program in Dallas, Texas
Sample: Cohorts ranged from about 30 to 58 children
Head Start program year(s): 1990-96
Measures/instrumentation: Battelle Developmental Inventory, PPVT-R, Clinical Evaluation of Language Fundamentals - Preschool, and Iowa Test of Basic Skills
Findings: Results for cohorts 2, 3, 4, 5, and 6 revealed a pattern of improved performance in vocabulary, language skills, concept development, and social-adaptive skills during the years of the language enrichment program.
Early Childhood Educational Intervention: An Analysis of Nicholas County, Kentucky, Head Start Program Impacts From 1974-1986
Author: Marium T. Williams
Outcome area studied: Cognitive
Overview of study: Examined the impact of the Nicholas County Head Start Program over a 12-year period
Design: Post-test only; followed up Head Start participants in first grade through sixth grade; comparison groups were selected from comparable first grade enrollment
Population: Children who entered first grade in 1975, 1976, 1979, 1980, and 1981 in Nicholas County, Kentucky; the first three groups are outside our period of study
Sample: 14 Head Start and 9 comparison children for 1979-80 and 11 Head Start and 10 comparison children for 1980-81
Head Start program year(s): 1979-80, 1980-81
Measures/instrumentation: Comprehensive Tests of Basic Skills, Cognitive Skills Index, mathematics and reading/English grades, Kentucky Essential Skills Test, special education placements, and grade retention
Findings: No significant differences were found for most comparisons. Reading scores for the Head Start children were significantly better than those of the comparison group in 3 of the 6 years at the .05 level of significance.

Is an Intervention Program Necessary in Order to Improve Economically Disadvantaged Children's IQ Scores?
Authors: Edward Zigler and others
Outcome area studied: Cognitive
Overview of study: Studied changes in intelligence quotient scores of children attending Head Start
Design: Pretest/post-test; comparison group comprised Head Start-eligible children not attending Head Start; testing was done at three points in the Head Start year
Population: Preschool children from economically disadvantaged families living in low-income, inner-city neighborhoods in New Haven, Connecticut
Sample: 59 Head Start children and 25 comparison children
Head Start program year(s): Not stated
Measures/instrumentation: Stanford-Binet Intelligence Scale, Form L-M
Findings: Both groups improved from test to retest, a result attributed to familiarity with the testing situation. Only the Head Start group continued to show improvement on the post-test, which was interpreted as reflecting changes in the children's motivation from attending a preschool intervention program.

Comments From the Department of Health and Human Services

Acknowledgments
Many researchers and early childhood experts provided valuable assistance and information used in producing this report. In particular, we wish to acknowledge the following individuals who reviewed the draft report: Dr. Richard Light, Harvard University; Dr. Mark Lipsey, Vanderbilt University; Greg Powell, National Head Start Association; and Dr. Edward Zigler, Yale University. Although these reviewers provided valuable comments, they do not necessarily endorse the positions taken in the report.

GAO Contacts and Staff Acknowledgments
In addition to those named above, the following individuals made important contributions to this report: Sherri Doughty managed the literature search and co-wrote the report, Wayne Dow led the literature review and screening, and Paula DeRoy performed the literature searches and collected the manuscripts.
Related GAO Products
Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996).
Head Start: Information on Federal Funds Unspent by Program Grantees (GAO/HEHS-96-64, Dec. 29, 1995).
Early Childhood Centers: Services to Prepare Children for School Often Limited (GAO/HEHS-95-21, Mar. 21, 1995).
Program Evaluation: Improving the Flow of Information to the Congress (GAO/PEMD-95-1, Jan. 30, 1995).
Early Childhood Programs: Parent Education and Income Best Predict Participation (GAO/HEHS-95-47, Dec. 28, 1994).
Early Childhood Programs: Local Perspectives on Barriers to Providing Head Start Services (GAO/HEHS-95-8, Dec. 21, 1994).
Early Childhood Programs: Multiple Programs and Overlapping Target Groups (GAO/HEHS-95-4FS, Oct. 31, 1994).
Early Childhood Programs: Many Poor Children and Strained Resources Challenge Head Start (GAO/HEHS-94-169BR, May 17, 1994).
Infants and Toddlers: Dramatic Increase in Numbers Living in Poverty (GAO/HEHS-94-74, Apr. 7, 1994).
Poor Preschool-Age Children: Numbers Increase but Most Not in Preschool (GAO/HRD-93-111BR, July 21, 1993).
Pursuant to a congressional request, GAO reviewed the impact of the current Head Start Program, focusing on: (1) what the studies conducted on current Head Start programs suggest about Head Start's impact; and (2) what types of Head Start studies are planned by the Department of Health and Human Services (HHS). GAO noted that: (1) although an extensive body of literature exists on Head Start, only a small part of this literature is program impact research; (2) this body of research is inadequate for use in drawing conclusions about the impact of the national program in any area in which Head Start provides services, such as school readiness or health-related services; (3) not only is the total number of studies small, but most of the studies focus on cognitive outcomes, leaving such areas as nutrition and health-related outcomes almost completely unevaluated; (4) individually, the studies suffer to some extent from methodological and design weaknesses, such as noncomparability of comparison groups, which call into question the usefulness of their individual findings; (5) in addition, no single study used a nationally representative sample so that findings could be generalized to the national program; (6) failing to find impact information in existing research, GAO examined HHS' research plans for Head Start; (7) planned research will focus on new or innovative service delivery strategies and demonstrations but will provide little information on the impact of regular Head Start programs; (8) HHS' planned research includes descriptive studies, studies of program variations involving new and innovative service delivery strategies and demonstration projects, and studies of program quality; (9) HHS officials, in explaining the agency's research emphasis, stated that early research has proven Head Start's impact; (10) such research, however, was conducted over 20 years ago and may no longer apply to today's program because of program changes and changes in the population served; 
(11) HHS also noted some ethical and methodological difficulties of conducting impact research, especially studies that would produce national estimates of program effect; (12) neither ethical nor methodological issues present an insurmountable deterrent to conducting research on Head Start's impact; and (13) moreover, the size and cost of the program appear to warrant an investment in such research.
Background
Following the 1998 terrorist attacks on our embassies in Dar es Salaam, Tanzania, and Nairobi, Kenya, several investigative efforts cited the need for embassy rightsizing. In January 1999, the Accountability Review Boards recommended that State look into decreasing the size and number of embassies and consulates to reduce employees' vulnerability to attack. To follow up on the boards' recommendations, OPAP reported in November 1999 that overseas staffing levels had not been adjusted to reflect changing missions and requirements; thus, some embassies were too large and some were too small. OPAP said rightsizing was an essential component of an overall program to upgrade embassy and consulate capabilities, and it recommended that this be a key strategy to improve security by reducing the number of staff at risk. OPAP also viewed rightsizing as a way to decrease operating costs by as much as $380 million annually if a 10 percent worldwide staffing reduction could be achieved. The panel recommended creating a permanent interagency committee to adopt a methodology to determine the appropriate size and locations for the U.S. overseas presence. It also suggested a series of actions to adjust overseas presence, including relocating some functions to the United States and to regional centers where feasible. In response to OPAP's recommendations, in February 2000, President Clinton directed the secretary of state to lead an interagency effort to (1) develop a methodology for assessing embassy staffing, and (2) recommend adjustments, if necessary, to staffing levels at six pilot study embassies. While the interagency committee did mention some potential areas for staff reductions, our review of its efforts found that the committee was not successful in developing such a methodology. 
In fact, the committee concluded that it was impractical to develop a standard approach because of differences among embassies. Moreover, we reported that the pilot studies had limited value because they were conducted without focused, written guidelines, and committee members did not spend enough time at each embassy for a thorough evaluation.

In August 2001, The President's Management Agenda identified rightsizing as one of the administration's priorities. In addition, the president's fiscal year 2003 international affairs budget highlighted the importance of making staffing decisions based on mission priorities and costs and directed OMB to analyze agencies' overseas staffing and operating costs. In addition to citing the importance of examining the U.S. overseas presence at a broad level, rightsizing experts have highlighted the need for reducing the size of specific embassies. In November 1999, the chairman of OPAP said that rightsizing embassies and consulates in western Europe could result in significant savings, given their large size. OPAP proposed that flagship posts from the cold war be downsized while some posts in other parts of the world be expanded. A former undersecretary of state agreed that some embassies in western Europe were heavily staffed and that positions could be reallocated to meet critical needs at other embassies. A former U.S. ambassador to France, also a member of OPAP, testified in April 2000 that the Paris Embassy was larger than needed and should be a candidate for substantial staff reductions to lessen security vulnerabilities, streamline embassy functions, and decrease costs.

Proposed Rightsizing Framework
Although there is general agreement on the need for rightsizing the U.S. overseas presence, there is no consensus on how to do it. As a first step, we believe it is feasible to create a framework that includes a set of questions to guide decisions on overseas staffing. 
We identified three critical elements that should be evaluated together as part of this framework: (1) physical security and real estate, (2) mission priorities and requirements, and (3) operational costs. If the evaluation shows problems, such as security risks, decision makers should then consider the feasibility of rightsizing options. Figure 1 further illustrates the elements of our framework that address desired staffing changes. We envision State and other agencies in Washington, D.C., including OMB, using our framework as a guide for making overseas staffing decisions. For example, State and other agencies could use our framework to free up resources at oversized posts, to reallocate limited staffing resources worldwide, and to introduce greater accountability into the staffing process. We can also see ambassadors using this framework to ensure that embassy staffing is in line with security concerns, mission priorities and requirements, and costs to reduce the number of people at risk. The following sections describe in more detail the three elements of the framework we are developing, some important questions to consider for each element, and potential rightsizing options to be considered.

Physical Security and Real Estate
The substantial loss of life caused by the bombings of the U.S. embassies in Africa and the ongoing threats against U.S. diplomatic buildings have heightened concern about the safety of our overseas personnel. The State Department has determined that about 80 percent of embassy and consulate buildings do not fully meet security standards. Although State has a multibillion-dollar plan under way to address security deficiencies around the world, security enhancements cannot bring most existing facilities in line with the desired setback and related blast protection requirements. Recurring threats to embassies and consulates highlight the importance of rightsizing as a tool to reduce the number of embassy employees at risk. 
What Is the Threat and Security Profile of the Embassy?
The Accountability Review Boards recommended that the secretary of state review the security of embassies and consider security in making staffing decisions. We agree that the ability to protect personnel should be a key factor in determining the staffing levels of embassies. State has prepared a threat assessment and security profile for each embassy that can be used when assessing staff levels. While chiefs of mission and the State Department have primary responsibility for assessing overseas security needs and allocating security resources, all agencies should consider the risks associated with maintaining staff overseas.

What Actions Are Practical to Improve the Security of Facilities?
There are a variety of ways to improve security, including constructing new buildings, adding security enhancements to existing buildings, and working with host country law enforcement agencies to increase embassy protection. In addition, space utilization studies may suggest alternatives for locating staff to more secure office buildings or may point to other real estate options, such as leasing commercial office space. If security and facilities reviews suggest that security enhancements, alternative space arrangements, or new secure real estate options are impractical, then decision makers should consider rightsizing actions. The Paris Embassy, our case study, illustrates the importance of security and real estate issues in determining overseas staffing levels. The security situation in Paris is poor and suggests the need to consider reducing staff. None of the embassy's office buildings currently meets security standards. One of the buildings is particularly vulnerable, and staff face a variety of threats. Space reengineering and security adjustments to embassy buildings may improve security for some embassy staff, but significant vulnerabilities will remain even after planned changes are made. 
However, it is difficult to assess the full range of options for the embassy in Paris because State does not have a comprehensive plan identifying facilities and real estate requirements. If the State Department decides it is not feasible to build or lease another office building in Paris that would provide better security, then decision makers will need to seriously consider relocating staff to reduce the number of people at risk.

Mission Priorities and Requirements
The placement and composition of staff overseas must reflect the highest priority goals of U.S. foreign policy. Moreover, The President's Management Agenda states that U.S. government overseas staffing levels should be the minimum necessary to serve U.S. foreign policy goals.

What Are the Priorities of the Embassy?
Currently, there is no clear basis on which to evaluate an embassy's mission and priorities relative to U.S. foreign policy goals. State's current Mission Performance Plan process does not distinguish the relative importance of U.S. strategic goals. In recent months, State has revised the Mission Performance Plan process to require each embassy to set five top priorities and link staffing and budgetary requirements to fulfilling these priorities. A successful delineation of mission priorities will complement the framework we are developing and support future rightsizing efforts to adjust the composition of embassy staff.

Are Workload Requirements Validated and Prioritized?
Embassy requirements include influencing policy of other governments, assisting Americans abroad, articulating U.S. policy, handling official visitors, and providing input for various reports and requests from Washington. In 2000, based on a review of six U.S. embassies, the State-led interagency committee found the perception that Washington's requirements for reports and other information requests were not prioritized and placed unrealistic demands on staff. 
We found the same perception among some offices in Paris. We believe that scrutiny of workload could potentially identify work of low priority, such as reporting that has outlived its usefulness. Currently, the department monitors and sends incoming requests for reports and inquiries to embassies and consulates, but it rarely refuses requests and leaves prioritization of workload to the respective embassies and consulates. Washington's demands on an embassy need to be evaluated in light of how they affect the number of staff needed to meet the work requirements.

How Do Agencies Determine Staffing Levels?
The President's Management Agenda states that there is no mechanism to assess the overall rationale for and effectiveness of where and how many U.S. employees are deployed. Each agency in Washington has its own criteria for placing staff overseas. Some agencies have more flexibility than others in placing staff overseas, and Congress mandates the presence of others. Thorough staffing criteria are useful for determining and reassessing staffing levels and would allow agencies to better justify the number of overseas staff.

Could an Agency's Mission Be Pursued in Other Ways?
Some agencies are entirely focused on the host country, while others have regional responsibilities or function almost entirely outside the country in which they are located. Some agencies have constant interaction with the public, while others require interaction with their government counterparts. Some agencies collaborate with other agencies to support the embassy's mission, while others act more independently and report directly to Washington. Analyzing where and how agencies conduct their business overseas may lead to possible rightsizing options. Our work in Paris highlights the complexity of rightsizing the U.S. overseas presence given the lack of clearly stated mission priorities and requirements and demonstrates the need for a more disciplined process. 
It is difficult to assess whether 700 people are needed at the embassy because the executive branch has not identified its overall priorities and linked them to resources. For example, the current Mission Performance Plan for the Paris Embassy includes 15 of State's 16 strategic goals. Furthermore, the cumulative effect of Washington's demands inhibits some agencies' ability to pursue their core missions in Paris. For example, the economics section reported that Washington-generated requests resulted in missed opportunities for assessing how U.S. private and government interests are affected by the many ongoing changes in the European banking system. We also found that the criteria to locate staff in Paris vary significantly by agency. Some agencies use detailed staffing models, but most do not. Nor do they consider embassy priorities or the overall requirements on the embassy in determining where and how many staff are necessary. In addition, some agencies' missions do not require them to be located in Paris. Given the security vulnerabilities, it makes sense for these agencies to consider rightsizing options.

Cost of Operations
The President's Management Agenda noted that the true costs of sending staff overseas are unknown. Without cost data, decision makers cannot determine whether a correlation exists between costs and the work being performed, nor can they assess the short- and long-term costs associated with feasible business alternatives.

What Are an Embassy's Operating Costs?
We agree with President Bush that staffing decisions need to include a full range of factors affecting the value of U.S. presence in a particular country, including the costs of maintaining the embassy. Nevertheless, we found there is no mechanism to provide the ambassador and other decision makers with comprehensive data on all agencies' costs of operations at an embassy. 
This lack of cost data for individual embassies makes linking costs to staffing levels, mission priorities, and desired outcomes impossible. This is a long-standing management weakness that, according to the president, needs to be corrected.

Are Costs Commensurate With Expected Outcomes?
Once costs are known, it is important to relate them to the embassy's performance. This will allow decision makers to assess the relative cost effectiveness of various program and support functions and to make cost-based decisions when setting mission priorities and staffing levels and when determining the feasibility of alternative business approaches. Our work in Paris demonstrates that this embassy is operating without fundamental knowledge and use of comprehensive cost data. State officials concurred that it is difficult to fully record the cost of all agencies overseas because of inconsistent accounting and budgeting systems. However, we determined that the cost of an embassy's operations can be documented, despite difficulties in compiling data for the large number of accounts and agencies involved. To collect cost information, we developed a template to capture different categories of operating costs, such as salaries and benefits, and applied the template to each agency at the embassy and at consulates and other sites throughout France (see app. III). We have documented the total cost for all agencies operating in France in fiscal year 2001 to be about $100 million. However, the actual cost is likely higher because some agencies did not report costs associated with staff salaries and benefits and discrepancies exist in the reporting of some operating costs. With comprehensive data, the Paris Embassy could make cost-based decisions when conducting a rightsizing analysis.

Consideration of Rightsizing Options
Analyses of security, mission, and costs may suggest the assignment of more or fewer staff at an embassy or an adjustment to the overall staff mix. 
If decision makers decide that it is necessary to reduce staff, rightsizing experts have recommended that embassies consider alternative means of fulfilling mission requirements. Moreover, President Bush has told U.S. ambassadors that “functions that can be performed by personnel in the U.S. or at regional offices overseas should not be performed at a post.” In considering options, embassy officials will also have to weigh the security, mission effectiveness, and cost trade-offs. These may include the strategic importance of an embassy or the costs of adopting different management practices. Our analysis highlights five possible options, but this list is not exhaustive. These options include: relocating functions to the United States; relocating functions to regional centers; relocating functions to other locations under chief of mission authority where relocation back to the United States or to regional centers is not practical; purchasing services from the private sector; and streamlining outmoded or inefficient business practices. Each option has the potential to reduce staff in Paris and the associated security vulnerability. Specifically: Some functions at the Paris Embassy could be relocated to the United States. State is planning to relocate more than 100 budget and finance positions from the Financial Services Center in Paris to State’s financial center in Charleston, South Carolina, by September 2003. In addition, we identified other agencies that perform similar financial functions and could probably be relocated. For example, four Voice of America staff pay correspondent bureaus and freelance reporters around the world and benefit from collocation with State’s Financial Services Center. The Voice of America should consider whether this function should also be relocated to Charleston in 2003. 
The Paris Embassy could potentially relocate some functions to the regional logistics center in Antwerp, Belgium, and the planned 23-acre secure regional facility in Frankfurt, Germany, which has the capacity for approximately 1,000 people. For example, the Antwerp facility could handle part of the embassy's extensive warehouse operation, which is currently supported by about 25 people. In addition, some administrative operations at the embassy such as procurement could potentially be handled out of the Frankfurt facility. Furthermore, staff at agencies with regional missions could also be moved to Frankfurt. These include a National Science Foundation representative who spent approximately 40 percent of his time in 2001 outside of France, four staff who provide budget and finance support to embassies in Africa, and some Secret Service agents who cover eastern Europe, central Asia, and parts of Africa. We identified additional positions that may need to be in Paris but may not need to be in the primary embassy buildings where secure space is at a premium. For example, the primary function of the National Aeronautics and Space Administration (NASA) representative is to act as a liaison to European space partners. Accomplishing this work may not require retaining office space at the embassy. The American Battle Monuments Commission already has about 25 staff in separate office space in a suburb of Paris. In addition, a Department of Justice official works in an office at the French Ministry of Justice. However, dispersal of staff raises additional security issues that need to be considered. Given Paris' modern transportation and communication links and large private sector service industry, the embassy may be able to purchase services from the private sector, which would reduce the number of full-time staff at risk at the embassy. 
We identified as many as 50 positions at the embassy that officials in Washington and Paris agreed are commercial in nature, including painters, electricians, plumbers, and supply clerks. Streamlining or reengineering outmoded or inefficient functions could help reduce the size of the Paris Embassy. Certain procurement procedures could potentially be streamlined, such as consolidating multiple purchase orders with the same vendor and increasing the use of government credit cards for routine actions. Consolidating inefficient inventory practices at the warehouse could also decrease staff workload. For instance, household appliances and furniture are maintained separately with different warehouse staff responsible for different inventories. Purchasing furniture locally at embassies such as Paris could also reduce staffing and other requirements. As others have pointed out, advances in technology, increased use of the Internet, and more flights from the United States may reduce the need for full-time permanent staff overseas. Moreover, we have reported in the past about opportunities to streamline embassy functions to improve State’s operations and reduce administrative staffing requirements, including options to reduce residential housing and furniture costs. Implementing a Rightsizing Framework Mr. Chairman, although it is only one of the necessary building blocks, the framework we are developing can be the foundation for future rightsizing efforts. However, a number of policy issues and challenges need to be addressed for this process to move forward with any real success. For instance, the executive branch needs to prioritize foreign policy goals and objectives and insist on a link between those goals and staffing levels. Developing comprehensive cost data and linking budgets and staffing decisions are also imperative. 
To their credit, State and OMB appear to be headed in the right direction on these issues by seeking cost data and revising embassies’ mission performance planning process, which we believe will further support a rightsizing framework. We plan to do more work to expand and validate our framework. The previous discussion shows that the framework we are developing can be applied to the Paris Embassy. We also believe that the framework can be adjusted so that it is applicable worldwide because the primary elements of security, mission, and costs are the key factors for all embassies. In fact, rightsizing experts told us that our framework was applicable to all embassies. Nevertheless, we have not tested the framework at other embassies, including locations where the options for relocation to regional centers or the purchase of services from the private sector are less feasible. We believe that the next stage should also focus on developing a mechanism to ensure accountability in implementing a standard framework. Rightsizing experts and officials we spoke with suggested several different options. These options include establishing an interagency body similar to the State-led committee that was formed to implement OPAP’s recommendations; creating an independent commission comprising governmental and nongovernmental members; or creating a rightsizing office within the Executive Office of the President. Some State Department officials have suggested that State adopt an ambassadorial certification requirement, which would task ambassadors with periodically certifying in writing that the size of their embassies and consulates is consistent with security, mission, and cost considerations. Each of these suggestions appears to have some merit but also faces challenges. First, an interagency committee would have to work to achieve coordination among agencies and have leadership that can speak for the entire executive branch.
Second, an independent commission, perhaps similar to OPAP, would require members of high stature and independence and a mechanism to link their recommendations to executive branch actions. Third, a separate office in the White House has potential, but it would continually have to compete with other executive branch priorities and might find it difficult to stay abreast of staffing issues at over 250 embassies and consulates. Finally, an ambassadorial certification process is an interesting idea, but it is not clear what, if anything, would happen if an ambassador were unwilling to make a certification. Furthermore, ambassadors may be reluctant to take on other agencies’ staffing decisions, and in such situations the certification could essentially become a rubber-stamp process. Ultimately, the key to any of these options will be a strong bipartisan commitment by the responsible legislative committees and the executive branch. Mr. Chairman and members of the subcommittee, this concludes my prepared statement. I would be pleased to answer questions you may have.

Contacts and Acknowledgments

For future contacts regarding this testimony, please call Jess Ford or John Brummet at (202) 512-4128. Individuals making key contributions to this testimony included Lynn Moore, David G. Bernet, Chris Hall, Melissa Pickworth, Kathryn Hartsburg, and Janey Cohen.

Appendix I: Proposed Rightsizing Framework and Corresponding Questions

PHYSICAL SECURITY AND REAL ESTATE
What are the threat and security profiles?
Do office buildings provide adequate security?
Is existing secure space being optimally utilized?
What actions are practical to improve the security of facilities?
Do facilities and security issues put the staff at an unacceptable level of risk or limit mission accomplishment?
Will rightsizing reduce security vulnerabilities?

MISSION PRIORITIES AND REQUIREMENTS
What are the staffing and mission of each agency?
What is the ratio of support staff to program staff at the embassy?
What are the priorities of the embassy?
Does each agency’s mission reinforce embassy priorities?
Are workload requirements validated and prioritized, and is the embassy able to balance them with core functions?
Are any mission priorities not being addressed?
How do agencies determine embassy staffing levels?
Could an agency’s mission be pursued in other ways?
Does an agency have regional responsibilities, or is its mission entirely focused on the host country?

COST OF OPERATIONS
What is the embassy’s total annual operating cost?
What are the operating costs for each agency at the embassy?
Are agencies considering the full cost of operations in making staffing decisions?
Are costs commensurate with overall embassy importance and with specific embassy outputs?

CONSIDERATION OF RIGHTSIZING OPTIONS
What are the security, mission, and cost implications of relocating certain functions to the United States, regional centers, or other locations, such as commercial space or host country counterpart agencies?
Are there secure regional centers in relatively close proximity to the embassy?
Do new technologies offer greater opportunities for operational support from other locations?
Do the host country and regional environment have the means for doing business differently, i.e., are there adequate transportation and communications links and a vibrant private sector?
To what extent can embassy business activities be purchased from the private sector at a reasonable price?
What are the security implications of increasing the use of contractors over direct hires?
Can costs associated with embassy products and services be reduced through alternative business approaches?
Can functions be reengineered to provide greater efficiencies and reduce requirements for personnel?
Are there other rightsizing options evident from the size, structure, and best practices of other bilateral embassies or private corporations?
Are there U.S. or host country legal, policy, or procedural obstacles that may impact the feasibility of rightsizing options?

Appendix II: Staffing Profile of the Paris Embassy (Jan. 2, 2002)
Rightsizing is the aligning of the number and location of staff assigned to U.S. embassies with foreign policy priorities, security, and other constraints. GAO is developing a framework to enable the executive branch to assess the number and mix of embassy staff. The framework will link staffing levels to the following three critical elements of overseas operations: (1) physical security and real estate, (2) mission priorities and requirements, and (3) operational costs. GAO reviewed policies and practices at the U.S. Embassy in Paris because of its large size and history of rightsizing decisions. GAO found that about 700 employees from 11 agencies work in main buildings at the Paris Embassy. Serious security concerns in at least one embassy building in Paris suggest the need to consider staff reductions unless building security can be improved. Staffing levels are hard to determine because agencies use different criteria and priorities to place staff. The lack of comprehensive cost data on all agencies' operations, whose combined cost is estimated at more than $100 million annually in France, and the lack of an embassywide budget preclude cost-based decisionmaking on staffing. The number of staff could be reduced, particularly those in support positions, which constitute about one-third of the total. Options include relocating functions to the United States or to regional centers and outsourcing commercial activities.
Background

Postsecondary institutions that serve large proportions of low-income and minority students are eligible to receive grants from Education through programs authorized under Title III and Title V of the Higher Education Act, as amended. Institutions eligible to receive these grants include historically black colleges and universities, Hispanic-serving institutions, tribally controlled colleges and universities, Alaska Native-serving institutions and Native Hawaiian-serving institutions, and other undergraduate postsecondary institutions that serve large numbers of low-income students. In 2007, Congress authorized new programs for other categories of minority-serving institutions, including predominantly black institutions, Native American-serving nontribal institutions, and Asian American and Native American Pacific Islander-serving institutions. Funding for Title III and V programs included in our review has increased significantly over the past 10 years. In fact, funding almost tripled from fiscal year 1999 to fiscal year 2009, increasing from $230 million to $681 million (see table 1). In addition, fiscal year 2009 funding for the three new Title III programs created in 2007 was $30 million. While the institutions included in these programs differ in terms of the racial and ethnic makeup of their students, they serve a disproportionate number of financially needy students and have limited financial resources, such as endowment funds, with which to serve them. The Higher Education Act outlines broad goals for these grants, but provides flexibility to institutions in deciding what approaches will best meet their needs. An institution can use the grants to focus on one or more activities to address challenges articulated in its comprehensive development plan, which is required as part of the grant application and must include the institution’s strategy for achieving growth and self-sufficiency.
Under Education’s program guidance, institutions are allowed to address challenges in four broad focus areas: academic quality, student support services, institutional management, and fiscal stability. For example, funds can be used to support faculty development; purchase library books, periodicals, and other educational materials; hire tutors or counselors for students; improve educational facilities; or build endowments.

Long-Standing Deficiencies in Grant Monitoring and Technical Assistance Limit Education’s Ability to Ensure That Funds Are Used Properly and Grantees Are Supported

Education Has Made Limited Progress toward Implementing a Systematic Approach to Monitoring and Technical Assistance

GAO and Education’s Inspector General have recommended multiple times that Education implement a systematic monitoring approach to better assess the fiscal and programmatic performance of Title III and V grantees. Such an approach would include implementing formal monitoring and technical assistance plans based on risk models and developing written procedures for providing technical assistance. In 2004, for example, we recommended that Education complete its electronic monitoring system and training programs to ensure its monitoring plans are carried out and target at-risk grantees. In our 2009 report, however, we found that while Education had taken some steps to better target its monitoring in response to our previous recommendation, many of its initiatives had yet to be fully realized. Accordingly, we recommended that the Secretary of Education develop a comprehensive, risk-based approach to target grant monitoring and technical assistance based on the needs of grantees. Education officials agreed with this recommendation and told us that they were working to implement it. At this time, however, Education is still in the process of modifying its monitoring approach, and it is too early to determine the effectiveness of its efforts.
Table 2 summarizes the status of Education’s key monitoring initiatives, followed by a more detailed discussion of each initiative. In 2009, we found that Education had made progress in automating its monitoring tools and developing risk-based criteria. Specifically, Education redesigned its electronic monitoring system in 2007 to add several key enhancements which, if fully integrated into the oversight activities of program staff, have the potential to improve the quality and consistency of monitoring. The redesigned system brings together information about an institution’s performance in managing its entire portfolio of higher education grants, increasing Education’s ability to assess the risk of grantee noncompliance with program rules. Program officers can also enter into the system updates about a grantee’s performance, based on routine interactions with the grantee. Because the system integrates financial and programmatic data, such as institutional drawdown of grant funds and annual performance reports, staff have ready access to information needed to monitor grantees. However, it will be important for Education to ensure that staff use the system to appropriately monitor grantee performance. For example, our 2009 report found that program staff did not consistently review the annual performance reports grantees are required to submit—reports that provide key information to determine whether grantees have demonstrated adequate progress to justify continued funding. Education officials reported that they have established new processes and a new form to ensure that staff review these reports as part of their regular monitoring activities. Another feature of the system is a monitoring index, implemented in 2008, that identifies institutions that need heightened monitoring or technical assistance based on criteria designed to assess risk related to an institution’s ability to manage its grants. 
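A monitoring index of this kind is essentially a rules-based risk score: each criterion contributes a weight, and institutions whose score crosses a threshold are flagged for heightened monitoring. The sketch below is purely illustrative; the criteria names, weights, and threshold are hypothetical assumptions for exposition, not Education's actual index.

```python
# Illustrative sketch of a rules-based monitoring index.
# Criteria, weights, and the threshold are hypothetical examples,
# not Education's actual risk model.

def monitoring_score(institution: dict) -> int:
    """Sum the weights of all risk criteria an institution triggers."""
    criteria = [
        # (criterion triggered?, weight)
        (institution.get("lost_accreditation", False), 100),      # automatic priority
        (institution.get("total_grants", 0) > 30_000_000, 100),   # automatic priority
        (institution.get("late_performance_reports", 0) > 0, 20),
        (institution.get("audit_findings", 0) > 0, 30),
    ]
    return sum(weight for triggered, weight in criteria if triggered)

def needs_heightened_monitoring(institution: dict, threshold: int = 50) -> bool:
    """Flag an institution when its risk score reaches the threshold."""
    return monitoring_score(institution) >= threshold

# An institution with grants over $30 million is flagged automatically:
school = {"total_grants": 31_000_000}
assert needs_heightened_monitoring(school)
```

A scored index like this makes the selection of schools for site visits reproducible and auditable, which is the property the report's recommendations turn on.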
For example, at the time of our 2009 report, an institution that had lost accreditation or had grants totaling more than $30 million was automatically prioritized for heightened monitoring, which could involve site visits or other contacts with the school. Since our 2009 report, Education has twice updated the index. For fiscal year 2010, Education officials told us they reduced the number of criteria to focus on those that it has found more accurately identify high-risk schools that are likely to be experiencing financial or management problems. The fiscal year 2010 index has identified 64 institutions across all higher education grant programs for heightened monitoring, half of which participate in Title III or V programs.

Annual Monitoring Plans

Our 2009 report found that Education still lacked a coordinated approach to guide its monitoring efforts. In 2002, Education directed each program within the agency to develop a monitoring plan to place greater emphasis on performance monitoring for all grantees and to consider what assistance Education could provide to help grantees accomplish program objectives. However, Education rescinded the requirement in 2006 because the practice did not achieve the intended purpose of better targeting its monitoring resources, and Education officials told us the program office for Title III and V grants discontinued the development of annual monitoring and technical assistance plans. Since our report was published, Education required all major program offices to develop a monitoring plan for fiscal year 2010. Officials from the office responsible for administering Title III and V programs said they submitted a monitoring plan for review in February 2010, and have been using the plan in draft form while waiting for it to be approved.
The plan for Title III and V programs outlines Education’s monitoring approach; describes various monitoring tools and activities, such as the monitoring index and site visits, and how they are to be used to target limited monitoring resources to the grantees that need them most; and reflects an increased focus on staff training. The monitoring plan also includes a section on next steps and performance measures, but Education has not consistently developed realistic, attainable, and measurable targets for each of the monitoring tools and activities outlined in the plan. For example, Education developed specific goals for the number of site visits and technical assistance workshops it would conduct, but it will consider these goals attained if it completes at least 75 percent of them. Additionally, under staff training, Education commits to offering fiscal monitoring training sessions, but it has not established measurable targets for how many staff will receive the training or how it will determine the effectiveness of the training in meeting staff needs.

Site Visits

With the implementation of an electronic monitoring system and risk-based monitoring index, Education now has tools to enhance its ability to select grantees for site visits, a critical component of an effective grants management program. Targeting grantees that need assistance or are at high risk of misusing grant funds is critical, given Education’s limited oversight resources and the expansion of its grant oversight responsibilities with the addition of new Title III programs created in 2007. In our 2009 report, however, we found that overall site visits to Title III and V grantees had declined substantially in recent years (see table 3), and Education was not making full use of its risk-based criteria to select grantees for visits.
Since our 2009 report, site visits to Title III and V grantees have remained limited, with six visits conducted in fiscal year 2009 and five visits completed more than halfway through fiscal year 2010. One former senior Education official told us that site visits had declined because the program office had limited staff and few had the requisite skills to conduct financial site visits. To obtain the experience and skills needed to conduct comprehensive site visits, Education leveraged staff from another office to conduct site visits for Title III and V programs in 2008, but Education officials recently told us that staff from that office have been dispersed and are no longer available to conduct site visits. They also told us they anticipate hiring four new program officers during the summer of 2010, but it is unclear what effect such hiring will have on Education’s ability to conduct site visits. Our 2009 report also found that the program office for Title III and V grants was not fully using its monitoring index to select high-risk schools for site visits. Aside from referrals from the Inspector General, Education officials told us they selected schools for fiscal year 2008 and 2009 site visits based on the total amount of higher education grants awarded (i.e., grantees receiving $30 million or more), which represented only 5 percent of the monitoring index criteria in these years. In response to our 2009 report, Education officials said that they would use the revised monitoring index to select half of the schools chosen for site visits. However, none of the five site visits completed so far in fiscal year 2010 was selected based on the monitoring index. Education officials told us that they have used the index to select five of the eight remaining site visits planned for 2010, but these have not been scheduled yet.
Using its monitoring index to select fewer than half of its site visits does not seem to be a fully risk-based approach, leaving open the possibility that Education will not target its limited resources to those grantees most likely to experience problems.

Staff Training

In our 2009 study, we reported that Education had made progress in developing grant monitoring courses to enhance the skills of Title III and V program staff, but skill gaps remained that limited their ability to fully carry out their monitoring and technical assistance responsibilities. For example, Education had developed courses on internal control and grants monitoring, but these courses were attended by fewer than half of the program staff. Senior Education officials also identified critical areas where additional training is needed. Specifically, one official told us that the ability of program staff to conduct comprehensive reviews of grantees had been hindered because they had not had training on how to review the financial practices of grantees. As a result, our 2009 report recommended that Education provide program staff with the training necessary to fully carry out their monitoring and technical assistance responsibilities. Education agreed with the recommendation and has developed additional training in key areas. Specifically, Education developed two courses on how to conduct programmatic and fiscal monitoring during a site visit, but only about half of the program officers have attended both courses so far. Education has also established a mentoring program that pairs new program officers with experienced staff. While Education is taking steps to develop training in needed skill areas, implementing an effective monitoring system will require sustained attention to training to ensure that all staff can perform the full range of monitoring responsibilities.
Technical Assistance

While Education provides technical assistance for prospective and current Title III and V grantees through preapplication workshops and routine interaction between program officers and grant administrators at the institutions, our 2009 report found that it had not made progress in developing a systemic approach that targeted the needs of grantees. According to one senior Education official, technical assistance is generally provided to grantees on a case-by-case basis at the discretion of program officers. Grantees we interviewed told us that Education does not provide technical assistance that is consistent throughout the grant cycle. Several officials complimented the technical assistance Education provided when they applied for grants, but some of those officials noted a precipitous drop in assistance during the first year after grants were awarded. During the initial year, grantees often need help with implementation challenges, such as recruiting highly qualified staff, securing matching funds for endowments, and overcoming construction delays. In the past, grantees had an opportunity to discuss such challenges at annual conferences sponsored by Education, but Education did not hold conferences for 3 years from 2007 to 2009, despite strong grantee interest in resuming them. According to Education officials, resource constraints prevented them from holding the conferences in those years. To improve the provision of technical assistance, our 2009 report recommended that Education disseminate information to grantees about common implementation challenges and successful projects and develop appropriate mechanisms to collect and use grantee feedback. In response, Education held a conference for all Title III and V grantees in March 2010, with sessions focused specifically on best practices.
Education officials told us that they plan to organize another conference in 2011 and said they will explore the use of webinars to share information with grantees that may be unable to attend. Education has also created an e-mail address for grantees to express concerns, ask questions, or make suggestions about the programs. The address is displayed on every program Web page and is monitored by an Education official not associated with the program office to allow grantees to provide anonymous feedback. In addition, Education officials reported that they have developed a customer satisfaction survey that the Office of Management and Budget has approved for distribution. The survey will be sent to new grantees and grantees that are near the end of their grant period and will obtain feedback on the quality of information provided before a grant is approved, the quality of technical assistance provided, and satisfaction with communications with the program office.

Education Lacks Assurance That Grant Funds Are Used Appropriately

Without a comprehensive approach to target its monitoring, Education lacks assurance that grantees appropriately manage federal funds, increasing the potential for fraud, waste, or abuse. In our 2009 report, we reviewed financial and grant project records at seven institutions participating in Title III and V programs in fiscal year 2006 and identified $142,943 in questionable expenses at 4 of the 7 institutions we visited (see table 4). At one institution—Grantee D—we identified significant internal control weaknesses and $105,117 in questionable expenditures. A review of grant disbursement records revealed spending with no clear link to the grant and instances in which accounting procedures were bypassed by the school’s grant staff.
Of the questionable expenditures we identified, $88,195 was attributed to an activity designed to promote character and leadership development, of which more than $79,975 was used for student trips to locations such as resorts and amusement parks. According to the grant agreement, the funds were to be used for student service learning projects; instead, more than $6,000 of grant funds was used to purchase a desk and chair, and another $4,578 was used to purchase an airplane global positioning system even though the school did not own an airplane. In purchasing the global positioning system and office furniture, a school official split the payments on an institutionally issued purchase card to circumvent limits established by the institution. Officials at the institution ignored multiple warnings about mismanagement of this activity from external evaluators hired to review the grant. Education visited the school in 2006 but found no problems, and recommended we visit the institution as an example of a model grantee. We referred the problems we noted at this institution to Education’s Inspector General for further investigation. Examples of the questionable expenditures we identified at the three other institutions we visited included the following:

At Grantee A, we were unable to complete testing for about $147,000 of grant fund transactions due to a lack of readily available supporting documentation. For one transaction that was fully documented, the grantee improperly used $2,127 in grant funds to pay late fees assessed to the college. Once we pointed out that grant funds cannot be used for this purpose, the college wrote a check to reimburse the grant.

Grantee B used $27,530 to prepay subscription and contract services that would be delivered after the grant expired.

Grantee F used more than $1,500 in grant funds to purchase fast food and more than $4,800 to purchase t-shirts for students.
Our 2009 report recommended that Education follow up on each of the improper uses of grant funds identified. In response, Education conducted a site visit to one institution in November 2009 and approved its corrective action plans. Education officials also reported that they visited two other institutions in April 2010 and plan to visit the fourth institution before November 2010.

Concluding Observations

We have recommended multiple times that Education implement a systemic approach to monitoring postsecondary institutions receiving Title III and V grants. As we reported in 2009, Education has made progress in developing tools—such as an electronic monitoring system and risk-based criteria—to assess potential risks, but it lacks a comprehensive risk-based monitoring and technical assistance approach to target its efforts. In the 9 months since our report was issued, Education has taken some steps to respond to our most recent recommendations, but it is too early to tell if it has fully embraced a risk-based monitoring approach. For example, Education is still not relying on its risk-based monitoring index to target site visits to schools at highest risk. Until Education is fully committed to such an approach, Title III and V funds will continue to be at risk for fraud, waste, or abuse. The internal control weaknesses and questionable expenditures we identified at some grantees we reviewed demonstrate the importance of having a strong and coordinated monitoring and assistance program in place, especially as Education is called on to administer additional programs and funding. Targeting monitoring and assistance to grantees with the greatest risk and needs is critical to ensuring that grant funds are appropriately spent and are used to improve institutional capacity and student outcomes. To do this effectively will require Education’s sustained attention and commitment. We will continue to track Education’s progress in fully implementing our recommendations. Mr.
Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other members of the subcommittee may have. For further information regarding this testimony, please contact George A. Scott at (202) 512-7215 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Debra Prescott (Assistant Director), Michelle St. Pierre, Carla Craddock, Susan Aschoff, and James Rebbe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Higher education has become more accessible than ever before, although students from some demographic groups still face challenges in attending college. To help improve access to higher education for minority and low-income students, Titles III and V of the Higher Education Act, as amended, provide grants to strengthen and support institutions that enroll large proportions of these students. GAO was asked to testify on the Department of Education's (Education) oversight of institutions receiving Title III or V grants and progress Education has made in monitoring the financial and programmatic performance of Title III and V grantees. GAO's testimony is based primarily on its recent report, Low-Income and Minority Serving Institutions: Management Attention to Long-standing Concerns Needed to Improve Education's Oversight of Grant Programs (GAO-09-309, August 2009), and updated information provided by Education. In that report, GAO recommended that Education, among other things, (1) develop a comprehensive, risk-based approach to target monitoring and technical assistance; (2) ensure staff training needs are fully met; (3) disseminate information about implementation challenges and successful projects; and (4) develop appropriate feedback mechanisms. No new recommendations are being made in this testimony. GAO's 2009 report found that Education had taken steps in response to previous GAO recommendations to improve its monitoring of Title III and V grants, but many of its initiatives had yet to be fully realized. A coordinated, risk-based approach, targeting monitoring and assistance to grantees with the greatest risk and needs, is critical, especially as Education's oversight responsibilities are expanding. Education agreed with GAO's 2009 recommendations and has begun taking steps to implement them, but it is too early to determine the effectiveness of these efforts, described below.
(1) Risk-based monitoring criteria: At the time of the 2009 report, Education had developed a monitoring index to identify high-risk institutions, but was not using it to target schools for site visits. Education committed to use the index to select half of its fiscal year 2010 site visits, but none of the visits completed to date were based on the monitoring index. (2) Annual monitoring plan: Because it stopped developing annual monitoring plans for Title III and V programs in 2006, GAO determined that Education lacked a coordinated approach to guide its monitoring efforts. Since then, Education has developed a 2010 monitoring plan, but some of the monitoring activities lack realistic and measurable performance goals. (3) Site visits and staff training: The 2009 report found that site visits to Title III and V grantees, a key component of an effective grants management program, had declined substantially in recent years and that staff lacked the skills to conduct financial site visits. Since then, site visits have remained limited, but Education has developed training courses to address the skill deficits identified that about half the program staff have attended. (4) Technical assistance: The 2009 report found Education had not made progress in developing a systemic approach to target the needs of grantees. In response to GAO's recommendations, Education has taken some steps to encourage grantee feedback and information sharing among grantees. Without a comprehensive approach to target its monitoring, GAO previously found that Education lacked assurance that grantees appropriately manage federal funds, increasing the potential for fraud, waste, or abuse. For example, GAO identified $105,117 in questionable expenditures at one school, including student trips to amusement parks and an airplane global positioning system.
Background

Each marketplace created under PPACA is intended to provide a seamless, single point of access for individuals to enroll in qualified health plans, apply for income-based financial subsidies established under the law and, as applicable, obtain an eligibility determination for other health coverage programs, such as Medicaid or the State Children’s Health Insurance Program (CHIP). To obtain health insurance offered through the marketplace, individuals must complete an application and meet certain eligibility requirements defined by PPACA, such as being a U.S. citizen or legal immigrant. For those consumers determined eligible, the marketplaces permit users to compare health plans and enroll in the plan of their choice. States had various options for marketplace participation, including (1) establishing their own state-based marketplace, (2) deferring to CMS to operate the federal marketplace in the state, or (3) participating in an arrangement called a partnership marketplace in which the state assists with some federal marketplace operations. In our June 2013 report on CMS efforts to establish the federal marketplace, we concluded that certain factors—such as the evolving scope of marketplace activities required in each state—suggested the potential for implementation challenges going forward. In commenting on a draft of that report, HHS emphasized the progress it had made since PPACA became law and expressed its confidence that marketplaces would be open and functioning in every state on October 1, 2013.

Timeline of Key Events

PPACA required the establishment of marketplaces in each state by January 2014.
Based on the expectation that individuals and families would need time to explore their coverage options and plan issuers would need time to process plan selections, HHS established October 1, 2013, as the beginning of the enrollment period for all marketplaces, including the federal marketplace. Figure 1 provides a timeline of key development, legal or regulatory, and organizational events during that development period, as well as future milestones through the beginning of open enrollment for 2015.

Healthcare.gov and Supporting Systems

The Healthcare.gov website is supported by several systems, including the FFM and the federal data services hub. Additional components include the Enterprise Identity Management System, which confirms the consumer’s identity when entering the system.

Healthcare.gov Website

Healthcare.gov is the Internet address of a federal government-operated website that serves as the online user interface for the federal marketplace. The website allows the consumer to create an account, input required information, view health care plan options, and make a plan selection.

FFM System

The FFM accepts and processes data entered through the website and was intended to provide three main functions:

Eligibility and enrollment. This module guides applicants through a step-by-step process to determine their eligibility for coverage and financial assistance, after which they are shown applicable coverage options and have the opportunity to enroll.

Plan management. This module interacts primarily with state agencies and health plan issuers. The module is intended to provide a suite of services for activities such as submitting, monitoring, and renewing qualified health plans.

Financial management. This module facilitates payments to issuers, including premiums and cost-sharing reductions, and collects data from state-based marketplaces.

Other FFM functions include services related to system oversight, communication and outreach strategies, and customer service.
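The three FFM modules described above can be summarized as a simple data model. The sketch below is purely illustrative: the identifiers, fields, and helper function are our own shorthand for the report's prose, not CMS code or systems.

```python
# Illustrative model of the three main FFM modules the report describes.
# All identifiers are hypothetical; this mirrors the prose, not CMS code.

FFM_MODULES = {
    "eligibility_and_enrollment": {
        "purpose": "guide applicants to an eligibility determination and enrollment",
        "interacts_with": ["applicants"],
    },
    "plan_management": {
        "purpose": "submit, monitor, and renew qualified health plans",
        "interacts_with": ["state agencies", "issuers"],
    },
    "financial_management": {
        "purpose": "facilitate premium and cost-sharing payments",
        "interacts_with": ["issuers", "state-based marketplaces"],
    },
}

def modules_for(party):
    """Return the modules a given party interacts with, per the description."""
    return [name for name, mod in FFM_MODULES.items()
            if party in mod["interacts_with"]]
```

For example, under this model issuers touch both the plan management and financial management modules, while applicants interact only with eligibility and enrollment.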
Federal Data Services Hub

The data hub routes and verifies information among the FFM and external data sources, including other federal and state sources of information and issuers. For example, the data hub confirms an applicant’s Social Security number with the Social Security Administration and connects to the Department of Homeland Security to assess the applicant’s citizenship or immigration status. The data hub’s connection with other federal and state databases enables exchanges to determine whether an applicant is eligible for or enrolled in some other type of health coverage, such as the Department of Defense’s (DOD) TRICARE program or Medicaid—and therefore ineligible for subsidies to offset the cost of marketplace plans. These subsidies include premium tax credits to offset qualified health plan premium costs and cost-sharing reductions to reduce policyholders’ out-of-pocket payments, including deductibles and co-payments, for covered services. The data hub also communicates with issuers by providing enrollment information and receiving enrollment confirmation in return. See figure 2 for an overview of Healthcare.gov and selected supporting systems.

Federal Implementation Costs

While CMS was tasked with oversight of marketplace establishment, several other federal agencies also have implementation responsibilities. Three agencies—CMS, the Internal Revenue Service (IRS), and the Department of Veterans Affairs (VA)—reported almost all of the IT-related obligations supporting the implementation of Healthcare.gov and its supporting systems. IT-related obligations include funds committed for the development or purchase of hardware, software, and system integration services, among other activities. These obligations totaled approximately $946 million from fiscal year 2010 through March 2014, with CMS obligating the majority of this total.
CMS Contracts and Task Orders for Healthcare.gov and Its Supporting Systems

As of March 2014, CMS reported obligating $840 million for the development of Healthcare.gov and its supporting systems, over 88 percent of the federal total. According to agency data, these obligations were spread across 62 contracts and task orders. We focused our review on two CMS task orders issued under an existing 2007 contract. The task orders were for the development of two core Healthcare.gov systems—the FFM and the data hub. We also reviewed a letter contract awarded by CMS in January 2014 to continue FFM development. The two task orders and the additional contract account for $369 million, or more than 40 percent, of the total CMS reported obligations as of March 2014.

Acquisition Process

The contract and task orders we examined are subject to the Federal Acquisition Regulation System, which provides uniform policies and procedures for acquisition by all executive agencies. The system includes the HHS acquisition regulation, which implements or supplements the FAR. HHS’s supplement to the FAR, which contains additional HHS policies and procedures, is referred to as the Department of Health and Human Services Acquisition Regulation (HHSAR). The FAR and HHSAR address issues pertaining to the contracting process and include activities related to three phases: pre-award, competition and award, and post-award. See figure 3 for an overview of these phases and selected activities related to each. To implement and oversee PPACA’s marketplace and private health insurance requirements, HHS established the Office of Consumer Information and Insurance Oversight (OCIIO) in April 2010 as part of the HHS Office of the Secretary. In January 2011, OCIIO moved to CMS and became the Center for Consumer Information and Insurance Oversight (CCIIO).
Within CMS, establishment of the federal marketplace was managed by CCIIO, with responsibilities shared with the Office of Information Services (OIS) and the Office of Acquisition and Grants Management (OAGM). HHS’s acquisition process for the data hub and FFM task orders involved multiple participants, including:

The contracting officer. The contracting officer has the authority to enter into, administer, and/or terminate contracts and make related determinations. The contracting officer is responsible for ensuring performance of all necessary actions for effective contracting, ensuring compliance with the terms of the contract, and safeguarding the interests of the United States in its contractual relationships.

The contracting officer’s representative (COR). The COR—also referred to as the contracting officer’s technical representative—is designated in writing by the contracting officer to perform specific technical or administrative functions. Unlike the contracting officer, a COR has no authority to make any commitments or changes that affect price, quality, quantity, delivery, or other terms and conditions of the contract and cannot direct the contractor or its subcontractors to operate in conflict with the contract terms and conditions.

The government task leader (GTL). The GTL is a representative of the program office who assists the COR and is responsible for day-to-day technical interaction with the contractor. The GTL is also responsible for monitoring technical progress, including the surveillance and assessment of performance, and performing technical evaluations as required, among other responsibilities.

Oversight Weaknesses and Lack of Adherence to Planning Requirements Compounded Acquisition Planning Challenges

CMS undertook the development of Healthcare.gov and its related systems without effective planning or oversight practices, despite facing a number of challenges that increased both the level of risk and the need for oversight.
According to CMS program and contracting officials, the task of developing a first-of-its-kind federal marketplace was a complex effort that was exacerbated by compressed time frames and changing requirements. CMS contracting officials explained that meeting project deadlines was a driving factor in a number of acquisition planning activities, such as the selection of a cost-reimbursement contract, the decision to proceed with the contract award process before requirements were stable, and the use of a new IT development approach. These actions increased contract risks, including the potential for cost increases and schedule delays, and required enhanced oversight. However, CMS did not use information available to provide oversight, such as quality assurance surveillance plans. CMS also missed opportunities to consider the full range of risks to the acquisition by not developing a written acquisition strategy, even though the agency was required to do so. As a result, key systems began development with risks that were not fully identified and assessed.

Acquisition Planning Activities Carried High Levels of Risk for the Government

Meeting project deadlines was a driving factor in a number of acquisition planning activities. HHS had 15 months between enactment of PPACA and the agency’s request for proposals to develop requirements for the FFM and data hub. In a prior report on acquisition planning at several agencies, including HHS, we found that the time needed to complete some pre-solicitation planning activities—such as establishing the need for a contract; developing key acquisition documents such as the requirements document, the cost estimate, and, if required, the acquisition plan; and obtaining the necessary reviews and approvals—could be more than 2 years.
The time needed depended on factors that were present for this acquisition, including complexity of the requirements, political sensitivity, and funding. CMS program officials noted challenges developing requirements for a complex, first-of-its-kind system in these compressed time frames and indicated that more time was needed. The FFM and data hub task orders were issued under an existing 2007 contract for enterprise system development. This approach was reasonable in these circumstances because, according to contracting officials, the task orders could be issued more quickly than using a full and open competitive approach. The 2007 contract had been awarded to 16 vendors who were then eligible to compete for individual task orders. The 2007 contract was specifically established to improve efficiency when new IT requirements arose—such as the federal marketplace development. The 16 eligible contractors had experience with CMS’s IT architecture and could come up to speed quickly. The solicitation for the 2007 contract sought contractors with experience in software design, development, testing, and maintenance in complex systems environments to provide a broad range of IT services including planning, design, development, and technical support, among others. Of the 16 eligible contractors, four responded with proposals for each system. CMS used a source selection process that considered both cost and non-cost factors. This type of source selection process is appropriate when it may be in the best interest of the agency to consider award to other than the lowest priced offer or the highest technically rated offer. In this case, the request for proposals indicated that cost and non-cost factors were weighted equally. The non-cost factors for technical evaluation included logical and physical design, project plan, and staffing plan, among others. In addition, CMS considered contractor past performance, but did not include that factor in the technical evaluation.
CMS determined that the selected contractors for both task orders offered the most advantageous combination of technical performance and cost.

Requirements for Developing the FFM System Were Not Well Defined When the Task Order Was Issued

The FAR requires that agencies ensure that requirements for services are clearly defined. In addition, in our August 2011 review of opportunities to build strong foundations for better services contracts, we found that well-defined requirements are critical to ensuring the government gets what it needs from service contractors. We also found that program and contracting officials at the four agencies we reviewed—which included HHS—noted that defining requirements can be a challenging part of acquisition planning and is a shared responsibility between program and contracting officials. Further, our March 2004 report on software-intensive defense acquisitions found that while requirements for a project can change at any point, officials must aggressively manage requirements changes to avoid a negative effect on project results, such as cost increases and schedule delays. In order to begin work quickly, CMS proceeded with the award process before FFM contract requirements, which included general technical requirements for system development, were finalized. For example, at the time the task order was issued, CMS did not yet know how many states would opt to develop their own marketplaces and how many would participate in the federally facilitated marketplace, or the size of their uninsured populations. CMS also had not completed rulemaking necessary to establish key marketplace requirements.
The statement of work for the FFM acknowledged a number of these unknown requirements, for example, stating that requirements for state support were not fully known and the FFM system “must be sufficiently robust to provide support of state exchange requirements at any point in the life cycle.” In addition, the FFM statement of work noted that the requirements related to a number of FFM services would be finalized after contract award, including services related to all three main functional areas—eligibility and enrollment, financial management, and plan management—as well as system oversight, communication, and customer service. The technical requirements for both the FFM and data hub were developed by CMS staff with contractor support and documented in a statement of work for each task order. Both statements called for the contractor to design a “solution that is flexible, adaptable, and modular to accommodate the implementation of additional functional requirements and services.” However, according to CMS program officials, requirements for data hub development were more clearly defined at the time that task order was issued than FFM requirements. These officials also stated that, prior to issuing the task order, CMS was able to develop a prototype for the data hub and a very clear technical framework to guide the contractor, but due to still-changing requirements, CMS could not provide the same guidance for FFM development. We have previously found that unstable requirements can contribute to negative contract outcomes, including cost overruns and schedule delays.

CMS Used a Contract Type That Carried Risk for the Government and Required Additional Oversight

Cost reimbursement contracts are considered high risk for the government, and agencies are expected to take steps to minimize the use of cost reimbursement contracts (FAR §16.301-2(a)(1) & (2)). While CMS’s use of the cost-plus-fixed-fee contract type may have been a reasonable choice under the circumstances, the related risks increased the need for oversight.
In our November 2007 report on internal control deficiencies at CMS, we found that certain contracting practices, such as the frequent use of cost reimbursement contracts, increased cost risks to CMS because CMS did not implement sufficient oversight for cost reimbursement contracts at that time. (See GAO, Centers for Medicare and Medicaid Services: Internal Control Deficiencies Resulted in Millions of Dollars of Questionable Contract Payments, GAO-08-54 (Washington, D.C.: Nov. 15, 2007). We made nine recommendations to the Administrator of CMS to improve internal control and accountability in the contracting process and related payments to contractors. All nine recommendations have been implemented.) For the FFM and data hub, CMS acknowledged the increased responsibilities and risks associated with managing a cost reimbursement contract and included a number of oversight elements in the task orders to support contract oversight and manage risks. These elements included contract deliverables such as earned value management reports, monthly financial and project status reports, and a quality assurance surveillance plan, which describes how the government will monitor that the supplies or services conform to contract requirements. However, we found that the quality assurance surveillance plans were not used to inform oversight. For example, contracting and program officials, including the COR and contracting officer, were not sure if the quality assurance surveillance plan had been provided as required by the FFM and data hub task orders. Although a copy was found by CMS staff in June 2014, officials said they were not aware that the document had been used to review the quality of the contractor’s work. Instead, CMS program officials said they relied on their personal judgment and experience to determine quality.

CMS Selected a New IT Development Approach to Save Time, but Increased Risks

In 2012, GAO reported on the use of Agile methods in the Federal government.
(See GAO, Software Development: Effective Practices and Federal Challenges in Applying Agile Methods, GAO-12-681 (Washington, D.C.: July 27, 2012). In this report we made one recommendation to the Federal CIO Council to encourage the sharing of these practices.) We reported that deviating from traditional procedural guidance to follow Agile methods was a challenge, and that new tools and training may be required, as well as updates to procurement strategies. Therefore, the new approach that CMS selected in order to speed work also carried its own implementation risks.

CMS Did Not Fully Adhere to HHS Acquisition Planning Requirements and Missed Opportunities to Capture and Consider Risks Important to the Program’s Success

While a number of CMS’s acquisition planning actions were taken in an effort to manage acquisition challenges, CMS missed opportunities to fully identify and mitigate the risks facing the program. HHS acquisition policy requires the development of a written acquisition strategy for major IT investments, such as the FFM system. According to HHS policy, an acquisition strategy documents the factors, approach, and assumptions that guide the acquisition with the goal of identifying and mitigating risks. HHS provides a specific acquisition strategy template that requires detailed discussion and documentation of multiple strategy elements, including market factors and organizational factors, among others. According to program officials, the acquisition planning process for the FFM and data hub task orders began in 2010, prior to HHS’s decision to move its Office of Consumer Information and Insurance Oversight (OCIIO) to CMS, and continued into early 2011. Program officials stated that the planning process included discussions of an acquisition strategy. However, CMS program and contracting staff did not complete the required acquisition strategy for FFM and data hub development.
According to contracting and program officials, CMS has not been preparing acquisition strategies for any of its major IT acquisitions, not just those related to systems supporting Healthcare.gov. This is a longstanding issue: in November 2009 we found deficiencies in CMS contract management internal control practices, such as the failure to follow existing policies and the failure to maintain adequate documentation in contract files. According to CMS contracting officials, CMS is planning steps to strengthen the agency’s program and project management, including training related to the acquisition strategy requirement. Contracting officials from OAGM explained that at CMS the majority of acquisition planning is done by the program office, and that OAGM began discussions of the upcoming task orders related to Healthcare.gov and its supporting systems with program officials in February 2011. In June 2011, OAGM accepted a Request for Contract package—a set of documents used to request and approve a contract action—from the program office. The package documents some elements of an acquisition strategy. Specifically, it indicated the type of contract to be used and the selected contract approach; however, the documents did not include the rationale for all decisions and did not address a number of planning elements required in the HHS acquisition strategy, such as organizational factors, technological factors, and logistics. In the absence of an acquisition strategy, key risks and plans to manage them were not captured and considered as required. The acquisition strategy provides an opportunity to highlight potential risk areas and identify ways to mitigate those risks. For example, the strategy guidance requires the consideration of organizational factors that include management and their capabilities, available staff and their skills, and risks associated with the organizational structure.
Organizational factors were a potential risk area for these projects because the CMS organizations responsible for the FFM and data hub experienced significant changes just prior to and during the planning period. Specifically, OCIIO was established in 2010 and integrated into CMS in January 2011, just prior to the beginning of planning discussions with OAGM. According to CMS contracting and program officials, some of the 246 OCIIO staff transitioned to the new CCIIO and others joined CMS’s Office of Information Services (OIS) and OAGM. In the context of these organizational changes and the other considerable project risks, the acquisition strategy could have been a powerful tool for risk identification and mitigation. By failing to adhere to this requirement, CMS missed opportunities to explain the rationales for acquisition planning activities and to fully capture and consider risks important to the success of the program.

Changing Requirements and Oversight Gaps Contributed to Significant Cost Growth, Schedule Delays, and Reduced Capabilities during FFM and Data Hub Development

CMS incurred significant cost increases, schedule slips, and reduced system functionality in the development of the FFM and data hub systems—primarily attributable to new and changing requirements exacerbated by inconsistent contract oversight. From September 2011 to February 2014, estimated costs for developing the FFM increased from an initial obligation of $56 million to more than $209 million; similarly, data hub costs increased from an obligation of $30 million to almost $85 million. New and changing requirements drove cost increases during the first year of development, while the complexity of the system and rework resulting from changing CMS decisions added to FFM costs in the second year. In addition, required design and readiness governance reviews were either delayed or held without complete information, and CMS did not receive required approvals.
Furthermore, inconsistent contractor oversight within the program office and unclear roles and responsibilities led CMS program staff to inappropriately authorize contractors to expend funds.

FFM and Data Hub Task Orders Experienced Significant Increases

Obligations for both the FFM and data hub rose significantly during the two-and-a-half-year development period, with the FFM task order increasing almost four-fold, from $55.7 million obligated when issued in late 2011 to more than $209 million obligated by February 2014. Similarly, the data hub task order almost tripled, increasing from $29.9 million to $84.5 million during the same period. Figure 4 shows FFM and data hub obligation growth during this time. Development cost increases for the FFM and data hub were due to a combination of factors, including costs associated with adding or changing requirements. For example, CMS was aware that a number of key business requirements for the FFM and data hub would not be known until after the task orders were issued in September 2011, and it acknowledged some of these uncertainties in the statements of work, such as noting that the actual number of states participating in the federal marketplace and the level of support each state required was not expected to be known until January 2013. We previously found in March 2004 that programs with complex software development experienced cost increases and schedule delays when they lacked controls over their requirements, noting that leading software companies found changing requirements tend to be a major cause of poor software development outcomes. Subsequent modifications to the FFM and data hub task orders show the costs associated with adding requirements beyond those initial uncertainties.
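As a quick arithmetic check of the growth characterizations above (the dollar amounts come from the report; the short script itself is only illustrative):

```python
# Growth multiples implied by the obligation figures in the report
# (dollar amounts in millions, September 2011 through February 2014).
FFM_INITIAL, FFM_FINAL = 55.7, 209.0
HUB_INITIAL, HUB_FINAL = 29.9, 84.5

ffm_multiple = FFM_FINAL / FFM_INITIAL   # about 3.75: "almost four-fold"
hub_multiple = HUB_FINAL / HUB_INITIAL   # about 2.83: "almost tripled"
```

The computed multiples are consistent with the report's "almost four-fold" and "almost tripled" descriptions.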
For example, CMS obligated an additional $36 million to the FFM and $23 million to the data hub in 2012, in large part to address requirements that were added during the first year of development, such as increasing infrastructure to support testing and production and adding a transactional database. Some of these new requirements resulted from regulations and policies that were established during this period. For example, in March 2012, federal rulemaking was finalized for key marketplace functions, resulting in the need to add services to support the certification of qualified health plans for partnership marketplace states. Other requirements emerged from stakeholder input, such as a new requirement to design and implement a separate server to process insurance issuers’ claims and enrollment data outside of the FFM. CMS program officials said that this resulted from health plan issuers’ concerns about storing proprietary data in the FFM. The FFM and data hub task orders were both updated to include this requirement in 2012, which was initially expected to cost at least $2.5 million.

System Complexities and Rework Further Added to FFM Costs in the Second Year

During the second year of development, from September 2012 to September 2013, the number of task order modifications and dollars obligated for the development of the FFM and data hub continued to increase. New requirements still accounted for a portion of the costs, but the second-year increases also reflected the previously unknown complexities of the original requirements and associated rework, particularly for the FFM. For example, according to the FFM contractor, one of the largest unanticipated costs came from CMS’s direction to purchase approximately $60 million in software and hardware that was originally expected to be provided by another Healthcare.gov contractor. Most of these costs were added through task order modifications in 2013.
In April 2013, CMS added almost $28 million to the FFM task order to cover work that was needed because of the increasingly complex requirements, such as additional requirements to verify income for eligibility determination purposes. The FFM contractor said some of these costs resulted from CMS’s decisions to start product development before regulations and requirements were finalized, and then to change the FFM design as the project was ongoing, which delayed and disrupted the contractor’s work and required them to perform rework. In addition, CMS decisions that appeared to be final were reopened, requiring work that had been completed by the contractor to be modified to account for the new direction. This included changes to various templates used in the plan management module and the application used by insurance issuers, as well as ongoing changes to the user interface in the eligibility and enrollment module. According to the FFM contractor, CMS changed the design of the user interface to match another part of the system after months of work had been completed, resulting in additional costs and delays. In November 2012, the contractor estimated that the additional work in the plan management module alone could cost at least $4.9 million. By contrast, CMS program officials explained that the data hub generally had more stable requirements than the FFM, in part due to its functions being less technically challenging and because CMS had had more time to develop the requirements. While the obligations for the data hub also increased at the same rate as the FFM in the first year of development, they did so to a lesser degree during the second year. According to the data hub contractor, these increases were due to CMS-requested changes in how the work was performed, which required additional services, as well as hardware and software purchases.
CMS Experienced Schedule Delays, Conducted Incomplete Governance Oversight Reviews, and Delayed Some Capabilities for the FFM and Data Hub

In addition to increased costs, the FFM and data hub experienced schedule delays, which contributed to CMS holding incomplete governance oversight reviews and eventually reduced the capabilities it expected the FFM contractor to produce by the October 1, 2013, deadline.

CMS Delayed Scheduled Governance Reviews, Reducing Time Available for FFM and Data Hub Testing and Implementation Reviews

CMS initially established a tight schedule for reviewing the FFM and data hub development in order to meet the October 1, 2013, deadline for establishing enrollment through the website. Each task order lists the key governance reviews that the systems were required to meet as they progressed through development. The FFM and data hub task orders initially required the contractors to be prepared to participate in most of the CMS governance reviews—including a project baseline and final detailed design reviews—within the first 9 months of the awards. This would allow CMS to hold the final review needed to implement the systems—operational readiness—at least 6 months before the Healthcare.gov launch planned for October 1, 2013. In April 2013, CMS extended the requirements analysis and design phase. According to CMS program officials, requirements were still changing and more time was needed to finalize the FFM design. As a result, CMS compressed time frames for conducting reviews for the testing and implementation phases. Under the revised schedule, the contractor had until the end of September 2013—immediately prior to the date of the planned launch—to complete the operational readiness review, leaving little time for any unexpected problems to be addressed despite the significant challenges the project faced. Figure 5 shows the schedule of planned and revised development milestone reviews in the FFM and data hub task orders.
The four reviews shown in figure 5—architecture, project baseline, final detailed design, and operational readiness—are among those required under the exchange life cycle framework, the governance model CMS specifically designed to meet the need to quickly develop the FFM and data hub using the Agile development approach. The life cycle framework requires technical reviews at key junctures in the development process, such as a final detailed design review to ensure that the design meets requirements before it is developed and tested. To accommodate different development approaches, the life cycle framework allows program offices leeway regarding how some reviews are scheduled and conducted, permitting more informal technical consultations when holding a formal review would cause delays. However, the framework requires that the four governance or milestone reviews be approved by a CMS governance board.

Some Governance Reviews Were Not Fully Conducted or Approved

Despite the revised FFM schedule, it is not clear that CMS held all of the governance reviews for the FFM and data hub or received the approvals required by the life cycle framework. The framework was developed to accommodate multiple development approaches, including Agile. A senior CMS program official said that although the framework was used as a foundation for their work, it was not always followed throughout the development process because it did not align with the modified Agile approach CMS had adopted. CMS program officials explained that they held multiple reviews within individual development sprints—the short increments in which requirements are developed and software is designed, developed, and tested to produce a building block for the final system. However, CMS program officials indicated that they were focused on responding to continually changing requirements, which led them to participate in some governance reviews without key information being available or steps completed.
Significantly, CMS held a partial operational readiness review for the FFM in September 2013, but development and testing were not fully completed and continued past this date. As a result, CMS launched the FFM system without the required verification that it met performance requirements. Furthermore, the life cycle framework states that CMS must obtain governance-board approval before the systems proceed to the next phase of development, but we did not see evidence that any approvals were provided. CMS records show that CMS held some governance reviews, such as design readiness reviews. However, the governance board’s findings identified outstanding issues that needed to be addressed in subsequent reviews, and the systems were not approved to move to the next stage of development. CMS Postponed Some FFM Capabilities to Meet Deadlines By March 2013, CMS recognized the need to extend the task orders’ periods of performance in order to allow more time for development. CMS contract documents from that time estimated that only 65 percent of the FFM and 75 percent of the data hub would be ready by September 2013, when development was scheduled to be completed. Recognizing that neither the FFM nor the data hub would function as originally intended by the beginning of the initial enrollment period, CMS made trade-offs in an attempt to provide necessary system functions by the October 1, 2013, deadline. Specifically, CMS prioritized the elements of the system needed for the launch, such as the FFM eligibility and enrollment module, and postponed the financial module, which would not be needed until post-enrollment. CMS also delayed elements such as the Small Business Health Options Program marketplace, initially until November 2013, and then until 2015. See figure 6 for the modules’ completion status as of the end of the task order in February 2014. 
In September 2013, CMS extended the amount of time allotted for development under the FFM and data hub task orders, which accounted for the largest modifications. The additional obligations—$58 million for the FFM and $31 million for the data hub—included some new elements, such as costs associated with increasing FFM capacity needed to support anticipated internet traffic, but our review of the revised statements of work shows that the additional funding was primarily for the time needed to complete development work rather than new requirements. After the FFM was launched on October 1, 2013, CMS took a number of steps to respond to system performance issues through modifications to the FFM task order. These efforts included adding more than $708,000 to the FFM task order to hire industry experts to assess the existing system and address system performance issues. CMS also greatly expanded the capacity needed to support internet users, obligating $1.5 million to increase capacity from 50 terabytes to 400 terabytes for the remainder of the development period. While CMS program officials said that the website’s performance improved, only one of the three key components specified in the FFM task order was completed by the end of the task order’s development period. (See figure 6.) According to program officials, the plan management module was complete, but only some of the elements of the eligibility and enrollment module were provided and the financial management module remained unfinished. Unclear Contract Oversight Responsibilities Exacerbated FFM and Data Hub Cost Growth We identified approximately 40 instances during FFM development in which CMS program staff inappropriately authorized contractors to expend funds totaling over $30 million because those staff did not adhere to established contract oversight roles and responsibilities. Moreover, CMS contract and program staff inconsistently used and reviewed contract deliverables on performance to inform oversight. 
CMS Staff Inappropriately Authorized Contractors to Expend Funds The FFM task order was modified in April 2013 to add almost $28 million to cover cost increases that had been inappropriately authorized by CMS program officials in 2012. This issue also affected the data hub task order, which had an estimated $2.4 million cost increase over the same period. In November 2012, the FFM contractor informed CMS of a potential funding shortfall due to work and hardware that CMS program officials had directed the contractor to provide. The FAR provides that the contracting officer is the only person authorized to change the terms and conditions of the contract. Further, other government personnel shall not direct the contractor to perform work that should be the subject of a contract modification. The federal standards for internal control also state that transactions and significant events need to be authorized and executed by people acting within the scope of their authority, to ensure that only valid transactions to commit resources are initiated. CMS documents show that the cost growth was the result of at least 40 instances in which work was authorized by various CMS program officials, including the government task leader (GTL)—who is responsible for day-to-day technical interaction with the contractor—and other staff with project oversight responsibilities, who did not have the authority to approve the work. This was done without the knowledge of the contracting officer or the contracting officer’s representative. This inappropriately authorized work included adding features to the FFM and data hub, changing designs in the eligibility and enrollment module, and approving the purchase of a software license. CMS later determined that the work was both necessary and within the general scope of the task order but the cost of the activities went beyond the estimated cost amount established in the order and thus required a modification. 
Inappropriate Authorizations Due to Unclear Oversight Responsibilities A senior CMS program official described a three-pronged approach to contract oversight that involved various CMS offices, including the COR and GTL in the program offices, and the contracting officer in OAGM. The COR and GTL were assigned overlapping responsibilities for monitoring the contractor’s technical performance, but CMS’s guidance to clarify their roles did not fully address the need to ensure that directions given to contractors were appropriate. CMS program officials said the guidance was issued in 2006, several years before the FFM and data hub task orders were issued. The guidance generally noted that CORs are responsible for financial and contractual issues while GTLs have day-to-day technical interactions with the contractors. However, the guidance did not clarify the limitations on CORs’ and GTLs’ authorities, such as not providing contractors with technical direction to perform work outside the scope of the contract. CMS program officials also described difficulties clarifying oversight responsibilities in organizations that were new to CMS, which contributed to the inappropriately authorized work. Program responsibilities were shared between CCIIO, which was primarily responsible for developing business requirements, and the information technology staff in OIS, where the GTL and COR were located. CCIIO was relatively new to CMS, having been incorporated shortly before the FFM and data hub task orders were issued. OIS program officials explained that CCIIO was not as experienced with CMS’s organization and did not strictly follow their processes, including for oversight. CMS documents show that there were concerns about inappropriate authorizations prior to the cost growth identified in late 2012, as officials in the OIS acquisition group had repeatedly cautioned other OIS and CCIIO staff about inappropriately directing contractors. 
Furthermore, CMS program officials said that CCIIO staff did not always understand the cost and schedule ramifications associated with the changes they requested. As the FFM in particular was in the phase of development in which complexities were emerging and multiple changes were needed, there were a series of individual directions that, in sum, exceeded the expected cost of the contract. As a result of the unauthorized directions to contractors, the CMS contracting officer had to react to ad hoc decisions made by multiple program staff that affected contract requirements and costs rather than directing such changes by executing a contract modification as required by the FAR. In April 2013, shortly after the inappropriate authorizations and related cost increases for the FFM and data hub task orders were identified, a senior contracting official at CMS sent instructions on providing technical directions to contractors to the program offices that had been involved in the authorizations and to CMS directors in general. Specifically, the program offices were reminded to avoid technical direction to contractors—particularly when there is an immediate need for critical functions—which might constitute unauthorized commitments by the government. This instruction has not been incorporated into existing guidance on the roles and responsibilities of the CORs and GTLs. CMS contracting and program officials also reported additional steps to bolster contract oversight such as reminding the FFM contractor not to undertake actions that result in additional costs outside of the statement of work without specific direction from the contracting officer. 
CMS Provided Inconsistent Oversight of Contract Performance It was not always clear which CMS officials were responsible for reviewing and accepting contractor deliverables, including items such as the required monthly status and financial reports and the quality assurance surveillance plan that aid the government in assessing the costs and quality of the contractor’s work. According to contracting officials, reviewing such deliverables helped to provide the additional oversight that cost-reimbursable task orders require per the FAR to reduce risks of cost growth. However, particularly in the first year of FFM development, contract documentation shows repeated questions about who was responsible for reviewing the deliverables and difficulties finding the documents. Both task orders were ultimately modified to require that deliverables be provided to the contracting officer, who had previously just been copied on transmittal letters, in addition to the program office. In September 2012, the COR oversight function transferred to the acquisition group within CMS’s OIS and a new COR was assigned to manage both the FFM and data hub task orders. A CMS program official explained that the acquisition group typically fulfills the COR role for CMS contracts and that it had been unusual for those functions to be provided by another office. Upon assuming oversight responsibilities, the new COR could not locate a complete set of FFM and data hub deliverables and the original COR was unable to provide them. Instead, the new COR had to request all monthly status and financial reports directly from the contractors. When the new COR began reviewing the reports in the fall of 2012, he said he noticed that the FFM contractor had not been projecting the burn rate, a key measure that shows how quickly money is being spent. 
The COR asked the contractor to provide the figures in November 2012, at which point the cost growth was identified, even though the contract had been modified in August 2012 to add almost $36 million to the task order. We found that the burn rate was not included in earlier reports, but its absence had gone unnoticed due to ineffective contract oversight. In November 2007, we had found internal control deficiencies at CMS related to the inadequate review of contractor costs. CMS Identified Significant Contractor Performance Issues for the FFM Task Order but Took Limited Action CMS took limited action to address significant FFM contractor performance issues as the October 1, 2013, deadline for establishing enrollment through the website neared, and ultimately hired a new contractor to continue FFM development. Late in the development process, CMS became increasingly concerned with CGI Federal’s performance. In April and November 2013, CMS provided written concerns to CGI Federal regarding its responsiveness to CMS’s direction and FFM product quality issues. In addition, in August 2013, CMS was prepared to take action to address the contractor’s performance issues that could have resulted in withholding of fee; however, CMS ultimately decided to work with CGI Federal to meet the deadline. CMS contracting and program officials stated that the contract limited them to only withholding fee as a result of rework. Ultimately, CMS declined to pay only about $267,000 of requested fee. This represented about 2 percent of the $12.5 million in fee paid to CGI Federal. Rather than pursue the correction of performance issues with CGI Federal, in January 2014 CMS awarded a new one-year contract to Accenture Federal Services for $91 million to continue FFM development. This work also has experienced cost increases due to new requirements and other enhancements, with costs increasing to over $175 million as of June 2014. 
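The burn-rate measure described above lends itself to a simple projection: dividing the funds remaining on an order by the recent average monthly spend indicates how long the obligated funding will last. The sketch below is illustrative only; the function and all figures are hypothetical and do not represent CMS's actual method or data.

```python
# Hypothetical burn-rate projection for a cost-reimbursable task order.
# All dollar figures (in millions) are invented for illustration.

def months_of_funding_left(obligated, costs_to_date, monthly_costs):
    """Project months of funding remaining at the recent average burn rate."""
    burn_rate = sum(monthly_costs) / len(monthly_costs)  # average monthly spend
    remaining = obligated - costs_to_date                # unspent obligations
    return remaining / burn_rate

# Illustrative numbers: $93.7M obligated, $70M spent, recent spend near $8M/month.
months = months_of_funding_left(obligated=93.7, costs_to_date=70.0,
                                monthly_costs=[7.5, 8.0, 8.5])
print(round(months, 1))  # → 3.0, i.e., roughly three months of funding left
```

Tracking this projection each month is what flags a looming shortfall before the contractor runs out of funds, which is why the burn rate's absence from the earlier reports mattered.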
CMS Deemed Early Contractor Performance Satisfactory and Took Limited Action to Address Significant Contractor Performance Issues as the Deadline Neared CMS generally found CGI Federal and QSSI’s performance to be satisfactory in September 2012, at the end of the first year of development. CMS noted some concerns related to FFM contractor performance, such as issues completing development and testing on time; however, CMS attributed these issues to the complexity of the FFM and CMS’s changing requirements and policies. Further, according to program officials, during the first year of FFM development, few defined products were to be delivered as requirements and the system’s design were being finalized. For example, as previously identified in this report, under the revised FFM development schedule the final detailed design review for the FFM—a key development milestone review to ensure that the design meets requirements before it is developed and tested—was delayed from June 2012 to March 2013. Therefore, CMS had limited insight into the quality of CGI Federal’s deliverables during the first year as development and testing of certain FFM functionality had not yet been completed. CMS found QSSI’s performance satisfactory in September 2012. CMS program officials told us that they did not identify significant contractor performance issues during data hub development, and that the data hub generally worked as intended when Healthcare.gov was launched on October 1, 2013. CMS Identified Significant FFM Contractor Performance Issues as the Deadline Approached, but CMS Opted Against Taking Remedial Contractual Actions at That Time During the second year of development, which began in September 2012, CMS identified significant FFM contractor performance issues as the October 1 deadline approached (see figure 7). 
In April 2013, CMS identified concerns with CGI Federal’s performance, including not following CMS’s production deployment processes and failing to meet established deadlines, as well as continued communication and responsiveness issues. To address these issues, the contracting officer’s representative (COR) sent an email to CGI Federal outlining CMS’s concerns and requesting that CGI Federal provide a plan for correcting the issues moving forward. CMS accepted CGI Federal’s mitigation plan. The plan included changes, according to CGI Federal officials, to accommodate CMS’ communication practices, which CGI Federal believed to be the root cause of some of the CMS-identified issues. CMS contracting officials said that they were satisfied with CGI Federal’s overall mitigation approach, which seemed to address the performance issues that CMS had identified at that time. According to CMS program officials, they grew increasingly concerned with CGI Federal’s performance late in the development process in June and July 2013 as the scheduled launch date approached. Specifically, CMS program officials identified concerns with FFM technical and code quality during early testing of the enrollment process. The initial task order schedule had called for the FFM’s development and test phase to be complete by this point, but these efforts were delayed in the revised schedule. CMS program officials explained that they identified issues such as inconsistent error handling, timeouts, and pages going blank. Overall, more than 100 defects were identified, which resulted in delays while CGI Federal worked to correct them. According to CGI Federal officials, the code reflected the instability of requirements at that time. However, once requirements were more stable, after October 2013, the contractor was able to quickly make improvements to the FFM’s performance. 
In August 2013, CMS contracting and program officials decided to take formal action to address their concerns with CGI Federal’s performance by drafting a letter to the contractor. Specifically, CMS identified concerns with the contractor’s code quality, testing, failure to provide a key deliverable, and scheduled releases not including all agreed upon functionality. The letter further stated that CMS would take aggressive action, such as withholding fee in accordance with the FAR, if CGI Federal did not improve or if additional concerns arose. However, the contracting officer withdrew the letter one day after it was sent to CGI Federal, after being informed that the CMS Chief Operating Officer preferred a different approach. CMS contracting and program officials told us that, rather than pursue the correction of performance issues, the agency determined that it would be better to collaborate with CGI Federal in completing the work needed to meet the October 1, 2013, launch. CMS contracting officials told us that the agency did not subsequently take any remedial actions to address the issues outlined in the August 2013 letter. By early September 2013, CMS program officials told us that they became so concerned about the contractor’s performance that CMS program staff moved their operations to CGI Federal’s location in Herndon, Virginia to provide on-site direction leading up to the FFM launch. CMS had identified issues such as deep-rooted problems with critical software defects during testing and demonstration of the product and CGI Federal’s inability to perform quality assurance adequately including full testing of software. According to CMS program officials, CMS staff members worked on-site with CGI Federal for several weeks to get as much functionality available by October 1, 2013, as possible, deploying fixes and new software builds daily. 
CMS Took Some Actions to Hold the FFM Contractor Accountable after the Healthcare.gov Launch After the Healthcare.gov launch on October 1, 2013, CMS contracting officials began preparing a new letter detailing their concerns regarding contractor performance which was sent to CGI Federal in November 2013. In its letter, CMS stated that CGI Federal had not met certain requirements of the task order statement of work, such as FFM infrastructure requirements including capacity and infrastructure environments, integration, change management, and communication issues—some of which had been previously expressed in writing to CGI Federal. In addition, CMS stated that some of these issues contributed to problems that Healthcare.gov experienced after the October 1, 2013 launch. CMS’s letter also requested that CGI Federal provide a plan to address these issues. CGI Federal responded in writing, stating that it disagreed with CMS’s assertion that CGI Federal had not met the requirements in the FFM statement of work. In its letter, CGI Federal stated that delays in CMS’s establishment and finalization of requirements influenced the time available for development and testing of the FFM. CGI Federal further stated that disruptions to its performance as a result of delays in finalizing requirements were compounded by the scheduled launch date, which resulted in CMS reprioritizing tasks and compressing time frames to complete those tasks. CGI Federal officials said they did not provide a formal plan for addressing CMS’s concerns because they regarded them as unfounded, but agreed to work with CMS to avoid future issues and optimize the FFM’s performance. In addition, after the October 1, 2013, launch, CMS contracting officials told us that they provided additional FFM oversight by participating in daily calls with CGI Federal on the stability of the FFM and the status of CGI Federal’s work activities. 
Contracting officials told us that the increased oversight of FFM development helped to fix things more quickly. Further, the COR increasingly issued technical direction letters, which provide supplementary guidance to contractors regarding tasks contained in their statements of work or change requests, to clarify tasks included in the FFM statement of work and focus CGI Federal’s development efforts. For example, the COR issued technical direction letters to CGI Federal in October 2013, directing CGI Federal to follow the critical path for overall performance improvement of the FFM, purchase software licenses, and collaborate with other stakeholders, among other things. According to program officials, written technical direction letters issued by the COR had more authority than technical direction provided by the GTL. CMS Declined to Pay FFM Contractor Fee for Rework Under the cost-plus-fixed-fee arrangement used for the FFM task order, the government pays the contractor’s allowable costs, plus an additional fee that was negotiated at the time of award. This means that despite issues with CGI Federal’s performance, including CGI Federal’s inability to deliver all functionality included in the FFM statement of work, CMS was required to pay CGI Federal for allowable costs under the FFM task order. CGI Federal’s task order provides that, if the services performed do not conform with contract requirements, the government may require the contractor to perform the services again for no additional fee. If the work cannot be corrected by re-performance, the government may, by contract or otherwise, perform the services and reduce the contractor’s fee by an amount that is equitable under the circumstances, or the government may terminate the contract for default. Even though CMS was obligated to pay CGI Federal’s costs for the work it had performed for the FFM, CMS contracting and program officials said they could withhold only the portion of the contractor’s fee that it calculated was associated with rework to resolve FFM defects. 
Ultimately, CMS declined to pay about $267,000 of the fixed fee requested by CGI Federal. This is approximately 2 percent of the $12.5 million in fixed fee that CMS paid to CGI Federal. Officials from CGI Federal said that they disagreed with the action, that the CMS decisions were not final, and that they could reclaim the fee by supplying additional information. CMS contracting and program officials told us that it was difficult to distinguish rework from other work. For example, program officials explained that it was difficult to isolate work that was a result of defects versus other work that CGI Federal was performing, and then calculate the corresponding portion of fee to withhold based on hours spent correcting defects. Contractor’s Total Fee Increased during Development Through each contract modification, as CMS increased the cost of development, it also negotiated additional fixed fee for the FFM and data hub contractors. Under the original award of $55.7 million, CGI Federal would have received over $3.4 million in fee for work performed during the development period. As of February 2014, when CMS had obligated over $209 million for the FFM effort, CMS had negotiated fee increases making CGI Federal eligible to receive more than $13.2 million in fee. As of June 2014, CMS had paid CGI Federal $12.5 million in fee. Likewise, CMS negotiated additional fixed fee for the data hub task order; QSSI’s eligible fee rose from over $716,000 under the original $29.9 million award to more than $1.3 million for work performed through February 2014. Costs Continue to Increase with New FFM Contractor Rather than pursue the correction of performance issues and continue FFM development with CGI Federal, CMS determined that its best chance of delivering the system and protecting the government’s financial interests would be to award a new contract to another vendor. 
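The fee figures above can be checked with simple arithmetic. The sketch below uses only the rounded amounts stated in this report; the observation that the fee-to-cost ratio stayed near 6 percent is our own arithmetic on those figures, not a finding stated by CMS.

```python
# Arithmetic check of the fee figures reported for the FFM task order.
withheld = 267_000            # fee CMS declined to pay
fee_paid = 12_500_000         # fixed fee CMS paid to CGI Federal
print(f"withheld share: {withheld / fee_paid:.1%}")           # → 2.1%, about 2 percent

# The negotiated fee grew roughly in proportion to cost growth:
# the fee-to-cost ratio was similar at award and at the end of development.
original_fee, original_cost = 3_400_000, 55_700_000           # original award
final_fee, final_cost = 13_200_000, 209_000_000               # as of February 2014
print(f"ratio at award: {original_fee / original_cost:.1%}")  # → 6.1%
print(f"ratio in 2014:  {final_fee / final_cost:.1%}")        # → 6.3%
```

In other words, each modification that added cost also added fixed fee at roughly the same rate as the original award.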
In January 2014, CMS awarded a one-year sole source contract (cost-plus-award-fee) with an estimated value of $91 million to Accenture Federal Services to transition support of the FFM and continue the FFM development that CGI Federal was unable to deliver. CMS’s justification and approval document for the new award states that the one-year contract action is an interim, transitory solution to meet CMS’s immediate and urgent need for specific FFM functions and modules—including the financial management module. This work has also experienced cost increases. Figure 8 shows increases in obligations for the Accenture Federal Services contract since award in January 2014. The financial management module of the FFM includes the services necessary to spread risk among issuers and to accomplish financial interactions with issuers. Specifically, this module tracks eligibility and enrollment transactions and subsidy payments to insurance plans, integrates with CMS’s existing financial management system, provides financial accounting and outlook for the entire program, and supports the reconciliation calculation and validation with IRS. According to the CMS justification and approval document, CMS estimated that it would cost $91 million over a one-year period for Accenture Federal Services to complete the financial management module and other FFM enhancements. As of June 5, 2014, the one-year contract had been modified six times since contract award and CMS had obligated more than $175 million as a result of new requirements, changes to existing requirements, and new enhancements. For example, CMS modified the contract to incorporate additional work requirements and functionality related to the Small Business Health Options Program marketplace, state-based marketplace transitions, and hardware acquisition. 
CMS had yet to fully define requirements for certain FFM functionality, including the financial management module, when the new contract to continue FFM development was awarded in January 2014. Accenture Federal Services representatives told us that while they had a general understanding of requirements at the time of award, their initial focus during the period January through April 2014 was on transitioning work from the incumbent contractor and clarifying CMS’s requirements. Accenture Federal Services representatives attributed contract increases during this period to their increased understanding of requirements, as well as clarifying additional activities requested under the original contract. Further, although the justification and approval document stressed that delivery of the financial management module was needed by mid-March 2014, contracting and program officials explained that time frames for developing the module were extended post-award, and as of June 2014, the financial management module was still under development. Financial management module functionality is currently scheduled to be implemented in increments from June through December 2014. Conclusions CMS program and contracting staff made a series of planning decisions and trade-offs that were aimed at saving time, but which carried significant risks. While optimum use of acquisition planning and oversight was needed to define requirements, develop solutions, and test them before launching Healthcare.gov and its supporting systems, the efforts by CMS were plagued by undefined requirements, the absence of a required acquisition strategy, confusion in contract administration responsibilities, and ineffective use of oversight tools. 
In addition, while potentially expedient, CMS did not adhere to the governance model designed for the FFM and data hub task orders, resulting in an ineffectual governance process in which scheduled design and readiness reviews were either diminished in importance, delayed, or skipped entirely. By combining that governance model with a new IT development approach the agency had not tried before, CMS added even more uncertainty and potential risk to its process. The result was that problems were not discovered until late, and only after costs had grown significantly. As FFM contractor performance issues were discovered late in development, CMS increasingly faced a choice of whether to stop progress and pursue holding the contractor accountable for poor performance or devote all its efforts to making the October deadline. CMS chose to proceed with pursuing the deadline. After October 1, 2013, CMS decided to replace the contractor, but in doing so had to expend additional funds to complete essential FFM functions. Ultimately, more money was spent to get less capability. Meanwhile, CMS faces continued challenges to define requirements and control costs to complete development of the financial management module in the FFM. Unless CMS takes action to improve acquisition oversight, adhere to a structured governance process, and enhance other aspects of contract management, significant risks remain that upcoming open enrollment periods could encounter challenges. Recommendations for Executive Action In order to improve the management of ongoing efforts to develop the federal marketplace, we recommend that the Secretary of Health and Human Services direct the Administrator of the Centers for Medicare & Medicaid Services to take the following five actions: Take immediate steps to assess the causes of continued FFM cost growth and delayed system functionality and develop a mitigation plan designed to ensure timely and successful system performance. 
Ensure that quality assurance surveillance plans and other oversight documents are collected and used to monitor contractor performance. Formalize existing guidance on the roles and responsibilities of contracting officer representatives and other personnel assigned contract oversight duties, such as government task leaders, and specifically indicate the limits of those responsibilities in terms of providing direction to contractors. Provide direction to program and contracting staff about the requirement to create acquisition strategies and develop a process to ensure that acquisition strategies are completed when required and address factors such as requirements, contract type, and acquisition risks. Ensure that information technology projects adhere to requirements for governance board approvals before proceeding with development. Agency Comments, Third-Party Views, and Our Evaluation We provided a draft of this product to the Department of Health and Human Services and the Centers for Medicare & Medicaid Services for review and comment. In its written comments, which are reprinted in appendix III, HHS concurred with four of our five recommendations and described the actions CMS is taking to improve its contracting and oversight practices. HHS partially concurred with our recommendation that CMS assess the causes of continued FFM cost growth. The agency says that CMS already has assessed the reasons for cost growth under the CGI Federal task order and that any increase in costs since the contract with Accenture Federal Services for continued development of the FFM was finalized is attributable to additional requirements, not cost overruns. We recognize that much of the increase in costs under the Accenture Federal Services contract is due to new requirements or enhancements. 
Nevertheless, based on our review of the contract modifications, not all the increase in costs from $91 million to more than $175 million, when measured from the initial projection, is attributable to new requirements. For example, as CMS stated in its comments, after additional analysis CMS determined a $30 million cost increase was needed to complete the contract’s original scope of work. We continue to believe that a further assessment is needed to ensure that costs as well as requirements are under control and that the development of the FFM is on track to support the scheduled 2015 enrollment process. All three contractors, as well as HHS, provided additional technical comments, which we incorporated in the report where appropriate. We are sending copies of this report to the Secretary of Health and Human Services and the Administrator of the Centers for Medicare & Medicaid Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov/. If you or your staff have any questions about this report, please contact William T. Woods at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines selected contracts and task orders central to the development and launch of the Healthcare.gov website by assessing (1) Centers for Medicare & Medicaid Services (CMS) acquisition planning activities; (2) CMS oversight of cost, schedule, and system capability changes; and (3) actions taken by CMS to identify and address contractor performance issues. To address these objectives, we used various information sources to identify CMS contracts and task orders related to the information technology (IT) systems supporting the Healthcare.gov website. 
Specifically, we reviewed data in the Federal Procurement Data System-Next Generation, which is the government’s procurement database, to identify CMS contracts and task orders related to the IT systems supporting the Healthcare.gov website and amounts obligated for fiscal year 2010 through March 2014. In addition, we reviewed CMS-provided data on the 62 contracts and task orders related to the IT systems supporting the Healthcare.gov website and amounts obligated as of March 2014. To select contracts and task orders to include in our review, we analyzed Federal Procurement Data System-Next Generation and CMS data to identify contracts and task orders that represent large portions of spending for Healthcare.gov and its supporting systems. We then selected one contract and two task orders, the latter issued under an existing 2007 contract, and interviewed contracting officials in CMS’s Office of Acquisition and Grants Management and program officials in CMS’s Office of Information Services to confirm that these contracts are central to development of Healthcare.gov and its supporting systems. The contract and task orders combined accounted for more than 40 percent of the total CMS-reported obligations related to the development of Healthcare.gov and its supporting systems as of March 2014. Specifically, we selected the task orders issued in September 2011 to CGI Federal Inc. (CGI Federal) for the development of the federally facilitated marketplace (FFM) system and to QSSI, Inc. (QSSI) for the development of the federal data services hub (data hub), and the contract awarded to Accenture Federal Services in January 2014 to continue FFM development and enhance existing functionality.
To describe federal implementation costs for Healthcare.gov and its supporting systems, we interviewed program officials and obtained relevant documentation to identify eight agencies that reported IT-related obligations or used existing contracts and task orders or operating budgets to support the development and launch of the Healthcare.gov website. These eight agencies are the Centers for Medicare & Medicaid Services (CMS), Internal Revenue Service (IRS), Social Security Administration, Veterans Administration (VA), Peace Corps, Office of Personnel Management, Department of Defense (DOD), and Department of Homeland Security. We then obtained and analyzed various types of agency-provided data to identify overall IT-related costs for Healthcare.gov and its supporting systems. Three agencies (CMS, IRS, and VA) reported almost all of the IT-related obligations supporting the implementation of Healthcare.gov and its supporting systems as of March 2014. We performed data reliability checks on contract obligation data provided by these three agencies, such as checking the data for obvious errors and comparing the total amount of funding obligated for each contract and task order as reported by each agency to data on contract obligations in Federal Procurement Data System-Next Generation or USASpending.gov. We found that these data were sufficiently reliable for the purpose of this report.

To assess CMS acquisition planning activities, we reviewed the Federal Acquisition Regulation (FAR) and relevant Department of Health and Human Services (HHS) and CMS policies and guidance. We also evaluated contract file documents for the three selected contracts and task orders, including acquisition planning documentation, requests for proposals, statements of work, cost estimates, and technical evaluation reports, to determine the extent to which CMS’s acquisition planning efforts met FAR and HHS/CMS requirements.
In assessing CMS’s acquisition planning efforts, we looked for instances where CMS took steps to mitigate acquisition program risks during the acquisition planning phase, including the choice of contract type and source selection methodology. In addition, we interviewed CMS contracting and program officials to gain a better understanding of the acquisition planning process for the selected contracts and task orders, including the rationale for choosing the selected contract type and the analysis conducted to support the source selection process. We also reviewed prior GAO reports on CMS contract management to assess the extent to which CMS’s acquisition planning activities addressed issues previously identified by GAO.

To assess CMS oversight of cost, schedule, and system capability changes, we analyzed contract file documents for the one selected contract and two task orders. As part of our assessment of the selected contracts and task orders, we reviewed contract modifications, contractor monthly status and financial reports, statements of work, contractor deliverables, schedule documentation, contracting officer’s representative files, and meeting minutes to determine if there were any changes and whether system development proceeded as scheduled. We performed a data reliability check of cost data for selected contracts and task orders by comparing contract modification documentation to contract obligation data in Federal Procurement Data System-Next Generation. To evaluate the extent to which CMS adhered to its governance process, we compared the governance model the agency intended to guide the design, development, and implementation of Healthcare.gov and its supporting systems to the development process the agency actually used for the FFM and data hub. We also obtained and analyzed documentation from governance reviews to identify the date and content of the reviews to determine if key milestone reviews were held in accordance with the development schedule.
In addition, we reviewed FAR and federal standards for internal control for contract oversight to evaluate the extent to which CMS’s approach to contract oversight for the selected contracts and task orders met FAR and federal internal control standards. We interviewed CMS contracting and program officials to gain a better understanding of FFM and data hub cost, schedule, and system capabilities, and to obtain information on the organization and staffing of offices and personnel responsible for performance monitoring for selected contracts and task orders. We also interviewed contractors to obtain their perspective on CMS’s oversight of cost, schedule, and system capabilities. Further, as part of our assessment of CMS’s development approach for the FFM and data hub, we reviewed prior GAO work regarding information technology and development. To assess actions taken by CMS to identify and address contractor performance issues, we reviewed relevant FAR and HHS guidance for contract monitoring and inspection of services to identify steps required for selected contracts and task orders and recourse options for unsatisfactory performance. In addition, we obtained and analyzed contract file documentation including contracting officer’s representative files, contractor deliverables, contractor monthly status and financial reports, contractor performance evaluations, and meeting minutes to determine the extent to which performance was reported and what steps, if any, were taken to address any issues. To determine contractor fee not paid during development, we obtained and analyzed CMS contractor invoice logs and contract payment notifications. We also interviewed CMS contracting and program officials to obtain additional information regarding contractor performance and actions taken by CMS, if any, to address contractor performance issues. 
We conducted this performance audit from January 2014 to July 2014, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Cumulative Cost Increases for the Task Orders for Developing the Federally Facilitated Marketplace System and Federal Data Services Hub

Federally Facilitated Marketplace System (FFM) task order, issued to CGI Federal:

Issuance (total obligations $55,744,082): FFM task order issued to CGI Federal.

Modification 1 (total $91,515,772): Obligates an additional $35.8 million, primarily to provide for new and increased system requirements resulting from program office decisions and finalized regulations.

Modification 2 (total $91,515,772): No-cost modification for administrative purposes, including identifying a new contracting officer’s representative.

Modification 3 ($27,688,008 obligated; total $119,203,779): Obligates an additional $27.7 million needed to avert a potential cost overrun. The funding supports an increased level of effort to add system functionality not included in the statement of work and increased infrastructure needs.

Modification 4 ($474,058 obligated; total $119,677,837): Obligates approximately $474,000 for additional infrastructure requirements, specifically requirements for the content delivery network that delivers web services.

Modification 5 ($58,143,472 obligated; total $177,821,309): Extends the period of performance for FFM development until February 28, 2014, and obligates an additional $58.1 million, primarily to support the extension.

Modification 6 ($18,215,807 obligated; total $196,037,116): Obligates an additional $18.2 million to purchase a software license.

Modification 7 (total $196,037,116): Change order directing the contractor to develop and implement an identity management software solution.

Modification 8 ($1,479,309 obligated; total $197,516,425): Obligates $1.5 million to increase capacity of the content delivery network from 50 terabytes to 400 terabytes.

Modification 9 ($6,981,666 obligated; total $204,498,091): Obligates $7.0 million to definitize the change order issued under Modification 7. It also funds software licenses and the industry experts hired to improve system performance.

Modification 10 (total $204,498,091): Change order directing the contractor to begin transitioning services to a new contractor.

Modification 11 ($5,133,242 obligated; total $209,631,333): Obligates $4.8 million to definitize the change order issued under Modification 10 and fund post-transition consulting services through April 30, 2014.

Federal data services hub (data hub) task order, issued to QSSI:

Issuance (total obligations $29,881,693): Data hub task order issued to QSSI.

Modification 1 (($4,180,786) de-obligated; total $25,700,907): Cancels a stop-work order that was issued due to a GAO bid protest and directs the contractor to continue performance of the task order. Obligations are reduced by $4.2 million in accordance with the contractor’s revised task order proposal (submitted as part of the bid protest process).

Modification 2 (total $48,717,984): Obligates an additional $23.0 million, primarily to provide for new and increased system requirements resulting from program office decisions and finalized regulations.

Modification 3 (total $48,717,984): No-cost modification for administrative purposes, including identifying a new contracting officer’s representative.

Modification 4 (total $53,709,598): Obligates $5.0 million to fund an electronic data interchange tool and related labor to support enrollment services.

Modification 5 (total $84,527,128): Extends the period of performance for data hub development until February 28, 2014, and obligates an additional $30.8 million, primarily to support the extension.

Modification 6 (total $84,527,128): No-cost modification to transfer funds among contract line items and revise personnel.

Modification 7 (total $99,657,839): Exercises option year 1: Operations and Maintenance.
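A minimal sketch (not part of the report) of how the per-modification dollar amounts relate to the cumulative obligation totals listed above: differencing adjacent cumulative figures recovers each modification's net obligation. The figures below are the data hub cumulative totals from issuance through the end of development.

```python
# Cumulative data hub obligations, in dollars: issuance, then each
# subsequent modification through the end of development.
cumulative = [29_881_693, 25_700_907, 48_717_984, 48_717_984,
              53_709_598, 84_527_128, 84_527_128]

# Net amount obligated (or de-obligated) by each modification.
increments = [later - earlier
              for earlier, later in zip(cumulative, cumulative[1:])]
# increments: [-4180786, 23017077, 0, 4991614, 30817530, 0]
# e.g. the first entry is the $4.2 million reduction under Modification 1,
# and the second is the "additional $23.0 million" under Modification 2.
```

The two zero entries correspond to the no-cost administrative modifications, and the total of the increments equals the growth from the $29.9 million issuance to the $84.5 million development total.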
Appendix III: Comments from the Department of Health and Human Services

Appendix IV: GAO Contact and Staff Acknowledgments

In addition to the contact named above, W. William Russell, Assistant Director; Jennifer Dougherty; Elizabeth Gregory-Hosler; Andrea Yohe; Susan Ditto; Julia Kennon; John Krump; Ken Patton; Roxanna Sun; and Kevin Walsh made key contributions to this report.
In March 2010, the Patient Protection and Affordable Care Act required the establishment of health insurance marketplaces by January 1, 2014. Marketplaces permit individuals to compare and select insurance plans offered by private insurers. For states that elected not to establish a marketplace, CMS was responsible for developing a federal marketplace. In September 2011, CMS contracted for the development of the FFM, which is accessed through Healthcare.gov. When initial enrollment began on October 1, 2013, many users encountered challenges accessing and using the website. GAO was asked to examine various issues surrounding the launch of the Healthcare.gov website. Several GAO reviews are ongoing. This report assesses, for selected contracts, (1) CMS acquisition planning activities; (2) CMS oversight of cost, schedule, and system capability changes; and (3) CMS actions to address contractor performance. GAO selected two task orders and one contract that accounted for 40 percent of CMS spending and were central to the website. For each, GAO reviewed contract documents and interviewed CMS program and contract officials as well as contractors. The Centers for Medicare & Medicaid Services (CMS) undertook the development of Healthcare.gov and its related systems without effective planning or oversight practices, despite facing a number of challenges that increased both the level of risk and the need for effective oversight. CMS officials explained that the task of developing a first-of-its-kind federal marketplace was a complex effort with compressed time frames. To be expedient, CMS issued task orders to develop the federally facilitated marketplace (FFM) and federal data services hub (data hub) systems when key technical requirements were unknown, including the number and composition of states to be supported and, importantly, the number of potential enrollees. 
CMS used cost-reimbursement contracts, which created additional risk because CMS is required to pay the contractor's allowable costs regardless of whether the system is completed. CMS program staff also adopted an incremental information technology development approach that was new to CMS. Further, CMS did not develop a required acquisition strategy to identify risks and document mitigation strategies and did not use available information, such as quality assurance plans, to monitor performance and inform oversight. CMS incurred significant cost increases, schedule slips, and delayed system functionality for the FFM and data hub systems due primarily to changing requirements that were exacerbated by oversight gaps. From September 2011 to February 2014, FFM obligations increased from $56 million to more than $209 million. Similarly, data hub obligations increased from $30 million to nearly $85 million. Because of unclear guidance and inconsistent oversight, there was confusion about who had the authority to approve contractor requests to expend funds for additional work. New requirements and changing CMS decisions also led to delays and wasted contractor efforts. Moreover, CMS delayed key governance reviews, moving an assessment of FFM readiness from March to September 2013—just weeks before the launch—and did not receive required approvals. As a result, CMS launched Healthcare.gov without verification that it met performance requirements. Late in the development process, CMS identified major performance issues with the FFM contractor but took only limited steps to hold the contractor accountable. In April and November 2013, CMS provided written concerns to the contractor about product quality and responsiveness to CMS direction. In September 2013, CMS program officials became so concerned about the contractor's performance that they moved operations to the FFM contractor's offices to provide on-site direction. 
At the time, CMS chose to forego actions, such as withholding the payment of fee, in order to focus on meeting the website launch date. Ultimately, CMS declined to pay about $267,000 in requested fee. This represents about 2 percent of the $12.5 million in fees paid to the FFM contractor. CMS awarded a new contract to another firm for $91 million in January 2014 to continue FFM development. As of June 2014, costs on the contract had increased to over $175 million due to changes such as new requirements and other enhancements, while key FFM capabilities remained unavailable. CMS needs a mitigation plan to address these issues. Unless CMS improves contract management and adheres to a structured governance process, significant risks remain that upcoming open enrollment periods could encounter challenges.
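The fee figure above can be checked directly from the dollar amounts reported: roughly $267,000 declined out of $12.5 million paid.

```python
fee_declined = 267_000    # fee CMS declined to pay, in dollars
fees_paid = 12_500_000    # total fees paid to the FFM contractor

share = 100 * fee_declined / fees_paid  # about 2.1 percent,
# consistent with the report's "about 2 percent" figure
```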
Background

Title XIX of the Social Security Act establishes Medicaid as a federal-state partnership that finances health care for certain low-income individuals, including children, families, the aged, and the disabled. Within broad federal requirements, each state operates and administers its Medicaid program in accordance with a CMS-approved state Medicaid plan. These plans detail the populations served, the services covered, and the methods used to calculate payments to providers. All states must provide certain services, such as inpatient and outpatient hospital services, nursing facility services, and physician services, and may provide additional, optional services, such as prescription drugs, dental care, and certain home- and community-based services. The federal government matches most state Medicaid expenditures for covered services according to the FMAP, which is based on a statutory formula drawing on each state’s annual per capita income. To obtain federal matching funds for Medicaid, states file quarterly financial reports with CMS and draw down funds through an existing payment management system used by HHS. The Recovery Act initially provided eligible states with an increased FMAP for 27 months, from October 1, 2008, to December 31, 2010. On August 10, 2010, federal legislation was enacted amending the Recovery Act and providing for an extension of increased FMAP funding through June 30, 2011, but at a lower level.
Generally, for fiscal year 2009 through the third quarter of fiscal year 2011, the increased FMAP is calculated on a quarterly basis and comprises three components: (1) a “hold harmless” provision, which maintains states’ regular FMAP rates at the highest rate of any fiscal year from 2008 to 2011; (2) a general across-the-board increase of 6.2 percentage points in states’ regular FMAPs through the first quarter of fiscal year 2011, which will be reduced to the regular FMAP by July 1, 2011; and (3) a further increase to the regular FMAPs for those states that have a qualifying increase in unemployment rates. Because the unemployment component of the increased FMAP is based on both the level of a state’s regular FMAP and changes in the state’s unemployment rate, rather than its existing unemployment rate, it does not fully differentiate among states’ economic circumstances prior to the downturn. States with comparatively high unemployment rates and higher regular FMAPs did not always receive the largest unemployment adjustment to their FMAPs. For example, Michigan had the highest pre-recession unemployment rate in the nation at 7.3 percent in October 2007, and in June 2010 continued to have one of the nation’s highest unemployment rates at 13.2 percent. Although the state’s unemployment rate increased by 5.9 percentage points over this time, the increased FMAP attributable to the unemployment component in the fourth quarter of FFY 2010 was 3.88 percentage points. In contrast, New Hampshire received an unemployment adjustment of 5.39 percentage points for the same period, although growth in its unemployment rate was significantly lower and, in June 2010, was less than half the unemployment rate in Michigan. Following enactment of the Recovery Act, FMAP rates substantially increased in all states over the regular 2009 FMAP rates and have continued to increase, albeit at a slower rate, since that time.
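The three components described above can be sketched as a simple calculation. This is an illustrative composition only: the hold-harmless base is the highest of a state's regular FMAP rates over the period, the across-the-board component adds 6.2 percentage points, and the unemployment component is treated here as an input, since the statutory tier schedule that produces it is not reproduced in this report. The regular FMAP values in the example are hypothetical; the 3.88-point unemployment component is Michigan's fourth-quarter FFY 2010 figure from the text.

```python
def increased_fmap(regular_fmaps, unemployment_component):
    """Illustrative composition of the Recovery Act increased FMAP.

    regular_fmaps: a state's regular FMAP rates (percent) for fiscal
    years 2008 through the current year; the hold-harmless provision
    uses the highest of these.
    unemployment_component: the state- and quarter-specific adjustment
    (percentage points), taken as a given input here.
    """
    hold_harmless_base = max(regular_fmaps)   # component 1
    across_the_board = 6.2                    # component 2
    return hold_harmless_base + across_the_board + unemployment_component

# Hypothetical regular FMAPs paired with the 3.88-point unemployment
# component cited for Michigan: 63.2 + 6.2 + 3.88 = 73.28
rate = increased_fmap([60.0, 61.5, 63.2], 3.88)
```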
On average, increased FMAP rates nationally for the first and second quarters of FFY 2009 were 8.58 percentage points higher than regular FFY 2009 FMAP rates. By the fourth quarter of FFY 2010, the increased FMAP rates nationwide had increased by an average of 10.59 percentage points over the regular FFY 2010 FMAPs, with the increase ranging from 6.94 percentage points in North Dakota to 13.87 percentage points in Louisiana. (See fig. 1.) For all states, the largest proportion of the increased FMAP was the component attributable to the across-the-board increase of 6.2 percentage points. In addition, the “hold harmless” component contributed to the increase in 17 states, and all states except North Dakota received an increase to their regular FMAP rate based on qualifying increases in unemployment rates. (See app. II for additional information on increased FMAP rates available to states under the Recovery Act.)

For states to qualify for the increased FMAP, they must pay the state’s share of Medicaid costs and comply with a number of requirements, including the following:

States generally may not apply eligibility standards, methodologies, or procedures that are more restrictive than those in effect under their state Medicaid programs on July 1, 2008.

States must comply with prompt payment requirements.

States cannot deposit or credit amounts attributable (either directly or indirectly) to certain elements of the increased FMAP in any reserve or rainy day fund of the state.

States with political subdivisions, such as cities and counties, that contribute to the nonfederal share of Medicaid spending cannot require the subdivisions to pay a greater percentage of the nonfederal share than would have been required on September 30, 2008.

In addition, states must separately track and report on increased FMAP funds.
To help states comply with these requirements, CMS provided the increased FMAP funds to states through a separate account in the payment management system used by HHS, allowing the funds to be tracked separately from regular FMAP funds as required by the act. CMS also provided guidance in the form of state Medicaid director letters and written responses to frequently asked questions, and the agency continues to work with states individually to resolve any compliance issues that may arise. Despite these restrictions, however, states are able to make certain other adjustments to their Medicaid programs without risking their eligibility for increased FMAP funds. For example, the Recovery Act does not prohibit states from reducing optional services or reducing provider payment rates. States also continue to have flexibility in how they finance the nonfederal share of Medicaid payments. Specifically, provided they comply with federal limits, states may rely on various financing arrangements, such as provider taxes, certified public expenditures (CPE), or intergovernmental transfers (IGT), to generate the nonfederal share of payments. (See table 1.)

States Have Accessed Most Available Funds and Used Them to Support Medicaid Enrollment Growth

States have accessed most of the increased FMAP funds available to them through the Recovery Act, despite most having to make adjustments to their Medicaid programs to become eligible for the funds. Nearly every state used the funds to cover increased Medicaid enrollment, which grew by over 14 percent nationally between October 2007 and February 2010.
States Have Accessed Most Available Increased FMAP Funds and Made Program Adjustments to Comply with the Act

Through the end of the third quarter of FFY 2010, states had drawn down a total of $60.8 billion in increased FMAP funds—95 percent of the funds available at that point, or 70 percent of the total estimated $87 billion in increased FMAP that was provided through the Recovery Act. If current spending patterns continue, we estimate that the states will draw down $82 billion by December 31, 2010—about 94 percent of the estimated total allocation of $87 billion. CMS distributed the increased FMAP funds to states through an existing payment system, thereby providing states with timely access to the funds. Within 3 months of enactment, all but one state had drawn down the increased FMAP funds. Most states reported making at least one adjustment to their Medicaid programs in order to be eligible for the increased FMAP funds, and 25 states reported making multiple adjustments. Twenty-nine states reported making adjustments to comply with the act’s prompt payment requirement, and 26 states reported making adjustments to comply with the act’s maintenance of eligibility requirement. For example, several states reported that they were in the process of replacing antiquated claims payment systems or implementing programming changes to existing systems to be able to comply with the prompt payment requirement. Specifically, Hawaii and South Carolina adjusted their claims payment systems to identify claims on a daily basis and developed reporting mechanisms to monitor compliance with the act’s prompt payment requirement. In terms of adjustments states made to comply with the maintenance of eligibility requirement, Vermont reported that it eliminated premium increases that it had imposed on certain beneficiaries, and Arizona reported reversing a policy that had increased the frequency at which it determined program eligibility.
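The drawdown percentages reported above follow directly from the dollar figures. A quick check, in billions of dollars, rounding as the report does:

```python
drawn = 60.8       # drawn down through Q3 FFY 2010, in billions
projected = 82.0   # estimated drawdown by December 31, 2010
total = 87.0       # total estimated increased FMAP under the Recovery Act

share_drawn = round(100 * drawn / total)          # 70 percent of the total
share_projected = round(100 * projected / total)  # about 94 percent
```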
In addition, 13 states reported making adjustments to comply with the act’s requirement on contributions by political subdivisions, and 4 states reported making adjustments to comply with the act’s requirement related to rainy day funds. When asked about the difficulty of complying with the act’s requirements in order to access funds, states most frequently reported that meeting the prompt payment requirement posed a high level of difficulty. Nine states reported not having met the prompt payment requirement at some point since the Recovery Act was enacted, with the total number of days reported by a state ranging from 1 day to 48 days. Eight states have either applied for or received from CMS a waiver of the prompt payment requirement. (See app. III for additional information on the increased FMAP grant awards and drawdown amounts for each state.)

States Used Increased FMAP Funds to Maintain Their Programs in Light of Enrollment Growth

For FFY 2010 through the first quarter of FFY 2011, nearly all states reported using or planning to use funds freed up by the increased FMAP to cover increased Medicaid caseloads (45 states), maintain Medicaid eligibility (44 states), and maintain Medicaid benefits and services (44 states). Additionally, the majority of states also reported using or planning to use these funds to help support general state budget needs, maintain institutional provider payment rates, and maintain practitioner payment rates. Despite the variety of purposes for which states used the increased FMAP funds, when asked about the sufficiency of the funds, fewer than half of the states (18 states) reported that the 2010 funds were sufficient for the stated purposes of the act—to provide fiscal relief to states and to maintain states’ Medicaid programs.
Nonetheless, 46 states reported that the increased FMAP was a major factor in their efforts to support Medicaid enrollment growth, which increased by 14.2 percent nationally from October 2007 through February 2010, significantly higher than in previous years. The rate of growth peaked between January 2009 and July 2009, increasing by 5 percent during this 7-month period. (See fig. 2.) Enrollment growth across the states varied considerably—ranging from about 1 percent in Tennessee and Texas to almost 38 percent in Nevada. Twenty-three states experienced an enrollment increase of 10 to less than 20 percent, and 16 states experienced an enrollment increase of 20 percent or greater. (See fig. 3.) While the magnitude of the enrollment increase across states was largely due to the economic downturn, program expansions and enrollment outreach initiatives implemented in some states prior to the economic downturn also contributed to enrollment growth. Despite states’ declining revenues, however, the act’s maintenance of eligibility requirement made the increased FMAP contingent on states not adopting more restrictive Medicaid eligibility standards, methodologies, or procedures than those that were in place on July 1, 2008. When examining regional variation in enrollment growth, states in the western region of the country most commonly had enrollment increases above the national increase of just over 14 percent (11 of 13 states), while states in the northeast region were least likely to have enrollment increases over the national increase (4 of 9 states). Various factors likely contributed to these regional variations. For example, when compared to national averages, most states in the western region experienced higher than average growth in unemployment (8 of 13 states) and poverty rates (7 of 13 states) during the recession, and higher rates of uninsurance prior to the recession (11 of 13 states).
Low enrollment growth in the northeast region may be due, in part, to the fact that many of these states have historically had higher Medicaid income-eligibility levels when compared to states in other regions. For example, in 2009, the majority of states in the northeast extended Medicaid coverage to parents with incomes over 150 percent of the federal poverty level (FPL). In contrast, the majority of states in the southern and western regions generally limited program eligibility to parents under 75 percent of the FPL. Across the states, most enrollment growth was attributable to children, a population that comprises over half of total Medicaid enrollment, and is sensitive to economic downturns. However, the highest rate of increase during this period occurred among the nondisabled, nonaged adult population. Specifically, from October 2007 through February 2010, enrollment among the nondisabled, nonaged adult population increased by nearly 30 percent, compared to an increase of nearly 15 percent for children. (See fig. 4.) Of the 29 states with readily available information on the geographic distribution of Medicaid enrollment increases, 21 states reported that the increase was generally distributed evenly across the state, and 8 states reported that the increase was concentrated in certain urban or rural counties. For example, Arizona Medicaid officials reported that the largest enrollment increase occurred in Maricopa County—the state’s largest county that includes Phoenix and Scottsdale. Pennsylvania officials reported that the concentration of enrollment growth was mixed between one rural county and two urban counties—Montgomery and Cumberland. (See app. IV for more information on adjustments made by states to comply with Recovery Act requirements and states’ uses of funds freed-up by the increased FMAP. See app. V for additional information on enrollment changes in the states and the largest U.S. insular areas.) 
Most States Reported Taking Actions to Sustain Their Medicaid Programs, but Federal Legislation Will Influence Future Program Adjustments

Most states are concerned about their ability to sustain their Medicaid programs once the increased FMAP funds are no longer available, and have taken actions or proposed actions to address program sustainability. However, states’ efforts to make future program adjustments will be influenced by recent legislation, including PPACA, and the subsequent extension of the increased FMAP through June 2011.

States’ Actions to Address Program Sustainability Include New or Altered Financing Arrangements and Reductions to Provider Payments

Forty-eight states reported concerns regarding the sustainability of their Medicaid programs after Recovery Act funding is no longer available, with most states reporting that the factors driving their concerns included the increased share of the state’s Medicaid payments in 2011 and the projections of the state’s economy, tax revenues, and Medicaid enrollment growth for 2011. To address program sustainability, 46 states have taken actions—such as introducing new Medicaid financing arrangements and reducing or freezing practitioner payment rates—and 44 states reported implementing multiple actions. Most commonly, states implemented new financing arrangements or altered existing ones—such as provider taxes, IGTs, and CPEs—to generate additional revenues to help finance the nonfederal share of their Medicaid programs. In addition, most states also reported that they proposed making additional changes to their Medicaid programs for the remainder of fiscal year 2010 or for fiscal year 2011. (See fig. 5.) Twenty-eight states reported that they reduced or froze Medicaid payment rates for certain Medicaid providers in response to concerns about program sustainability.
For example, in December 2009, Iowa implemented across-the-board rate reductions for most providers ranging from 2.5 to 5 percent, which will remain in effect until June 30, 2011. Similarly, Maryland reduced or froze payments for physicians and hospitals, and for long-term care services and home- and community-based services. States also reported reducing benefits and services, changing prescription drug formularies, or increasing beneficiary copayments or premiums. Four states—Florida, Illinois, Mississippi, and Texas—and the District of Columbia reported that they did not implement any changes in response to their concerns about program sustainability; however, Medicaid officials in most of those states and the District told us that they were considering future changes. In addition to these program changes, over half the states reported making administrative changes that could affect Medicaid application processing time, such as decreasing the number of staff or staff hours available for processing Medicaid applications, increasing furlough days, and decreasing the number of Medicaid intake facilities. Despite these actions, most states kept pace with the increasing number of applications they received. Specifically, of the 33 states reporting data, 25 processed on average at least 95 percent of applications they received each month. (See app. IV for additional information on state actions to address program sustainability. App. VI provides more information on changes to states' share of Medicaid payments when increased FMAP is no longer available.)

States' Efforts to Make Future Program Adjustments Will Be Influenced by Federal Legislation

States indicated that legislation to extend the increased FMAP funding would help address their concerns about program sustainability. At the time of our survey, legislation extending the increased FMAP had been proposed, but not enacted.
Despite uncertainties about the availability of the increased FMAP beyond December 2010, 30 states had assumed a 6-month extension of the increased FMAP in their fiscal year 2011 budgets without any changes to the way it is calculated as provided for under the Recovery Act, and only 9 of these states had contingency plans in place if such legislation was not enacted. On August 10, 2010, Congress passed legislation amending § 5001 of the Recovery Act to extend the increased FMAP through June 30, 2011, but at a lower level. Specifically, under the amendments to the Recovery Act, states' increased FMAP rates will decrease by at least 3 percentage points beginning on January 1, 2011, and continue to be phased down to their regular FMAP rates by July 1, 2011. For states that had assumed an unmodified extension of the increased FMAP, the available federal funds will be less than anticipated. However, without the extension, we estimate that states, on average, would have faced a nearly 11 percentage point decrease in their FMAP rates on January 1, 2011. The additional 6 months of increased FMAP funding will allow states more time to adjust as they return to their regular FMAP rates. How states will fare as they return to their regular FMAP rates will vary depending on each state's unique economic circumstances and the size of its Medicaid population. Officials from several states indicated that the loss of increased FMAP funds would strain their states' budgets, requiring additional program reductions, as the following examples illustrate. Wisconsin Medicaid officials reported that the state would need to reduce Medicaid expenses by $1 billion annually, or about 20 percent of the state's Medicaid budget, and are considering several options, including eliminating the state's prescription drug program for seniors and several rate reform initiatives.
Colorado Medicaid officials reported that the state would need to reduce Medicaid expenditures by an estimated $250 million, in addition to approximately $320 million the state has already cut. The state reported that the additional expenditure reduction would require drastic cuts to optional programs, benefits, and provider rates. In addition, the recently enacted PPACA includes several provisions that affect states' Medicaid programs, and states will need to take these provisions into account when considering additional adjustments to their programs. Specifically, the maintenance of eligibility requirement under PPACA precludes states from receiving federal Medicaid funding if they apply eligibility standards, methods, or procedures, under their plan or a waiver, that are more restrictive for adults than those in effect on the date of PPACA's enactment until the date the Secretary of HHS determines that a health insurance exchange established by the state is fully operational, which must be no later than January 1, 2014. PPACA also provides states with an opportunity to obtain additional Medicaid funds, either immediately or in the future. For example, PPACA requires states to cover all persons under 65 who are not already eligible under mandatory eligibility groups up to 133 percent of the FPL by 2014, but states have the option to expand eligibility immediately and to receive federal funds for these individuals. As of August 12, 2010, Connecticut and the District have obtained CMS approval to shift eligible low-income adults from existing state health care programs into Medicaid. The act also includes provisions to facilitate states' use of home- and community-based long-term care services.

Agency Comments

In commenting on a draft of this report, HHS provided technical comments, which we incorporated as appropriate. HHS did not comment on our findings.
We are sending copies of this report to the Secretary of HHS, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VII.

Appendix I: Scope and Methodology

To examine the extent to which states have accessed increased Federal Medical Assistance Percentage (FMAP) funds, we reviewed data provided by two divisions within the Department of Health and Human Services (HHS)—the Centers for Medicare & Medicaid Services (CMS) and the Office of the Assistant Secretary for Planning and Evaluation—on increased FMAP rates and Medicaid grant awards under the Recovery Act for federal fiscal years (FFY) 2009 and 2010. We analyzed data on increased FMAP rates to determine the proportion of each state's increase attributable to the three components prescribed in the act: the across-the-board component, the hold harmless component, and the unemployment component. We compared preliminary fourth quarter FFY 2010 increased FMAP rates to the 2011 regular FMAP rates to estimate the percentage increase in states' share of Medicaid payments once the increased FMAP is no longer available. We also analyzed CMS data on increased FMAP funds to determine each state's total available grant award for FFYs 2009 and 2010 and the percentage each state had drawn from its available grants as of June 30, 2010. Based on these drawdown rates, we projected the total amount of increased FMAP funds that states would draw down by December 31, 2010. We interviewed CMS officials to understand how they compiled data on increased FMAP funds and to clarify anomalies we identified in the data.
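The two calculations described above can be illustrated with a minimal sketch. The state share of Medicaid payments is the complement of the FMAP rate, and the drawdown projection is a simple linear extrapolation of the rate observed to date; all rates and dollar figures below are hypothetical illustrations, not GAO's actual data or model.

```python
# Hedged sketch of the state-share comparison and drawdown projection.
# All inputs are hypothetical; they are not figures from the report.

def pct_increase_in_state_share(increased_fmap, regular_fmap):
    """Estimate the percentage increase in a state's share of Medicaid
    payments when its FMAP reverts from the increased to the regular rate.
    The state (nonfederal) share is the complement of the FMAP rate."""
    share_now = 100.0 - increased_fmap
    share_later = 100.0 - regular_fmap
    return (share_later - share_now) / share_now * 100.0

def project_drawdown(total_award, drawn_to_date, months_elapsed, months_total):
    """Linearly extrapolate a state's drawdown through the end of the grant
    period; a state cannot draw more than its total award."""
    monthly_rate = drawn_to_date / months_elapsed
    return min(monthly_rate * months_total, total_award)

# Hypothetical state: increased FMAP of 72% reverting to a regular 61%.
print(f"State share rises by {pct_increase_in_state_share(72.0, 61.0):.1f}% "
      f"per Medicaid dollar")

# Hypothetical state: $2,000M award, $1,400M drawn in 21 of 27 grant months.
print(f"Projected drawdown: ${project_drawdown(2000.0, 1400.0, 21, 27):,.0f}M")
```

The sketch makes visible why an 11 percentage point FMAP decrease is consequential: because the state share is the complement of the FMAP, an 11-point drop in the federal rate translates into a much larger relative increase in state spending per Medicaid dollar.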
We also discussed CMS officials’ oversight of the Medicaid funds provided under the Recovery Act and specifically addressed their oversight of states’ actions to comply with the act’s eligibility requirements for increased FMAP. We also reviewed relevant CMS guidance, including a sample of increased FMAP grant award letters, a fact sheet, frequently asked questions documents, and state Medicaid director letters related to the act. To examine how states used the increased FMAP funds and how states planned to sustain their Medicaid program once the increased FMAP funds are no longer available, we administered a Web-based survey to the Medicaid directors or their designated contacts in all states in August 2009 and in March 2010, and obtained a response rate of 98 and 100 percent, respectively. The surveys asked states to provide information on a variety of topics, including their uses of increased FMAP funds, monthly Medicaid enrollment from October 2007 through February 2010, adjustments made in response to the act’s requirements, and any concerns they had about the longer-term sustainability of their Medicaid programs. We pretested the surveys with Medicaid officials from four states. We reviewed all survey responses, and where appropriate, included these responses in the report. As needed, we followed up with Medicaid officials in selected states to clarify responses, to request corrected enrollment data, or to obtain additional information on their compliance with certain Recovery Act requirements. We analyzed the Medicaid enrollment data obtained from the surveys to determine total enrollment growth and percentage change in enrollment for each state between October 2007 and February 2010. We also analyzed the enrollment data to determine the extent to which each Medicaid subpopulation—children, aged individuals, disabled individuals, and adults (nonaged, nondisabled)—contributed to overall enrollment growth during this period. 
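The subpopulation analysis described above distinguishes a group's internal growth rate from its contribution to overall growth. A minimal sketch of that decomposition follows; the enrollment figures are hypothetical illustrations chosen to mirror the reported pattern (children contribute the most enrollees, adults grow at the highest rate), not the states' actual data.

```python
# Decompose overall enrollment growth into subpopulation contributions.
# Enrollment figures are hypothetical, not survey data.

start = {"children": 500_000, "aged": 80_000, "disabled": 90_000, "adults": 130_000}
end   = {"children": 570_000, "aged": 82_000, "disabled": 94_000, "adults": 168_000}

total_growth = sum(end.values()) - sum(start.values())

for group in start:
    growth = end[group] - start[group]
    pct_change = growth / start[group] * 100    # growth rate within the group
    share = growth / total_growth * 100         # contribution to overall growth
    print(f"{group:>8}: {pct_change:5.1f}% growth, {share:5.1f}% of total increase")
```

With these illustrative numbers, children account for the largest share of the total increase even though the adult group's internal growth rate is roughly twice as high, which is the distinction the report's enrollment findings turn on.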
We analyzed the survey data on Medicaid applications to determine any changes in states' processing volumes and rates over this period. We did not independently verify these data; however, we reviewed all survey responses and federal Medicaid data for internal consistency, validity, and reliability. Based on these activities, we determined these data were sufficiently reliable for the purpose of our report. In addition, we analyzed other state economic and fiscal data—such as poverty rates, unemployment rates, and Medicaid eligibility levels—to examine their relationship to overall Medicaid enrollment growth within states and regions. We also reviewed data prepared by Federal Funds Information for States, an organization that tracks and reports on the fiscal impact of federal budget and policy decisions on state budgets and programs. Finally, we reviewed relevant provisions of the Patient Protection and Affordable Care Act, and other legislation that affect states' Medicaid programs. We conducted a performance audit for this review from December 2009 to August 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Regular and Preliminary Increased Fourth Quarter 2010 FMAP Rates and Components of the Increase

[Table of regular and preliminary increased fourth quarter fiscal year 2010 FMAP rates, by component of the increase, omitted.] North Dakota is the only state that did not receive an increase to its preliminary FMAP rate due to qualifying increases in the state's unemployment rate during the period.
Appendix III: Increased FMAP Grant Awards and Funds Drawn Down

[Table of increased FMAP grant awards for FFY 2009 and the first three quarters of FFY 2010, and funds drawn down, in thousands of dollars, omitted.]

Appendix IV: State-Reported Adjustments to Medicaid Programs and Uses of Funds Freed Up by the Increased FMAP

[Table of state-reported uses of funds omitted. Table footnotes indicate which states reported using funds to maintain or increase Medicaid payment rates for practitioners or institutional providers, to expand Medicaid eligibility levels, to finance local or state public health insurance programs other than Medicaid or CHIP, or did not provide responses to this survey.]

Appendix V: Medicaid Enrollment and Enrollment Changes by Subpopulation Group from October 2007 through February 2010

Due to limitations in enrollment data reported by certain insular areas, we were unable to conduct complete analyses.

Appendix VI: Estimated Changes in States' FMAP Rates and Share of Medicaid Payments

[Table of estimated percentage increases in state shares of Medicaid payments omitted.] For the portion of federal fiscal year 2011 not in the Recovery Act recession adjustment period (i.e., after December 31, 2010), the Patient Protection and Affordable Care Act (PPACA) will provide Louisiana with an FMAP of 68.04 (rather than the current FMAP of 63.61).
Appendix VII: GAO Contact and Staff Acknowledgments

In addition to the contact named above, the following team members made key contributions to this report: Susan Anthony, Assistant Director; Emily Beller; Laura Brogan; Ted Burik; Julianne Flowers; Zachary Levinson; Drew Long; and Kevin Milne.
In February 2009, the American Recovery and Reinvestment Act of 2009 (Recovery Act) initially provided states and the District of Columbia (the District) with an estimated $87 billion in increased Medicaid funds through December 2010, provided they met certain requirements. Funds were made available to states and the District through an increase in the Federal Medical Assistance Percentage (FMAP), the rate at which the federal government matches state expenditures for most Medicaid services. In March 2010, Congress passed the Patient Protection and Affordable Care Act (PPACA), which prohibits states from adopting certain changes to program eligibility in order to receive federal reimbursement, and in August 2010, Congress extended increased FMAP rates through June 2011. GAO was asked to examine issues related to Medicaid funds under the Recovery Act. GAO examined (1) states' and the District's access to and use of increased FMAP funds, and (2) states' and the District's plans to sustain their Medicaid programs once these funds are no longer available. To do this work, GAO surveyed state Medicaid officials in the 50 states and the District in August 2009 and March 2010 about their program enrollment, uses of funds, program adjustments, and program sustainability. GAO obtained responses from all states and the District. GAO also reviewed CMS data and guidance and interviewed CMS and state officials. States and the District are on pace to draw down about 94 percent—$82 billion of the estimated $87 billion—in increased FMAP funds provided by the Recovery Act. Most states adjusted their Medicaid programs to comply with the act's requirements, and nearly all states and the District reported using the increased FMAP to cover increased enrollment, which grew by 14.2 percent nationally between October 2007 and February 2010.
Enrollment growth across the states and the District ranged from about 1 percent to 38 percent, with 22 states and the District experiencing a 10 to less than 20 percent increase. Although most enrollment growth was attributable to children, the highest growth rate was among the nondisabled, nonaged adult population. Forty-seven states and the District reported concern regarding the sustainability of their Medicaid programs without the increased FMAP, and 46 states took steps to address sustainability, including introducing financing arrangements, such as taxes on health care providers, or reducing provider payments. Most states and the District also reported proposed changes for the future. Congress passed legislation in August 2010 to extend the increased FMAP through June 2011, although at lower rates than provided by the Recovery Act. How the subsequent return to regular FMAP rates will affect states and the District will vary depending on their unique economic circumstances. GAO estimates that regular FMAP rates will be, on average, nearly 11 percentage points lower than increased FMAP rates available in December 2010. For future adjustments, states and the District will need to consider PPACA, which prohibits more restrictive eligibility standards, methods, or procedures until 2014, in order to receive federal Medicaid reimbursement. HHS provided technical comments to this report, which GAO incorporated as appropriate.
Background

Recent economic, medical, technological, and social changes have increased opportunities for individuals with disabilities to live with greater independence and more fully participate in the workforce. For example, over the past several decades, the economy has shifted towards service- and knowledge-based jobs that may allow greater participation for some persons with physical limitations. Also, advances in medicine and assistive technologies—such as improved treatments for mental illnesses and advanced wheelchair design—afford greater opportunities for some people with disabilities. In addition, social and legal changes have promoted the goal of greater inclusion of people with disabilities in the mainstream of society, including adults at work. For example, the Americans with Disabilities Act supports the full participation of people with disabilities in society and fosters the expectation that people with disabilities can work and have the right to work. More recently, the President announced the New Freedom Initiative, a set of guiding principles and initiatives aimed at improving the integration of people with disabilities in all aspects of society, including employment. Public concern and congressional action have produced a broad array of federal programs designed to help people with disabilities. However, our prior reviews of the largest federal disability programs indicate that such programs have not evolved in line with these larger societal changes and therefore, are poorly positioned to provide meaningful and timely support for people with disabilities. Furthermore, program enrollment and costs for the largest federal disability programs have been growing and are poised to grow even more rapidly in the future, further contributing to the federal government's large and growing long-term structural deficit.
For example, from 1982 to 2002, the number of disabled workers receiving benefits under SSA's Disability Insurance (DI) program doubled from 2.6 million to 5.5 million, while payments quadrupled from about $14.8 billion to $60 billion. Moreover, these disability programs are poised to grow even more as baby boomers reach their disability-prone years. This program growth is exacerbated by the low rate of return to work for individuals with disabilities receiving cash and medical benefits. In addition, the projected slowdown in the growth of the nation's labor force has made it more imperative that those who can work are supported in their efforts to do so.

Over 20 Different Agencies Administer Almost 200 Programs That Provide a Wide Range of Assistance

We identified over 20 federal agencies and almost 200 federal programs that are either wholly or partially targeted to serving people with disabilities. These programs provide a wide range of assistance such as employment-related services, medical care, and monetary support. Multiple agencies run programs that provide similar types of assistance, but these programs often serve different populations of people with disabilities because of varying eligibility criteria. About 59 percent of the programs we identified provide indirect support to people with disabilities through means such as grants to states, while the rest provide support directly to people with disabilities. In fiscal year 2003, over $120 billion in federal funds were spent on programs that serve only people with disabilities. Although there were insufficient data available to estimate the total additional funds spent on people with disabilities by programs that also serve people without disabilities, benefit payments for people with disabilities for two such programs alone—Medicare and Medicaid—amounted to about $132 billion in fiscal year 2002.
Multiple Federal Agencies Administer Programs Serving People with Disabilities

Twenty-one federal agencies—under the jurisdiction of more than 10 Congressional committees—administer 192 programs that target or give priority to people with disabilities (see table 1). However, four agencies—the departments of Health and Human Services (HHS), Education, Veterans Affairs, and Labor—are responsible for over 65 percent of these programs. About half of the programs that we identified are wholly targeted (targeted exclusively) to people with disabilities. The rest of the programs are partially targeted to people with disabilities—they serve people with and without disabilities. Specifically, of the 192 programs we identified, 95 reported being wholly targeted, and 97 reported being partially targeted. The wholly targeted programs reported that they served over 34 million beneficiaries or clients in fiscal year 2003, with the largest among these—SSA's DI program and VA's Veterans Compensation for Service-Connected Disability program—serving about 10 million of these beneficiaries. Although some of the partially targeted programs we surveyed could not provide data on the number of people with disabilities they serve, our survey data indicate that these programs served at least 15 million beneficiaries or clients with disabilities in fiscal year 2003, with the largest of these programs—SSA's Supplemental Security Income Program—serving about 5.7 million of these beneficiaries.

Federal Programs Provide a Wide Range of Assistance to People with Disabilities

Federal programs provide a wide range of assistance to people with disabilities (see fig. 1). The most common primary types of assistance provided are employment-related services and medical care, although a number of programs provide civil protections or legal services, education, and monetary support as well as other benefits or services (see fig. 2).
Most of the federal programs provide more than one type of assistance and over one-quarter of the programs provide three or more types of assistance to people with disabilities (see fig. 3). For example, the Developmental Disabilities Basic Support and Advocacy Grants program run by HHS provides multiple types of assistance to people with disabilities including housing, education, transportation, and information dissemination services. About 59 percent of the programs we identified provide support indirectly through other entities such as state agencies or private organizations, while the rest provide it directly to people with disabilities. For example, the Department of Education's Preschool Grants program provides special education to preschool children with disabilities via funding to state education agencies, whereas the Department of Labor's Coal Mine Workers' Compensation program provides monetary support directly to eligible coal mine workers with disabilities. Of the programs that provide assistance indirectly to people with disabilities, the most common means is through nonfederal government entities (e.g., state or local agencies).

Multiple Federal Agencies Provide Similar Types of Assistance

Multiple federal agencies administer programs that provide similar types of assistance to people with disabilities (see table 2). For example, seven agencies—including the Social Security Administration, the Committee for the Purchase from People who are Blind or Severely Disabled, the Office of Personnel Management, and the departments of Agriculture, Education, Labor, and Veterans Affairs—administer 28 programs that primarily provide employment-related services to people with disabilities. Although programs from multiple agencies provide the same primary type of assistance, these programs often have varying eligibility criteria that may limit the populations served to distinct groups of people with disabilities.
For example, the American Indian Vocational Rehabilitation Services program run by the Department of Education and the Department of Veterans Affairs' Vocational Rehabilitation for Disabled Veterans program each provide employment-related assistance, but to distinct groups of people. Furthermore, the 28 programs that primarily provide employment-related services often have distinct eligibility criteria beyond the specific populations served.

Billions Are Spent on Programs for People with Disabilities

The programs that provide assistance only to people with disabilities spent over $120 billion in fiscal year 2003. SSA and VA accounted for about 88 percent of this amount (see fig. 4). In particular, SSA's DI program accounted for about 64 percent of the total spending for wholly targeted programs, and the VA's Veterans Compensation for Service-Connected Disability program accounted for approximately 17 percent of this total. Approximately 86 percent of the wholly targeted program spending was for programs that primarily provided monetary support to people with disabilities (see fig. 5). In addition to the billions of dollars spent on programs that serve only people with disabilities, additional amounts are spent on individuals with disabilities by partially targeted programs whose beneficiaries also include people without disabilities. While we were not provided with sufficient data to determine the total amount spent by all of these partially targeted programs on benefits or services for individuals with disabilities, these costs are certainly significant given that they include such programs as Supplemental Security Income (SSI), Medicaid, and Medicare. In 2002, SSI paid about $26 billion in cash benefits to people with disabilities and Medicaid and Medicare together paid about $132 billion in benefits for such individuals.
Federal Programs That Support People with Disabilities Face an Array of Challenges

Both our past work and our recent survey of federal programs supporting people with disabilities indicate that these programs face a number of challenges. Among these are challenges in ensuring timely and consistent processing of applications for assistance, ensuring timely provision of services and benefits, interpreting complex eligibility requirements, planning for growth in the demand for program benefits and services, making beneficiaries or clients aware of program services or benefits, and communicating or coordinating with other federal programs.

Timely and Consistent Processing of Applications for Assistance

Our past work examining disability programs administered by SSA and VA highlighted the challenges that federal programs face in ensuring timely and consistent processing of applications for assistance. Both SSA and VA have experienced lengthy processing times for disability claims over the past several years, with claimants waiting, on average, more than 4 months for an initial decision and more than 1 year for a decision on appeal of a denied claim. In addition, we have also pointed out that inconsistencies in these agencies' disability claim decisions across adjudicative levels and locations have raised questions about the fairness, integrity, and cost of these programs. Our survey provides further evidence of such challenges facing programs that provide monetary support. Almost half of these programs reported that ensuring timely processing of applications was a major or moderate challenge, and more than one-quarter of monetary support programs reported that consistent processing of applications was a major or moderate challenge.

Timely Provision of Services and Benefits

Our past work also identified the challenges encountered by federal programs in ensuring timely provision of services and benefits.
For example, we noted that structural weaknesses in SSA's DI and SSI programs have prevented the agency from offering return-to-work services when they may help most—soon after a person becomes disabled. Our survey indicates that some other federal programs also face the challenge of providing services in a timely fashion. For example, 38 percent of the programs that provide employment-related assistance to people with disabilities reported that ensuring timely provision of services and benefits was a challenge. Officials from the Department of Education, for instance, told us that of the 80 Vocational Rehabilitation (VR) agencies they are responsible for overseeing, about half operate under a special procedure for prioritizing services because the demand for VR services outweighs the available resources.

Interpreting Complex Eligibility Requirements

Our past work indicated that SSA's and VA's eligibility requirements are complex and difficult to interpret. For example, we have reported that the high costs of administering SSA's DI program reflect the complex and demanding nature of making disability decisions. Our survey provides further evidence of such challenges for federal disability programs. For example, 53 percent of programs providing monetary support to people with disabilities reported that interpreting complex eligibility requirements was a challenge.

Planning for Growth in the Demand for Services and Benefits

Our past work noted that federal disability programs are facing challenges in planning for the anticipated increase in demand for their benefits and services. For example, by the year 2010, SSA expects the number of Social Security DI beneficiaries to increase by more than one-third over 2001 levels.
However, our past work found that most of the state Disability Determination Services agencies responsible for processing DI claims face significant challenges in ensuring there are enough trained staff to handle DI as well as SSI claims. Similarly, in our prior work we reported that despite VA's recent progress in reducing its disability claims workload, it will be difficult for the agency to cope with future workload increases due to several factors, including increased demand for services as a result of military conflicts and legislative mandates. Our survey of federal disability programs indicates that planning for growth in the demand for benefits or services is also a challenge for other programs that support people with disabilities. For example, 54 percent of the programs that provide medical care and almost half of the programs that provide employment-related assistance reported that planning for growth in the demand for assistance was a challenge. Our discussions with responsible agency officials reinforced the challenges posed by potential growth in demand for program services or benefits. For example, officials from the Department of Labor's one-stop center program told us they are not sure if the program has sufficient resources to meet any increased demand for services that might result from the outreach they are conducting to people with disabilities.

Making Beneficiaries or Clients Aware of Program Services and Benefits

Our past work highlighted challenges in making beneficiaries aware of services offered under federal disability programs. For example, we reported that SSA's work incentives are ineffective in motivating people to work, in part, because many beneficiaries are unaware that the work incentives even exist. Our survey indicated that 69 percent of programs that disseminate information to people with disabilities reported that making beneficiaries or clients aware of their programs' services was a challenge.
The need to make people more aware of disability program services has also been noted by other entities. For example, in 1999, the Presidential Task Force on Employment of Adults with Disabilities suggested that the White House take more action to make people aware of programs that support people with disabilities.

Communication and Coordination among Programs Serving Individuals with Disabilities

Both our work and the work of others suggest some weaknesses in communication and coordination among various federal disability programs. In a 1996 report, we noted that programs helping people with disabilities do not work together as efficiently as they could to share information about their programs and to overcome obstacles posed by differing eligibility criteria and numerous service providers. We said that the lack of coordination among programs could result in duplication or gaps in services provided to people with disabilities. Others have also identified the need for greater coordination among federal disability programs. For example, in announcing the New Freedom Initiative—a federal effort to remove barriers and promote community integration for people with disabilities—the President identified policy areas, such as the provision of assistive technology, where better federal coordination was needed. Also, in a review of programs for low-income adults with disabilities, Urban Institute researchers described the safety net supporting such individuals as “a tangled web of conflicting goals and gaps in needed services.” In addition, officials at the National Council on Disability told us that although various interagency commissions exist to address issues faced by people with disabilities, most of these commissions have weak authority or have never met as a group. Our survey provides further evidence of the coordination and communication challenges facing federal programs serving individuals with disabilities. 
About one-third of these programs indicated that, in their efforts to support people with disabilities, they experienced challenges in obtaining information from or coordinating with other federal or nonfederal programs.

Key Factors to Consider in Transforming Programs for the 21st Century

Over the past several years, GAO, in reporting that the largest federal disability programs were mired in outdated concepts of disability, has identified the need to reexamine and transform these programs to better position the government to meet the challenges and expectations of the 21st century. In identifying the wide range of federal programs serving individuals with disabilities and some of the major challenges these programs face, this report raises several questions about whether other federal disability programs may also need to be reoriented and transformed. In particular, are the nearly two hundred programs that provide assistance to people with disabilities well-suited to address these challenges, and are they structured in a manner that collectively allows them to provide coherent and seamless support to people with disabilities? Also, in light of the nation’s large and growing structural deficit, do these programs represent the most cost-effective approaches to serving individuals with disabilities? On the basis of more than a decade of research focusing on the nation’s largest disability programs and our review of prior GAO reports examining efforts to reform federal programs and transform agencies, we have identified several key factors that are important to consider in assessing the need for, and nature of, program transformations. First, our prior work identifying shortcomings in the work incentives and supports provided by the largest federal disability programs indicates that basic program design issues need to be addressed. 
Second, given the tight fiscal constraints facing both federal and state governments, programs will need to carefully consider the sustainability of current costs and the potential costs associated with transformation initiatives. Finally, programs will need to evaluate the feasibility of any transformation efforts, considering whether appropriate processes and systems—including those related to the planning and management of human capital and information technology—are in place to effectively carry out current operations or proposed changes. Figure 6 presents a list of questions that may serve as a guide for addressing these factors. In addition to addressing these questions, which will provide a basic framework for individually assessing existing programs and proposals for transforming them, it is also important that some mechanism be established for looking across programs to assess their overall effectiveness and integration and whether they are designed to achieve similar or complementary goals. The diffusion of responsibility for federal programs serving people with disabilities across multiple agencies and the absence of any clear central authority for guiding a fundamental reassessment of federal disability policy will likely pose significant impediments to such action. However, a reexamination could serve to identify programs and policies that are outdated or ineffective while improving the targeting and efficiency of remaining programs through such actions as redesigning allocation and cost-sharing provisions and consolidating facilities and programs. Our recently issued report concerning “21st Century Challenges” identifies approaches—such as the use of special temporary commissions to develop policy proposals and the exercise of congressional oversight through hearings on the activities of federal agencies—that may be used for such a reexamination should the Congress choose to pursue this course of action. 
Addressing the individual program transformation questions we identify above, in conjunction with a reexamination of how these programs work collectively, represents a key step in efforts to meet 21st century social and economic expectations of individuals with disabilities and the general public. Copies of this report are being sent to: the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Housing and Urban Development, Interior, Justice, Labor, Transportation, Treasury, and Veterans Affairs; the Commissioner of SSA; the Director of the Office of Personnel Management; the Administrator of the Small Business Administration; the Chairman of the Railroad Retirement Board; the Chairperson of the Committee for Purchase from People who are Blind or Severely Disabled; the Chair of the Access Board; the Chair of the Equal Employment Opportunity Commission; the Librarian of Congress; appropriate congressional committees; and other interested parties. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Scope and Methodology

For our review, we defined a federal program as a function of a federal agency that provides assistance or benefits to a state or states, territorial possession, county, city, other political subdivision, or grouping or instrumentality thereof; or to any domestic profit or nonprofit corporation, institution, or individual, other than an agency of the federal government. 
We defined the scope of our review to include those federal programs meeting one or more of the following criteria: (1) people with disabilities are specifically mentioned in a program’s authorizing legislation as a targeted group, (2) people are eligible for the program wholly because of a disability, (3) people are eligible for the program partially because of a disability, (4) people with disabilities are given special consideration in eligibility determinations, or (5) people with disabilities are given priority in being served. Programs that serve individuals without respect to disability (i.e., disability is not an explicit criterion for program eligibility) but that serve some individuals with disabilities (such as Temporary Assistance for Needy Families) are beyond the scope of our review. In addition, we excluded programs whose principal focus is research, demonstrations, training for professionals who work with people with disabilities, technical assistance, or special transportation, as well as disability retirement programs for federal workers. To develop a list of programs that met these criteria, we first conducted a systematic search in the Catalog of Federal Domestic Assistance (CFDA) to identify programs that have some role in serving people with disabilities and the respective agencies responsible for administering each of these programs. In addition, we reviewed federal agency Web sites to identify additional programs that were not included in the CFDA. We then submitted the list of programs administered by each agency to that agency for verification. (The final list of programs along with some descriptive information on each program can be found in app. II.) In developing our list, we included federal programs regardless of how the benefit, service, or assistance is ultimately delivered to the individual (e.g., directly by the federal agency or indirectly by another entity, such as a state agency). 
To obtain information on federal programs supporting people with disabilities and the challenges they face, we conducted a Web-based survey, which collected basic information on each program, including the types of assistance provided, whether the assistance is provided directly to beneficiaries or indirectly through other entities, whether the program is partially or wholly targeted to people with disabilities, the number of beneficiaries served, program spending, and the challenges faced by these programs (i.e., obstacles that hindered a program’s ability to effectively and efficiently support people with disabilities). (A more complete tabulation of the survey results related to program challenges is available on the GAO Web site at www.gao.gov/cgi-bin/getrpt?GAO-05-695SP.) To identify the appropriate program officials to respond to the survey, we submitted the list of programs that we compiled to liaisons at each agency. These liaisons then identified the appropriate respondents at their respective agencies. We pretested the content and format of our survey with officials from eight programs to determine if it was understandable and if the information was feasible to collect, and we refined the survey as appropriate. We then sent e-mail notifications to the identified officials of 299 programs beginning on June 15, 2004, asking them to complete the survey by June 28, 2004. To encourage respondents to complete the survey, we sent e-mail messages to prompt each nonrespondent 1 and 2 weeks after the initial e-mail message. We closed the survey on August 16, 2004. We obtained survey responses from 258 programs, for an overall response rate of 86 percent. In addition, for 11 of the 41 programs that did not submit survey responses, we obtained descriptive information from the CFDA to answer a limited number of survey questions to the extent that such information was available. 
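The overall response rate above follows directly from the counts reported (258 responses from 299 programs surveyed); a minimal sketch of the arithmetic, with illustrative variable names that are not drawn from the report:

```python
# Survey counts reported in the text: 299 programs surveyed, 258 responses received.
programs_surveyed = 299
responses_received = 258

# Overall response rate, expressed to the nearest whole percent.
response_rate = responses_received / programs_surveyed
print(f"Response rate: {response_rate:.0%}")  # prints "Response rate: 86%"
```

The 11 programs answered from CFDA data are excluded from this rate; they supplement, rather than count toward, the 258 survey responses.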
Based on responses to survey questions asking programs to identify the criteria they apply in serving people with disabilities and the primary type of assistance they provide, we identified 192 programs (comprising 64 percent of all programs surveyed) that met our criteria for defining programs as either wholly or partially targeted towards serving individuals with disabilities. Although our survey asked programs to provide spending data, because of limitations or inconsistencies in the spending information reported by survey respondents, we obtained spending data from the Consolidated Federal Funds Report (CFFR)—a database compiled by the Bureau of the Census—for all of the relevant programs listed in this database. For programs that did not have data reported in the CFFR, we used spending information from the survey data. In a few cases where spending data were not available from either the CFFR or survey data, we obtained this information from the CFDA. To verify the spending data that we present in this report, we sent each program an e-mail message asking them to confirm the amounts we had identified. While many programs confirmed the spending amounts that we listed in our message, others identified different amounts. The spending data we present in this report are based on the final verified spending amounts identified by programs in their response to our e-mail. These data are not entirely consistent across programs. For example, while most of these data represent spending for fiscal year 2003, some programs instead provided data for other fiscal years. Also, some programs included administrative costs in their spending figures while others did not include such costs. In addition, while the majority of the spending data we report represent program obligations, some of the data instead represent outlays. Of the 95 wholly targeted programs in our analysis, we were able to obtain some type of spending data for 85 programs. 
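The source-precedence rule described above (CFFR first, then a program’s survey response, then the CFDA) can be sketched as follows; the function and its parameters are illustrative assumptions for this report’s described methodology, not part of GAO’s actual tooling:

```python
def resolve_spending(cffr=None, survey=None, cfda=None):
    """Return a program's spending figure from the highest-priority
    available source: the CFFR first, then the program's survey
    response, then the CFDA. None marks a source with no data."""
    for amount in (cffr, survey, cfda):
        if amount is not None:
            return amount
    return None  # no source reported spending for this program

# A program absent from the CFFR falls back to its survey figure.
print(resolve_spending(survey=1_200_000, cfda=900_000))  # prints 1200000
```

Amounts resolved this way were then sent back to each program for verification, so the final reported figure could still differ from all three sources.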
However, many partially targeted programs were unable to provide us with data pertaining to their programs’ spending on people with disabilities because they do not separately track or collect such data for these individuals. As a result, we do not present spending data in this report for partially targeted programs except for three programs (Supplemental Security Income, Medicare, and Medicaid) for which we were able to obtain a breakdown of spending on people with disabilities from agency documents. Because we relied extensively on program spending data derived from the 2003 CFFR data that are available on-line from the CFFR Web site (http://www.census.gov/govs/www/cffr.html), we conducted limited tests of the reliability of these data, including frequency analyses of critical data fields. We restricted our reliability assessment to the specific variables that were pertinent to our analysis. These tests indicated that the critical data fields were sufficiently complete and accurate for the purposes of our analysis. To obtain additional information on the challenges faced by programs, we conducted interviews with federal agency officials and officials from disability advocacy organizations, and reviewed pertinent agency documents, GAO reports, and academic research on disability issues. To identify questions that should be addressed in transforming federal disability programs, we reviewed the major findings and recommendations that have resulted from the substantial body of GAO research on federal disability programs over the past decade. We also examined past GAO reports on program reform and organizational transformation throughout the federal government. Because our questionnaire was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. 
For example, differences in how a particular question is interpreted, in the sources of information available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, social science survey specialists designed the questionnaire in collaboration with GAO staff with subject matter expertise. Then, as mentioned earlier, the draft questionnaire was pretested with program officials to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs. Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire. This eliminated the need to have the data keyed into a database, thus removing an additional source of error. We performed our work at various locations in Washington, D.C. We conducted our work between March 2004 and March 2005 in accordance with generally accepted government auditing standards.

Appendix II: Federal Programs Serving People with Disabilities

The following table presents an overview of the 192 federal programs that we identified as targeted to people with disabilities. The information presented in this table is based mostly on the programs’ survey responses, although it also presents data obtained from other sources. In particular, the spending information is derived from multiple sources, including programs’ survey responses and federal government reports on program spending. The spending data we present below represent either obligations, expenditures, or appropriations, as indicated by the table notes accompanying each reported amount. 
Due to the various sources that we used to identify program spending and possible inconsistencies in these data (e.g., differences in the fiscal years for which spending was reported by programs), we advise caution in efforts to compare or sum spending figures across programs. Also, given the significant limitations in the spending data available for partially targeted programs, we do not present such data in this table. (See app. I for a more detailed discussion of our methodology for collecting spending data and other information on these programs.)

Programs listed in the table include the following (spending shown for fiscal year 2003 unless otherwise indicated):

Javits-Wagner-O’Day Program (Committee for Purchase From People Who Are Blind or Severely Disabled)
Assistive & Ergonomic Technology (Target Center, USDA, Washington, D.C.)
Assistive and Ergonomic Technology (Midwest Target Center, St. Louis, Missouri)
Disabilities Prevention (Disability and Health)
Family Support Payments to States Assistance Payments (Adult Programs in the Territories)
Maternal and Child Health Services Block Grant to the States (Title V)
Other: Infrastructure and coordination
Other: Multifaceted support systems
Other: Outreach and case management
Special Programs for the Aging Title III, Part C Nutrition Services
Special Projects of National Significance (Ryan White CARE Act)
Housing Opportunities for Persons with AIDS
Lower Income Housing Assistance Program Section 8 Moderate Rehabilitation
Mortgage Insurance Rental Housing for the Elderly
Non-Discrimination in Federally Assisted and Conducted Programs (on the Basis of Disability)
Assistance for Indian Children with Severe Disabilities
Capital and Training Assistance Program for Over-the-Road Bus Accessibility
Capital Assistance Program for Elderly Persons and Persons with Disabilities
FTA general activities and technical assistance related to disability issues
Tax Deduction to remove barriers for the Elderly and Disabled
Other: Tax
Automobiles and Adaptive Equipment for Certain Disabled Veterans and Members of the Armed Forces
Compensation for Service-Connected Deaths for Veterans’ Dependents
Montgomery GI Bill Educational Assistance (Chapter 30)
Pension to Veterans Surviving Spouses, and Children
Post-Vietnam Era Veterans’ Educational Assistance
Specially Adapted Housing for Disabled Veterans
Survivors and Dependents Educational Assistance
Veterans Compensation for Service-Connected Disability
Veterans Dependency and Indemnity Compensation for Service-Connected Death; Compensation for Service
Veterans Housing—Guaranteed and Insured Loans
Vocational and Educational Counseling for Separating Service Members (Chapter 36)
Vocational Rehabilitation for Disabled Veterans
Vocational Training and Rehabilitation for Vietnam Veterans’ Children with Spina Bifida or Other Covered Birth Defects
Employment Discrimination Section 501 of the Rehabilitation Act (federal employees)
Other: Library service.

Appendix III: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

The following individuals made important contributions to this report: Shelia D. Drake, Erin M. Godtland, Joel A. Green, Mark de la Rosa, David J. Forgosh, Mark Trapani, Stuart M. Kaufman, and Daniel A. Schwimer.
In 2003, GAO designated modernizing federal disability programs as a high-risk area requiring urgent attention and organizational transformation to ensure that programs function as efficiently and effectively as possible. GAO found that although social attitudes have changed and medical advancements afford greater opportunities for people with disabilities to work, the Social Security Administration and the Department of Veterans Affairs have maintained an outmoded approach that equates disability with inability to work. We have prepared this report under the Comptroller General's authority as part of a continued effort to help policymakers better understand the extent of support provided by federal programs to people with disabilities and to assist them in determining how these programs could be better aligned to more effectively meet the needs of individuals with disabilities in the 21st century. This report identifies (1) the wide array of federal programs that serve people with disabilities and (2) the major challenges these federal programs face in the 21st century. In addition, GAO presents factors policymakers and program administrators should address in assessing whether, and how, these programs could be transformed to better meet 21st century challenges. More than 20 federal agencies and almost 200 programs provide a wide range of assistance to people with disabilities, including employment-related services, medical care, and monetary support. About half of these programs serve only people with disabilities, while the rest serve people both with and without disabilities. In fiscal year 2003, more than $120 billion in federal funds was spent on programs that only serve people with disabilities, with over 80 percent of these funds spent on monetary support. In addition, considerable funds are spent on people with disabilities by programs that also serve people without disabilities, like Medicare and Medicaid. 
The program challenges cited most frequently in our recent survey of nearly 200 programs serving people with disabilities are largely consistent with several of the key findings from past reports that led GAO to place federal programs supporting people with disabilities on its high-risk list. Both our recent survey and our past work have identified challenges in (1) ensuring timely and consistent processing of applications; (2) ensuring timely provision of services and benefits; (3) interpreting complex eligibility requirements; (4) planning for growth in the demand for benefits and services; (5) making beneficiaries or clients aware of benefits and services; and (6) communicating or coordinating with other federal disability programs. In light of the vital role federal programs play in providing assistance to people with disabilities and in helping to ensure an adequate national labor force, we have identified a number of factors that are important to consider in assessing the need for, and nature of, program transformations, including (1) program design issues; (2) fiscal implications of proposed program changes; and (3) feasibility of implementing program changes.
DOD’s Growing Reliance On Contractors

Contractors have an important role to play in the discharge of the government’s responsibilities, and in some cases the use of contractors can result in improved economy, efficiency, and effectiveness. However, in many cases contractors are used because the government lacks its own personnel to do the job. Long-standing problems with the lack of oversight and management of contractors are compounded by the growing reliance on them to perform functions previously carried out by government personnel. The government is relying on contractors to perform many tasks that closely support inherently governmental functions, such as contracting support, intelligence analysis, security services, program management, and engineering and technical support for program offices. We recently surveyed officials from 52 of DOD’s major weapons programs, who reported that over 45 percent of the program office staff was composed of individuals outside of DOD. Some program officials expressed concerns about having inadequate personnel to conduct their program office roles. In a prior review of space acquisition programs, we found that 8 of 13 cost estimating organizations and program officials believed the number of government cost estimators was inadequate and that 10 of those offices had more contractor personnel preparing cost estimates than government personnel. In general, I believe there is a need to focus greater attention on what type of functions and activities should be contracted out and which ones should not. Inherently governmental functions include activities that require either the exercise of discretion in applying government authority, or the making of value judgments in making decisions for the government; as such, they are required to be performed by government employees, not private contractors. 
The closer contractor services come to supporting inherently governmental functions, the greater the risk of contractors influencing the government’s control over and accountability for decisions that may be based, in part, on the contractor’s work. This situation may result in decisions that are not in the best interest of the government and American taxpayer, while also increasing overall vulnerability to waste, fraud, or abuse. The Federal Acquisition Regulation provides 19 examples of services and actions that may approach the category of inherently governmental because of the nature of the function, the manner in which the contractor performs the contracted services, or the manner in which the government administers contractor performance. These include acquisition support, budget preparation, engineering and technical services, and policy development. One way in which DOD has expanded the role of contractors is its use of a lead systems integrator for major weapons development. This approach allows one or more contractors to define a weapon system’s architecture and then manage both the acquisition and the integration of subsystems into the architecture. In such cases, the government relies on contractors to fill roles and handle responsibilities that differ from the more traditional prime contractor relationship, a scenario that can blur the oversight responsibilities between the contractor and federal program management officials. For example, the Army’s Future Combat Systems program is managed by a lead systems integrator that assumes to some extent the responsibilities of developing requirements, selecting major system and subsystem contractors, and making trade-off decisions among costs, schedules, and capabilities. While this management approach has some advantages for DOD, we found that the extent of contractor responsibility in many aspects of the Future Combat Systems program management process is a potential risk. 
Moreover, if DOD uses a lead systems integrator but does not provide effective oversight, DOD is vulnerable to the risk that the integrator may not make its decisions in a manner consistent with the government’s and taxpayers’ best interests, especially when faced with potential organizational conflicts of interest.

Potential Risks Associated with Use of Contractors

When the decision is made to use contractors in roles closely supporting inherently governmental functions, additional risks are present. Defense contractor employees are not subject to the same laws and regulations that are designed to prevent personal conflicts of interest among federal employees. Moreover, there is not a departmentwide requirement for DOD offices to employ personal conflict of interest safeguards for contractor employees, although new governmentwide policy implemented in November 2007 requires that certain contractors receiving awards worth more than $5,000,000 and 4 months of work have an ethics program. A separate proposed rule was recently published at the request of the Justice Department to amend the regulation to require that companies holding certain types of contracts disclose suspected violations of federal criminal law in connection with the award or performance of contracts, or face suspension or debarment. Public comments are due in January 2008. We will be issuing a report on personal conflicts of interest, as they pertain to defense contractor employees, shortly. In addition, personal services contracts are prohibited, unless authorized by statute. The government is normally required to obtain its employees by direct hire under competitive appointment or other procedures required by the civil service laws. GAO bid protest decisions also have determined that a personal services contract is one that, by its express terms or as administered, makes the contractor personnel appear to be, in effect, government employees. 
Whether a solicitation would result in a personal services contract must be judged in the light of its particular circumstances, with the key question being whether the government will exercise relatively continuous supervision and control over the contractor personnel performing the requirement. The Federal Acquisition Regulation lists six elements to be used as a guide in determining the existence of a personal services contract, which are shown in table 1. When contractors work side by side with government employees and perform the same mission-related duties, the risk associated with such contracts can be increased.

Contingency Situations Reveal Acquisition Workforce Shortfalls

In July 2006, we reported that DOD’s acquisition workforce is subject to certain conditions that increase DOD’s vulnerabilities to contracting fraud, waste, and abuse, including growth in overall contracting workload, pending retirement of experienced government contracting personnel, and a greater demand for contract surveillance because of DOD’s increasing reliance on contractors for services. Fraud is any intentional deception taken for the purpose of inducing DOD action or reliance on that deception. Fraud can be perpetrated by DOD personnel—whether civilian or military—or by contractors and their employees. Trust and access to funds and assets that come with senior leadership and tenure can become a vulnerability if the control environment in an organization is weak. We also need to target waste in government spending. Government waste is growing and far exceeds the cost of fraud and abuse. Several of my colleagues in the accountability community and I have developed a definition of waste, which is contained in appendix II. Although waste does not normally involve a violation of law, its effects can be just as profound. 
In response to our July 2006 report, DOD’s Panel on Contracting Integrity reported this month that it has identified 21 initial actions for implementation in 2008 that it expects will address areas of vulnerability in the defense contracting system that allow fraud, waste, and abuse to occur. Some amount of vulnerability to mismanagement, fraud, waste, or abuse will always be present in contracting relationships, even with rules and regulations in place to help prevent it. These vulnerabilities are more dramatically revealed in contingency situations, such as the conflicts in Iraq and the aftermath of Hurricane Katrina, when large amounts of money are quickly made available and actions are hurried. One very significant weakness is the condition of the government’s acquisition workforce. We and others have reported for a number of years on the risks posed by a workforce that has not kept pace with the government’s spending trends. The Acquisition Advisory Panel, for example, recently noted the significant mismatch between the demands placed on the acquisition workforce and the personnel and skills available within that workforce to meet those demands. To put it another way, at the same time that procurement spending has skyrocketed, fewer acquisition professionals are available to award and—just as importantly—administer contracts. Two important aspects of this issue are the numbers and skills of contracting personnel and DOD’s ability to effectively oversee contractor performance.

Acquisition Workforce Shortfalls

In its January 2007 report, the Acquisition Advisory Panel stated that the government’s contracting workforce was reduced in size in the 1990s, with DOD’s declining by nearly 50 percent due to personnel reductions during that time. Despite recent efforts to hire acquisition personnel, there remains an acute shortage of federal procurement professionals with between 5 and 15 years of experience. 
This shortage will become more pronounced in the near term because roughly half of the current workforce is eligible to retire in the next 4 years. We have long noted that DOD’s acquisition workforce needs to be made a priority. We have reported that DOD needs to have the right skills in its acquisition workforce to effectively implement best practices and properly manage the acquisition of goods and services. We have also observed that the acquisition workforce continues to face the challenge of maintaining and improving skill levels to use alternative contracting approaches introduced by acquisition reform initiatives of the past few decades. Recent developments indicate that the tide may be turning, with actions underway to address what is generally agreed to be a problematic state of the acquisition workforce. For example, DOD’s Panel on Contracting Integrity, in its 2007 report to Congress, identified the following focus areas for planned actions, all of which focus on acquisition workforce issues: reinforce the functional independence of contracting personnel, fill contracting leadership positions with qualified leaders, determine the appropriate size of the contracting workforce and ensure that it has the appropriate skills, and improve planning and training for contracting in combat and contingency environments. Also, the Commission on Army Acquisition and Program Management in Expeditionary Operations issued a report in November 2007, entitled “Urgent Reform Required: Army Expeditionary Contracting.” The commission found that the acquisition failures in expeditionary operations require a systemic fix of the Army acquisition system and cited the lack of Army leadership and personnel (military and civilian) to provide sufficient contracting support to either expeditionary or peacetime operations. It noted that only 3 percent of Army contracting personnel are active duty military and there are no longer any Army contracting career general officer positions. 
It found that what should be a core competence—contracting—is treated as an operational and institutional side issue. One general officer told the commission that “this problem is pervasive DOD-wide, because workload continues to go up while contracting and acquisition assets go down–there is a cost to these trends that is paid in risk, and we don’t realize how big the bill is until there’s a scandal.” The commission recommended increasing the stature, quantity, and career development of military and civilian contracting personnel. In response to the commission’s report, the Army approved the creation of an Army Contracting Command, which will fall under the Army Materiel Command and be led by a two-star general. The Army also plans to increase its contracting workforce by approximately 400 military personnel and 1,000 civilian personnel. We believe that, while there is no way to completely prevent fraud, waste, abuse, or poor decision making, increasing the numbers and skills of the acquisition workforce is critical to lessening the likelihood of future problems and effecting positive change. We must address this soon in order to prevent additional waste and increased risk.

Monitoring Contractor Performance

The role of the acquisition function does not end with the award of a contract. It requires continued involvement throughout contract implementation and closeout to ensure that contracted services are delivered according to the schedule, cost, quality, and quantity specified in the contract. In DOD, oversight—including ensuring that the contract performance is consistent with the description and scope of the contract—is provided by both contracting officers and the contracting officer’s representative (COR), typically a government employee with technical knowledge of the particular program. We have reported wide discrepancies in the rigor with which CORs perform their duties, particularly in unstable environments.
For example, in the aftermath of Hurricanes Katrina and Rita, the number of government personnel monitoring contracts was not always sufficient or effectively deployed to provide adequate oversight. Instability—such as when wants, needs, and contract requirements are in a state of flux—requires greater attention to oversight, which in turn relies on a capable government workforce. Unfortunately, attention to oversight and a capable government workforce have not always been evident in a number of instances, including during the Iraq reconstruction effort. We have reported that, particularly in the early phases of the conflict, the Army lacked an adequate acquisition workforce in Iraq to oversee the billions of dollars for which it was responsible. Further, Army personnel who were responsible for overseeing the performance of contractors providing interrogation and other services were not adequately trained to properly exercise their responsibilities. Contractor employees were stationed in various locations around Iraq, with no COR or assigned representative on site to monitor their work. An Army investigative report concluded that the lack of training for the CORs assigned to monitor contractor performance at Abu Ghraib prison, as well as an inadequate number of assigned CORs, put the Army at risk of being unable to control poor performance or become aware of possible misconduct by contractor personnel. DOD’s Panel on Contracting Integrity raised similar concerns, noting that contracting personnel in a combat/contingent environment do not always have functional independence. Contracting personnel, including CORs, are sometimes placed in positions where their direct supervisor is not in the contracting chain of command, thus possibly injecting risk into the integrity of the contracting process. The report found that CORs are not sufficiently trained and prepared, and sometimes lack support from their operational chain of command, to perform effectively. 
The Commission on Army Acquisition and Program Management in Expeditionary Operations also expressed concern about this issue, stating that after contract award there are “no resources trained” to monitor and ensure that the contractor is performing and providing the services needed by the warfighter. It stated that the inability to monitor contractor performance and enforce contracts are critical problems in an expeditionary environment and cited an example: “When the critical need is to get a power station running, and there are no resources to monitor contractor performance, only the contractor knows whether the completed work is being sabotaged nightly.” In December 2006, we reported that while DOD has taken some steps to improve its guidance on the use of contractors to support deployed forces, addressing some of the problems we have raised since the mid-1990s, it continues to face long-standing problems that hinder its management and oversight of contractors at deployed locations. DOD has not allocated the organizational resources to review and oversee issues regarding contractor support to deployed forces. While DOD’s new guidance is a noteworthy step, a number of problems we have previously reported on continue to pose difficulties for military personnel in deployed locations:

• Lack of visibility by senior leaders into the number and location of contractors and services provided at deployed locations.

• Inadequate number of oversight personnel at deployed locations.

• No systematic collection and sharing of DOD’s institutional knowledge on using contractors to support deployed forces.

• Limited or no training for military personnel on the use of contractors as part of their pre-deployment training or professional military education.

Cost of Contractors

A key assumption of many of the federal management reforms of the 1990s was that the cost-efficiency of government operations would be improved.
In addition to a desire for cost savings, the need to meet mission requirements while contending with limitations on government full-time equivalent positions and a desire to use contractors’ capabilities and skills in particular situations were factors in increasing the use of contractors. We recently reported that sufficient data are not available to determine whether increased service contracting has caused DOD’s costs to be higher than they would have been had the contracted activities been performed by uniformed or DOD civilian personnel. To learn more about the role and cost of contractors providing contracting support services, we have recently undertaken new work to look at contractors providing contract specialist services to the Army Contracting Agency’s Contracting Center for Excellence (CCE). This agency currently provides contracting support to 125 DOD customers in the National Capital Region, including the Joint Chiefs of Staff, Tricare Management Activity, Defense Information Systems Agency, DOD Inspector General, Pentagon Renovation Office, and Office of the Judge Advocate General. During fiscal year 2007, the agency awarded about 5,800 contract actions and obligated almost $1.8 billion. CCE is one of many government agencies that have turned to contractors to support its contracting functions. As a part of our review, we examined how the costs of CCE’s contractor contract specialists compared to those of its government contract specialists. Our analysis indicates that the government is paying more for the contractors. At CCE, the contractors are performing the same duties as their government counterparts and have been used in this role since 2003. We compared the costs of the government employees at the GS-12 and GS-13 levels to their equivalent contractor counterparts (referred to as contract specialists II and III) and found that, on average, the Army is paying up to 26 percent more for the contractors, as depicted in table 2.
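As a rough illustration of the kind of loaded-cost comparison summarized in table 2, the sketch below computes a fully loaded government hourly cost and a weighted average contractor rate. All dollar figures and hours here are hypothetical placeholders, not the actual CCE data.

```python
# Hypothetical loaded-cost comparison of government vs. contractor
# contract specialists, modeled on the cost elements described in the
# statement. Every number below is an illustrative assumption.

def loaded_hourly_cost(salary, benefits, training, travel, overhead, hours=2080):
    """Fully loaded hourly cost of a government employee: salary plus
    benefits, training, travel, and operations overhead, spread over
    the annual working hours."""
    return (salary + benefits + training + travel + overhead) / hours

def weighted_avg_rate(rates_and_hours):
    """Weighted average of contractor hourly rates, weighted by hours
    ordered from each contractor (the agency used two contractors at
    two different rates during the pay period)."""
    total_cost = sum(rate * hours for rate, hours in rates_and_hours)
    total_hours = sum(hours for _, hours in rates_and_hours)
    return total_cost / total_hours

gov = loaded_hourly_cost(salary=85_000, benefits=28_000, training=3_000,
                         travel=2_000, overhead=12_000)
ctr = weighted_avg_rate([(78.00, 1_200), (84.00, 800)])
premium = (ctr - gov) / gov * 100
print(f"government: ${gov:.2f}/hr, contractor: ${ctr:.2f}/hr, premium: {premium:.0f}%")
```

Costs the government incurs for both kinds of specialists (supplies, facilities, utilities, and so on) cancel out of such a comparison and are omitted here, mirroring the exclusions described in the analysis.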
Key elements of our analysis were:

• The loaded hourly cost of a government employee includes their salary, costs of the government’s contributions to the employee’s benefits, the costs to train the employee, the employee’s travel expenses, and the costs of operations overhead—which are the costs of the government employees that provide support services, such as budget analysts or human capital staff.

• Government employee salaries and benefits were based on actual data from one pay period. These data were then compared to the hourly cost of contractors ordered during the month of that pay period.

• The cost of a contractor employee is the fully loaded hourly rate the government pays for these services. We reported the weighted average of those hourly rates because the agency used two contractors at two different rates during the pay period.

• We excluded the costs that the government incurs for both government and contractor-provided specialists. These include the costs of supplies, facilities, utilities, information technology, and communications costs.

This example is one illustrative case. In another example, officials at the Missile Defense Agency told us last year that, according to their calculations, the average cost of their government employees was $140,000, compared with an average cost of $175,000 for their contractors—who accounted for 57 percent of their 8,186 personnel positions. We will continue to do work in this area.

Concluding Points

In closing, I believe that we must engage in a fundamental re-examination of when and under what circumstances we should use contractors versus civil servants or military personnel. This is a major and growing concern that needs immediate attention. Once the decision to contract has been made, we must address challenges we have observed in ensuring proper oversight of these arrangements—especially considering the evolving and enlarging role of contractors in federal acquisitions.
And we must elevate the acquisition function within the department. I would like to emphasize the critical need for actions to be taken to improve the acquisition workforce. The acquisition workforce’s workload and complexity of responsibilities have been increasing without adequate agency attention to the workforce’s size, skills and knowledge, and succession planning. DOD is experiencing a critical shortage of certain acquisition professionals with technical skills related to systems engineering, program management, and cost estimation. Without adequate oversight by and training of federal employees overseeing contracting activities, reliance on contractors to perform functions that once would have been performed by members of the federal workforce carries risk. As a final note, we are continuing to explore acquisition workforce issues in ongoing work and we hope to be making recommendations on these issues. Mr. Chairman and Members of the subcommittee, this concludes my statement. I would be happy to answer any questions you might have.

Appendix I: Systemic Acquisition Challenges at the Department of Defense

1. Service budgets are allocated largely according to top-line historical percentages rather than Defense-wide strategic assessments and current and likely resource limitations.

2. Capabilities and requirements are based primarily on individual service wants versus collective Defense needs (i.e., based on current and expected future threats) that are both affordable and sustainable over time.

3. Defense consistently overpromises and underdelivers in connection with major weapons, information, and other systems (i.e., capabilities, costs, quantities, schedule).

4. Defense often employs a “plug and pray approach” when costs escalate (i.e., divide total funding dollars by cost per copy, plug in the number that can be purchased, then pray that Congress will provide more funding to buy more quantities).

5. Congress sometimes forces the department to buy items (e.g., weapon systems) and provide services (e.g., additional health care for non-active beneficiaries, such as active duty members’ dependents and military retirees and their dependents) that the department does not want and we cannot afford.

6. DOD tries to develop high-risk technologies after programs start instead of setting up funding, organizations, and processes to conduct high-risk technology development activities in low-cost environments (i.e., technology development is not separated from product development). Program decisions to move into design and production are made without adequate standards or knowledge.

7. Program requirements are often set at unrealistic levels, then changed frequently as recognition sets in that they cannot be achieved. As a result, too much time passes, threats may change, or members of the user and acquisition communities may simply change their mind. The resulting program instability causes cost escalation, schedule delays, smaller quantities, and reduced contractor accountability.

8. Contracts, especially service contracts, often do not have definitive or realistic requirements at the outset in order to control costs and facilitate accountability.

9. Contracts typically do not accurately reflect the complexity of projects or appropriately allocate risk between the contractors and the taxpayers (e.g., cost plus, cancellation charges).

10. Key program staff rotate too frequently, thus promoting myopia and reducing accountability (i.e., tours based on time versus key milestones). Additionally, the revolving door between industry and the department presents potential conflicts of interest.

11. The acquisition workforce faces serious challenges (e.g., size, skills, knowledge, succession planning).

12. Incentive and award fees are often paid based on contractor attitudes and efforts versus positive results (i.e., cost, quality, schedule).

13. Inadequate oversight is being conducted by both the department and Congress, which results in little to no accountability for recurring and systemic problems.

14. Some individual program and funding decisions made within the department and by Congress serve to undercut sound policies.

15. Lack of a professional, term-based chief management officer at the department serves to slow progress on defense transformation and reduce the chance of success in the acquisitions/contracting and other key business areas.

Appendix II: Definition of Waste

Several of my colleagues in the accountability community and I have developed a definition of waste. As we see it, waste involves the taxpayers in the aggregate not receiving reasonable value for money in connection with any government-funded activities due to an inappropriate act or omission by players with control over or access to government resources (e.g., executive, judicial or legislative branch employees; contractors; grantees; or other recipients). Importantly, waste involves a transgression that is less than fraud and abuse. Further, most waste does not involve a violation of law, but rather relates primarily to mismanagement, inappropriate actions, or inadequate oversight.
Illustrative examples of waste could include the following:

• unreasonable, unrealistic, inadequate, or frequently changing requirements;

• proceeding with development or production of systems without achieving an adequate maturity of related technologies in situations where there is no compelling national security interest to do so;

• the failure to use competitive bidding in appropriate circumstances;

• an over-reliance on cost-plus contracting arrangements where reasonable alternatives are available;

• the payment of incentive and award fees in circumstances where the contractor’s performance, in terms of costs, schedule, and quality outcomes, does not justify such fees;

• the failure to engage in selected pre-contracting activities for contingent events; and

• congressional directions (e.g., earmarks) and agency spending actions where the action would not otherwise be taken based on an objective value and risk assessment and considering available resources.
The Department of Defense's (DOD) spending on goods and services has grown significantly since fiscal year 2000, to well over $314 billion annually. GAO has identified DOD contract management as a high-risk area for more than a decade. With awards to contractors large and growing, DOD will continue to be vulnerable to contracting fraud, waste, abuse, or misuse of taxpayer dollars. Prudence with taxpayer funds, widening deficits, and growing long-range fiscal challenges demand that DOD maximize its return on investment, while providing warfighters with the needed capabilities at the best value for the taxpayer. This statement discusses (1) the implications of DOD's increasing reliance on contractors to fill roles previously held by government employees, (2) the importance of the acquisition workforce in DOD's mission and the need to strengthen its capabilities and accountability, and (3) assumptions about cost savings related to the use of contractors versus federal employees. This statement is based on work GAO has ongoing or has completed over the past several years covering a range of DOD contracting issues. DOD has increasingly turned to contractors to fill roles previously held by government employees and to perform many functions that closely support inherently governmental functions, such as contracting support, intelligence analysis, program management, and engineering and technical support for program offices. This trend has raised concerns about what the proper balance is between public and private employees in performing agency missions and the potential risk of contractors influencing the government's control over and accountability for decisions that may be based, in part, on contractor work. Further, when the decision is made to use contractors in roles closely supporting inherently governmental functions, additional risks are present.
Contractors are not subject to the same ethics rules as government employees even when doing the same job, and the government risks entering into an improper personal services contract if an employer/employee relationship exists between the government and the contractor employee. DOD's increasing reliance on contractors exacerbates long-standing problems with its acquisition workforce. GAO has long reported that DOD's acquisition workforce needs to have the right skills to effectively implement best practices and properly manage the acquisition of goods and services. Weaknesses in this area have been revealed in recent contingency situations, but they are present in nonemergency circumstances as well, with the potential to expose DOD to fraud, waste, and abuse. It is important to note that the role of the acquisition function does not end with the award of a contract. Continued involvement of the workforce throughout contract implementation and closeout is needed to ensure that contracted services are delivered according to the schedule, cost, quality, and quantity specified in the contract. GAO has in the past several years reported wide discrepancies in the rigor with which contracting officer's representatives perform these duties, particularly in unstable environments such as the conflict in Iraq and the aftermath of Hurricane Katrina. A key assumption of many of the federal management reforms of the 1990s was that the cost-efficiency of government operations could be improved through the use of contractors. GAO recently reported that sufficient data are not available to determine whether increased service contracting has caused DOD's costs to be higher than they would have been had the contracted activities been performed by uniformed or DOD civilian personnel.
GAO recently examined in depth the cost of contractor versus government contract specialists at the Army's Contracting Center for Excellence and found that the Army is paying up to 26 percent more for the contractors as compared to their government counterparts.
Background

A member of the uniformed services—including the Air Force, Army, Coast Guard, Marine Corps, National Oceanic and Atmospheric Administration, Navy, and Public Health Service—who is entitled to basic pay is also eligible to receive the Basic Allowance for Housing, subject to certain exceptions. The Secretary of Defense—through the Defense Travel Management Office within the Office of the Under Secretary of Defense (Personnel and Readiness)—sets the housing allowance rates for all personnel who receive the allowance. According to the Defense Travel Management Office, senior executives and flag officers from the Coast Guard, Public Health Service, and the National Oceanic and Atmospheric Administration Corps, in addition to the three military departments, provide oversight of the housing allowance program through the Per Diem Travel and Transportation Allowance Committee. The legislation that created the Basic Allowance for Housing program, Section 603 of the National Defense Authorization Act for Fiscal Year 1998, among other things, consolidated two authorities for providing housing allowances—the Basic Allowance for Quarters program and the Variable Housing Allowance program—and changed the way DOD calculates housing allowances to be based on adequate housing for civilians with comparable income levels in the same area, rather than on service members’ reported housing expenditures, which was a major factor in calculating the Variable Housing Allowance. According to DOD, housing allowance rates based on the market costs of rental housing ensure a better correlation between allowance payments and rental costs. In January 2000, the Secretary of Defense announced a quality-of-life initiative to increase housing allowances gradually over a 5-year period to eliminate service members’ out-of-pocket housing costs, which averaged more than 18 percent in 2000.
Figure 1 shows the amounts DOD obligated for the housing allowance and the number of service members who received the allowance from fiscal years 2000 through 2010. Housing allowance rates vary based on a service member’s pay grade, dependency status, and geographic location. DOD established six housing profiles, ranging from a one-bedroom apartment to a four-bedroom single-family detached house, and associated each profile with a military pay grade. Service members with dependents receive a higher housing allowance than those in the same pay grade and location without dependents. To set housing allowance rates by geographic area, DOD established 364 housing areas within the United States. These areas are generally within a 20-mile or 1-hour commute from military installations. In total, DOD calculates nearly 20,000 separate allowance rates each year. To set these rates, DOD uses a yearlong multistep process that involves hundreds of officials from installation housing offices, the Defense Travel Management Office, compensation offices in each military service, and a contractor that is a recognized leader in the field of collecting cost-of-living data. Each year, installation housing officials submit rental data on the six housing profiles in the 364 housing areas to the contractor. The contractor then verifies the data; collects additional rental data on its own; and determines average rental, utility, and renter’s insurance costs for each housing profile in the 364 housing areas. The contractor then provides the housing cost data to the Defense Travel Management Office, which calculates housing allowance rates for each pay grade for service members with and without dependents in each housing area. Figure 2 shows the annual housing allowance rate-setting process. (See appendix II for a more detailed description of the annual housing allowance rate-setting process.)
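The rate structure described above (pay grade and dependency status select one of the six housing profiles, and the housing area supplies that profile's cost) can be sketched as a simple lookup. The profile names, the grade-to-profile mapping, and all costs below are hypothetical placeholders, not DOD's actual figures.

```python
# Illustrative sketch of the housing allowance rate structure: six
# profiles, a grade/dependency mapping to a profile, and per-area
# median costs (rent + utilities + renter's insurance). All values
# are hypothetical assumptions for illustration.

PROFILES = ["1BR apartment", "2BR apartment", "2BR townhouse",
            "3BR townhouse", "3BR single-family", "4BR single-family"]

# Hypothetical mapping of selected (pay grade, has_dependents) pairs
# to profile indices.
GRADE_TO_PROFILE = {
    ("E-4", False): 0,
    ("E-7", False): 2,
    ("E-7", True): 3,
    ("O-5", True): 5,
}

# Hypothetical median monthly costs per profile for one housing area.
AREA_COSTS = {"Las Vegas, NV": [950, 1100, 1250, 1450, 1700, 2000]}

def allowance(area, grade, has_dependents):
    """Monthly allowance lookup: grade and dependency status select a
    profile; the housing area supplies that profile's median cost."""
    profile = GRADE_TO_PROFILE[(grade, has_dependents)]
    return AREA_COSTS[area][profile]

print(allowance("Las Vegas, NV", "E-7", True), "per month (hypothetical)")
```

With 364 housing areas, the full set of pay grades, and two dependency statuses, a table of this shape yields the nearly 20,000 separate rates the text describes.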
Housing allowance rates in a housing area can fluctuate from year to year since local housing costs change over time. If housing allowance rates in an area increase, then a service member stationed in that area will receive the increased rate. However, if housing allowance rates in an area decrease from one year to the next, the service member retains the higher housing allowance rate, known as “rate protection,” as long as their location and dependency status remain unchanged and their pay grade does not decrease. This protects service members already committed to a lease. For example, at Nellis Air Force Base near Las Vegas, Nevada, housing allowances decreased between 2010 and 2011 for all pay grades and dependency statuses. The monthly housing allowance for an enlisted service member in the E-7 pay grade without dependents decreased from $1,200 to $1,107. If a service member stationed at Nellis Air Force Base in 2010 with this pay grade and dependency status remained at the installation in 2011 with the same pay grade and dependency status, then the service member’s housing allowance would remain $1,200. However, a service member at the same pay grade and dependency status that relocated to Nellis Air Force Base in 2011 would receive a monthly housing allowance of $1,107. DOD policy is to rely on the private sector as the primary source of housing for personnel normally eligible to draw a housing allowance. While DOD may require certain service members to live on base, such as key personnel and most junior-enlisted personnel without dependents, about two-thirds of service members and their families in the United States choose to live off base in the local community. If a service member chooses to live on base in privatized family housing, the service member pays the privatization developer rent that is usually equal to the housing allowance. 
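The rate-protection rule from the Nellis Air Force Base example can be sketched as follows; the function signature and the simplified grade-change flag are assumptions for illustration, not DOD's implementation.

```python
# Minimal sketch of "rate protection": a member keeps the higher prior
# rate when local rates fall, as long as location and dependency status
# are unchanged and the pay grade did not decrease. The boolean inputs
# are a simplification of the actual eligibility rules.

def protected_rate(prior_rate, new_rate, same_location,
                   same_dependency, grade_decreased):
    """Return the member's allowance after an annual rate change."""
    eligible = same_location and same_dependency and not grade_decreased
    if eligible:
        return max(prior_rate, new_rate)  # keep the higher of the two rates
    return new_rate  # new arrivals or changed status get the current rate

# Nellis AFB example from the text: E-7 without dependents, 2010 -> 2011.
stayed = protected_rate(1200, 1107, True, True, False)    # member who stayed
arrived = protected_rate(0, 1107, False, True, False)     # member who relocated in
print(stayed, arrived)  # 1200 1107
```

Note that protection is not a one-way ratchet downward only: if rates rise, `max` simply returns the new, higher rate, matching the text's statement that members receive increases.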
While DOD calculates the housing allowance based on rental market costs, service members may choose to apply their allowance toward purchasing a home or renting a housing unit that could be more or less than their housing allowance. Service members are permitted to keep any portion of their housing allowance not spent on housing and conversely will have to use other funds to pay housing costs that exceed their allowance. Several DOD initiatives are contributing to changes in housing needs in the local communities due to the relocation of military personnel, including:

• Grow the Force: In January 2007, the President announced and Congress approved an increase in the Army end strength by more than 74,000 active duty, National Guard, and reserve personnel and the Marine Corps end strength by 27,000 Marines through the Grow the Force initiative. The services met these increased end strength goals by 2009.

• BRAC: Several installations are experiencing growth due to implementation of the 2005 BRAC round. Under the 2005 round, DOD is implementing 182 recommendations, which must be completed by the statutory deadline of September 15, 2011. These recommendations include a large number of realignments, prompting significant personnel movements among installations.

• Army Modularity: The Army is restructuring its force as it implements force modularity, which entails converting units to brigade combat teams, resulting in some installations receiving one or more of these brigade combat teams.

• Global Defense Posture and Realignment: DOD began to realign its overseas basing structure in 2004 and planned to relocate about 44,500 Army personnel from overseas to domestic installations by 2013.

• Iraq Drawdown: DOD is relocating many troops from Iraq to domestic installations, although the net growth at these installations may be offset by troops deploying to Afghanistan.
As a result of these initiatives, DOD’s Office of Economic Adjustment has identified 26 domestic installations significantly impacted by the growth in military populations. This growth has raised several concerns, one of which is the availability of housing on base and in the communities near installations. We have previously reported on the growth-related challenges at growth installations and in the communities surrounding them. Specifically, we found that many communities will face growth-related challenges in the short term, including challenges to identify and provide additional infrastructure—such as schools, roads, housing, and other services—to support the expected population growth. Figure 3 shows the location of growth installations as defined by DOD’s Office of Economic Adjustment as of January 2011. The National Defense Authorization Act for Fiscal Year 2010 required the Secretary of Defense to conduct a review of two aspects of the housing allowance program and submit a report by July 1, 2010. DOD hired a contractor with expertise in human services consulting to undertake the study and perform the analyses that served as the basis for DOD’s report. DOD submitted its report to Congress in June 2010. DOD’s report contained a review of the housing profiles used to determine housing allowance rates and a review of the process and schedule for collecting housing data that provide the basis for setting DOD’s housing allowance rates. DOD’s 2010 report to Congress states that overall housing allowance rates are generally comparable to civilian housing expenditures for most pay grades but are not identical. Also, data the contractor provided to DOD for its use in preparing its report to Congress do not show a clear trend in housing choices by civilians that would support changing the profiles.
Defense Travel Management Office officials said that they study the relationship between housing choices of civilians and the housing allowance rate about every 3 years, but have not made changes to the housing profiles since implementing the current rate-setting process. Although the contractor analyzed possible alternatives to improve the rate- setting process, neither the contractor nor DOD’s report to Congress recommended any changes to the current process. DOD’s Data-Intensive Process Helps to Ensure the Accuracy of Housing Allowance Rates, and Some Enhancements May Further Strengthen the Process DOD uses a data-intensive process to set housing allowance rates that officials said generally meets the goals of the program, although enhancements related to providing information to installation officials and service members, defining a key term for data collection, and developing more accurate cost estimates for the allowance to use in budget requests, could further strengthen the process. DOD Uses a Data-Intensive Process to Set Housing Allowance Rates DOD uses a data-intensive process to set housing allowance rates that includes a number of quality assurance steps designed to help ensure the reasonable accuracy of the rates, such as: Involving installation officials in the data collection process: The housing office and command leadership at each installation have the opportunity to submit properties for inclusion in the data used to set the rates and identify areas for exclusion from the data. Data collection efforts involve numerous installation officials, with officials from the five installations we reviewed estimating that they spent from 12 to 275 staff days per year on data collection tasks. 
By involving installation officials in the data collection process, DOD benefits from local expertise to help ensure that the properties used to set the housing allowance rates are adequate in terms of the quality of the properties and appropriate for military personnel of the designated rank.

Reviewing the data before data collection is complete: After installations submit their first round of housing cost data, representatives from each of the military services meet with the Defense Travel Management Office and the data collection contractor to review the submitted data. The service representatives generally check that each of the installation housing offices submitted data and that the data submitted are reasonable when compared to past rental rates. If a service representative identifies an installation that has not submitted data or anomalies in the data, the service representative typically contacts the installation to address the situation. The service representatives and officials from the Defense Travel Management Office said that these reviews have been effective at verifying that the installations are following DOD's data submission guidance and determining whether the data appear reasonable to include in the rate-setting analysis.

Verifying the rental data: The contractor hired by DOD to analyze the data contacts landlords of installation-submitted properties to verify that the rental rates are current and accurate and that the property is located within the boundaries of the military housing area. This verification process also helps to ensure the accuracy of the data.

Officials we interviewed generally stated that DOD's rate-setting process is an effective process that meets the purpose and goal of the program, which is to provide fair housing allowances to service members and to help service members cover the costs of housing in the private sector.
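As a rough illustration of the kind of reasonableness check described above, the sketch below flags installations that submitted no data or whose median submitted rent deviates sharply from the prior year's figure. The 25 percent threshold and all data are illustrative assumptions, not DOD's actual criteria.

```python
# Hypothetical sketch of a data-submission reasonableness check: flag
# installations with no data, or with a median submitted rent that moved
# more than a chosen tolerance versus last year. Illustrative only.
from statistics import median

def flag_submissions(submitted, last_year_median, tolerance=0.25):
    """Return (installation, reason) pairs needing follow-up by a service representative."""
    flagged = []
    for installation, rents in submitted.items():
        if not rents:
            flagged.append((installation, "no data submitted"))
            continue
        change = abs(median(rents) - last_year_median[installation]) / last_year_median[installation]
        if change > tolerance:
            flagged.append((installation, f"median rent moved {change:.0%}"))
    return flagged

print(flag_submissions(
    {"Fort A": [900, 950, 1000], "Base B": [], "Post C": [1500, 1600]},
    {"Fort A": 940, "Base B": 800, "Post C": 1000},
))
# [('Base B', 'no data submitted'), ('Post C', 'median rent moved 55%')]
```

In practice the service representatives make this judgment by inspection rather than by a fixed threshold; the sketch only shows the shape of the comparison.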
These officials identified few potential changes to the rate-setting process, in part because DOD has implemented several changes to the rate-setting process in the decade since establishing the program. For example, in 2003, the contractor started comparing the rental data submitted by the installation housing offices to the data the contractor collected as an additional quality assurance step. In 2011, the data collection contractor began a comprehensive review of the housing area boundaries to verify that the housing areas are accurate. Along the same lines, DOD's 2010 report to Congress noted that its review uncovered relatively few complaints or concerns with the rate-setting process, that participants believe the current process works well, and that problems have been addressed through refinements to the process. Additionally, our 2001 review found that the contractor followed reasonable procedures to ensure that the housing data collected were accurate. DOD still uses the same contractor for data collection and the fundamental procedures that we reviewed in 2001 are still in place or have been enhanced. In appendix II of this report, we have summarized DOD's data-intensive process for setting housing allowance rates. DOD sets its housing allowance rates for an area based, in part, on current market rental cost data, which DOD collects annually for each housing area. Thus, any cost increases—due to changes in the supply of or demand for housing or any other reason—should be captured through the annual rate-setting process, according to Defense Travel Management Office and service compensation officials. These officials noted that DOD does not explicitly consider the supply of or demand for housing, including changes due to planned population changes at an installation, when determining housing allowance rates, noting that revising housing allowance rates to attempt to account for installation population changes would likely lead to inaccurate rates.
From 2006 through 2009, DOD had the authority to temporarily increase housing allowance rates in disaster areas or areas with installations that experienced a sudden population increase. Defense Travel Management Office officials stated that three installations—Fort Riley, Kansas; Cannon Air Force Base, New Mexico; and Fort Drum, New York—inquired about the authority, but the regular rate-setting process was able to address the changes in housing costs and the authority was not used. According to these officials, population changes to date have not occurred so rapidly that they could not be addressed through the regular rate-setting process, and they did not expect to need to implement the provision in response to population changes. However, they noted that they cannot speculate on the effects of a natural disaster on housing costs, so having the authority to react to such an event would be desirable.

Installation Officials and Service Members Do Not Have Access to All Three Housing Allowance Rate Cost Components

Installation officials and service members do not have access to information on the amount or proportion of the housing allowance rate derived from each of the three costs that comprise the housing allowance. As part of the process to determine housing allowance rates, the contractor calculates the median monthly rental costs, average monthly renter's insurance costs, and average monthly utility costs for each of the six housing profiles, based on local rental market costs. DOD sums these figures to determine the total housing allowance rate for each of the housing profiles, and then uses that data to determine a single figure for the housing allowance for each pay grade. Because DOD issues a single figure for the housing allowance rate for each pay grade, installation officials and service members do not know the amounts of the three costs that comprise the total housing allowance rate.
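The summation described above is simple arithmetic; the following sketch, using entirely hypothetical component costs rather than actual DOD figures, shows how a single published rate conceals its three components:

```python
# Minimal sketch of how a total housing allowance rate is assembled for one
# housing profile: median rent + average utilities + average renter's insurance.
# All dollar figures below are hypothetical illustrations, not actual DOD rates.

def total_allowance_rate(median_rent, avg_utilities, avg_renters_insurance):
    """Sum the three cost components into a single monthly rate."""
    return median_rent + avg_utilities + avg_renters_insurance

# Hypothetical profile costs for one housing area (dollars per month)
rate = total_allowance_rate(median_rent=1200.0,
                            avg_utilities=290.0,
                            avg_renters_insurance=15.0)
print(rate)                         # 1505.0
print(round(290.0 / rate, 2))       # utilities' share of the total: 0.19
```

Only the final `rate` is published for each pay grade, which is why neither installation officials nor service members can see the component amounts.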
Without access to information on the three costs that comprise the housing allowance rate, installation officials cannot help ensure the accuracy of the total housing allowance rates. The data collection guidance provided to the military installations states that the installations’ expertise and knowledge of the local market is crucial to the rate-setting process. Installation officials participate in the rate-setting process by submitting data on rental costs in the area. However, DOD is not taking full advantage of the installations’ expertise and knowledge of the local market to help ensure the accuracy of the total housing allowance rates, and particularly the utility and renter’s insurance cost data. Rather, the data collection contractor determines the average utility and renter’s insurance costs in each housing area for each housing profile through databases. Furthermore, the contractor collects additional data on rental costs in each housing area to supplement the data that installation officials submit. Officials from the Defense Travel Management Office said that installation officials do not have access to the final calculations of median rent, average utilities, and average renter’s insurance costs since they believe most of the officials’ questions about housing allowance rates can be addressed without providing such detail. While we did not identify specific concerns with the accuracy of these databases or the rental data collected by the contractor, installation officials we interviewed raised concerns that they do not have access to information that would allow them to help ensure the accuracy of the costs and the resulting housing allowance rates. 
Officials we interviewed at the five installations said that the total housing allowance rates in their area generally appeared to be accurate for most of the housing profiles, but said that they could not fully confirm the accuracy of the rates without additional information on the three components—rent, utilities, and renter’s insurance—used to calculate the rate. For example, an official at one installation noted that the housing allowance for the area appeared slightly lower than the average housing costs in the area and originally questioned the accuracy of the utility costs for the area. When notified that utility costs comprised about 25 percent of the total housing allowance in 2011 for that housing area, the housing official said the utility cost used in the rate calculation appeared reasonable for the amount that service members are paying for utilities, but noted that the remaining amount of the allowance was significantly lower than the rental data the installation submitted and the rental costs in the area. While DOD’s report to Congress does not mention issues related to providing additional information to installation officials or service members, the contractor’s report that served as the basis of DOD’s report noted the need for a feedback mechanism to allow installations to see the average cost data prior to housing allowance rates being calculated. Additionally, without access to information on the three costs that comprise the housing allowance rate, service members cannot take such costs into full consideration when choosing off-base housing, particularly when moving into a new area. Overall, rental costs comprise the majority of the housing allowance rate, averaging more than 75 percent of the rate across all housing areas and profiles, and the utility costs averaged more than 20 percent of the housing allowance rate with renter’s insurance costs comprising the remaining portion. 
However, these averages vary by housing area and profile, as is to be expected given the unique local housing markets. Our analysis shows that the local utility costs DOD used to calculate the 2011 housing allowance rates are within 5 percent of the housing profile’s average in more than two-thirds of areas, but the utility costs ranged from nearly 8 percent to nearly 40 percent of the total housing allowance, which could be a significant cost difference when moving between housing areas and could affect service members’ decision-making process for choosing affordable housing. For example, if an enlisted service member with dependents in the E-6 pay grade relocated from Schofield Barracks, Hawaii, to Fort Knox, Kentucky, the percentage of the housing allowance rate calculated from the area’s utility costs would increase from about 15 percent of the total housing allowance at Schofield Barracks to about 26 percent at Fort Knox. Similarly, if a Marine with dependents in the same pay grade relocated from Camp Pendleton, California, to Marine Corps Air Station Cherry Point, North Carolina, the percentage of the housing allowance calculated from local utilities would increase from about 15 percent of the total housing allowance at Camp Pendleton to about 24 percent at Marine Corps Air Station Cherry Point. Without knowledge of the average utility costs as a percentage of the housing allowance in the new area, the service member may make decisions on where to live and how much of the housing allowance to spend on rent, utilities, and renter’s insurance based on his or her experience at the previous duty location. In that case, the service member in either of the above examples would underestimate the amount needed to pay the average utilities at the new duty location by more than $100 per month, or about 10 percent of the total housing allowance at the new locations, and would have to pay the excess amount from other income sources. 
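The arithmetic behind such a shortfall can be shown with a small sketch. The rates and utility shares below are hypothetical stand-ins (not actual DOD figures), chosen only to mirror a move from an area with roughly a 15 percent utility share to one with roughly a 26 percent share:

```python
# Illustration of why the utility share matters when relocating. A member who
# budgets utilities based on the share at the old duty station underestimates
# average utilities at the new one. All inputs are hypothetical assumptions.

def utility_budget_gap(old_util_share, new_rate, new_util_share):
    """Monthly dollars by which utilities are underestimated when a member
    applies the old station's utility share to the new station's allowance."""
    expected_utilities = old_util_share * new_rate   # budgeting by old share
    actual_utilities = new_util_share * new_rate     # area's actual average share
    return actual_utilities - expected_utilities

gap = utility_budget_gap(old_util_share=0.15,
                         new_rate=1200.0,
                         new_util_share=0.26)
print(round(gap))  # 132 dollars per month, paid from other income sources
```

The gap scales with both the allowance rate and the difference in shares, which is why the report's examples show shortfalls of more than $100 per month.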
Housing officials at four of the five installations we interviewed said that without information on the breakdown of estimated costs for utilities and renter’s insurance, some landlords view the overall housing allowance rate as the market rental rate and set rental rates equal to the full housing allowance rate for a specific pay grade without regard to utility expenses that would also need to be paid. Also, some service members choose housing in which the rental cost is equal to the full housing allowance rate without fully understanding the financial implications when rent does not include the additional costs of utilities or renter’s insurance. A service member paying more than the allowance rate to obtain housing does not necessarily mean that the housing allowance rate in an area is not accurate. The housing allowance rate is set based on the average housing costs in an area and most service members in an area will not have housing costs exactly equal to the average. A service member who chooses housing in which costs exceed these averages will have to pay more than the housing allowance for some housing costs and, conversely, a service member with costs below the averages can keep the remaining amount. We have previously reported on the importance of educating service members on their compensation, specifically noting that past studies suggest that revealing more information about components of compensation has a greater impact on the component’s satisfaction rate than the actual amount itself. Officials from the Defense Travel Management Office said they believe that publishing information on the three costs that comprise the total housing allowance may be distracting to the service members or may lead to service members’ feeling that their choices are restricted, as few service members have housing costs that exactly match the costs used to calculate the allowance. 
Installation housing officials and an official from one military service that we talked to generally disagreed with this view and said that the additional information would allow service members to make better-informed decisions rather than constraining service members’ housing choices. During our review, DOD began to make available some high-level information about utility costs to service members and installation officials upon request. Specifically, DOD’s data collection contractor updated its information sheet on the methodology for calculating utility costs, which each of the military services’ housing allowance representatives have and can distribute when asked about utilities. DOD’s service housing allowance representatives said that they plan to provide the utilities information sheet when responding to installation officials’ questions on utility costs. The Army representative said that the information sheet could be distributed to installation officials, service members, family members, or the general public in response to questions. The Air Force representative said the Air Force plans to distribute the information sheet along with the data collection guidance to all of its installation housing offices. The updated information sheet states that a nationwide percentage of the portion of the housing allowance for utilities does not exist, but provides a range for expected monthly utility costs ($120 to about $600) and an average ($294) across all of the housing profiles and geographic areas, noting that nearly one-quarter of housing profiles are within 10 percent of the average. 
However, we believe that providing such a wide range of expected costs, as opposed to information more tailored to a specific geographic area and housing profile, does not provide installation officials with information that would allow them to help ensure the accuracy of the rates and does not provide service members with information that would help them make informed and fiscally responsible choices.

Definition of "Available" May Limit the Number of Properties Submitted in the Rate-Setting Process

Officials at four of the five installations we interviewed said that, in areas with low vacancy rates, it can be difficult to find rental properties for some housing profiles that are adequate and meet the definition of currently available housing used in the data-gathering process. These officials noted that rental properties that meet the definition of available in such markets tend to be inadequate or undesirable for a variety of reasons, including high rental costs, poor physical condition of the property, or location in a high-crime area and, therefore, are not representative of housing costs in the area. The data collection guidance provided to military installations defines "available" properties to include properties that are currently on the rental market or have been on the market within 4 to 6 weeks prior to data submission. The law governing the housing allowance program requires that rates be based on the costs of adequate housing for civilians with comparable incomes in the same area.
However, because the definition of "available" used in the data collection process limits data submission to only those properties that were available for rent within 4 to 6 weeks prior to data submission, the properties that some installations submit may not be fully representative of current market costs for adequate housing for comparable civilians in the same area, or properties that are representative of such costs may be excluded, increasing the possibility of inaccurate rates for the area. While some Defense Travel Management Office and military service housing allowance officials questioned whether revising the definition of "available" would lead to additional properties being submitted during the data collection process, officials involved in the data collection process at four out of the five installations we interviewed and one of the military services indicated that extending the definition of available—up to 90 days, for example—would allow installations to submit cost data on additional rental properties, which could improve the accuracy of the housing allowance rates. For example, housing officials at Fort Drum, New York, told us that low vacancy rates in the area make it difficult to collect enough housing cost data on properties available only within a 4- to 6-week window. As a result, they questioned the accuracy of the data they submitted, stating that if they were allowed to include housing cost data spanning a longer availability timeframe, they would have more assurance that the data they submitted would result in a more accurate cost estimate. We recognize that revising the definition of "available" for data collection has some potential drawbacks; however, it is unclear to us whether these drawbacks would outweigh the potential benefits of improved accuracy of the rates from the submission of additional adequate properties.
If DOD expanded the definition of "available" used in the data collection process, then rental cost data might not be as current. Using the current definition, rental rates for properties available 6 weeks prior to the first data submission are more than 9 months old when the housing allowance rates become effective. Revising the definition of "available" to 90 days would mean that rental rates for the earliest properties would be nearly a year old when rates became effective. However, the extent to which rental costs would significantly change in an additional 6 weeks is unclear. Additionally, Defense Travel Management Office officials and a representative of the data collection contractor noted that as rental rates get older, it becomes increasingly difficult to verify the rental rates with landlords for properties available more than 6 weeks prior to data submission. If the contractor cannot verify the rental rates, then the property cannot be included in the data used to set the housing allowance rates, which could lessen the benefit gained from submitting additional properties.

DOD Has Consistently Underestimated Costs of Housing Allowances in Its Budget Estimates

Since fiscal year 2006, DOD has consistently underestimated the total costs of paying the housing allowance to service members by $820 million to $1.3 billion each year—or about 6 to 11 percent of the amount estimated—meaning that DOD has spent more on the housing allowance than estimated. Figure 4 shows the difference between the amount that DOD estimated in its budget submission it would cost to pay the housing allowance and the actual amount DOD obligated for the housing allowance for fiscal years 2006 through 2010. A difference of $0 would signify that DOD estimated the exact amount of funding it needed to pay housing allowances. Positive amounts signify that DOD's estimates were higher than the actual amount needed to pay housing allowances.
Negative amounts signify that DOD’s estimates were lower than the actual amount needed to pay housing allowances. The military services generally use a four-step process to develop housing allowance cost estimates for budgeting purposes. First, using current year data, the services calculate the percentage of service members who received the housing allowance for each pay grade and dependency status, referred to as “participation rates.” Second, the services apply the participation rates to the projected force structure to determine the number of people that will receive the housing allowance at each pay grade for the budgeted year, which is usually 2 years in the future. Third, the Office of the Under Secretary of Defense (Comptroller) provides the military services with an “inflation factor” to determine the housing allowance rates for each pay grade for budget purposes. Fourth, the services multiply the number of service members projected in a pay grade by the projected housing allowance rate to determine the estimated cost of the housing allowance. While the services have processes in place to develop housing allowance cost estimates, budget officials in the Office of the Under Secretary of Defense (Comptroller) and the military services, as well as our analysis, indicated that the services have consistently underestimated the total cost of the housing allowance in part because the services’ processes do not allow them to accurately estimate the number of service members who will receive the housing allowance. A number of factors have affected the services’ ability to accurately estimate the cost of the housing allowance. A key underlying factor is the timing of developing the budget estimates. 
The military services begin their process to develop budget estimates about 18 months before the housing allowance rates for the calendar year take effect, and the President submits the budget request to Congress almost a year before the new housing allowance rates take effect and about 2 months before DOD begins collecting the data for the rates, leading to challenges in accurately estimating the number of service members and the housing allowance rate for each pay grade. Other key factors that have influenced the services' ability to accurately estimate the cost of the housing allowance include:

Changes in planned force structure. In recent years, the military services have made changes in their planned force structure between the time that the service developed the estimate and when the allowances were paid to service members. For example, the Marine Corps reached its end strength goals for Grow the Force 2 years ahead of budget estimates, leading to more Marines than estimated actually receiving the housing allowance.

Increased use of mobilized reserve personnel. Budget officials said that an increase in the number of mobilized reserve personnel has made it difficult to accurately estimate the number of personnel that will receive the housing allowance. The Tenth Quadrennial Review of Military Compensation report also identified this as a challenge to accurately estimating housing allowance costs, noting that the number of reservists serving on active duty since 2001 and the higher proportion of reservists with dependents compared with the active duty force make it difficult to estimate the number of service members who will be eligible to receive a housing allowance. That report recommended that DOD continue to improve its population estimating procedures to ensure that the housing allowance budget is as accurate as possible.

Changes to the housing allowance rates.
DOD does not set its housing allowance rates until December of each year, about 10 months after the President's budget is submitted to Congress and more than 2 months after the new fiscal year begins. DOD budget officials said that the rate estimates have been a factor in underestimating the housing allowance costs to a lesser degree than other factors. Based on our analysis, as well as the Tenth Quadrennial Review of Military Compensation, errors in estimating the numbers of service members that actually received the housing allowance were generally larger than errors in estimating the actual housing allowance rates, although errors in estimating the housing allowance rates did affect the accuracy of the total cost estimates.

Changes in housing policies. Budget officials noted that changes in housing policies that allow service members who previously were not eligible for the allowance to receive it, changes in the number of privatized housing units, or other changes to housing or housing allowance policies affect the accuracy of the services' estimates for the number of personnel and total cost of the housing allowance.

The military services have taken some actions that they said should help improve the accuracy of the housing allowance cost estimates. For example, the Army is developing a methodology to account for rate protection, under which service members retain their housing allowance rate if rates decrease after they are stationed at an installation. Officials expect to start using the methodology with estimates developed later this year. Since rate protection allows service members to retain their higher housing allowance rate in areas where rates decrease, the ability to better account for rate protection could improve the accuracy of housing allowance cost estimates. Additionally, the Marine Corps recently developed tools that allow it to gather dependency rates monthly.
DOD budget officials provided suggestions for further improving estimates, such as coordinating with the service budget office before implementing housing policies that lead to increases in the number of service members who receive the housing allowance. We have previously reported that when full funding information is not included in the President’s annual budget submission or provided during the congressional appropriations process, it understates the true cost of government to policymakers at the time decisions are made and steps can still be taken to control funding, which is even more important in a time of constrained resources. While we recognize the difficulties in accurately estimating the costs of the housing allowance, consistently underestimating the amount needed to pay the housing allowance affects other DOD programs. The housing allowance is an entitlement for service members. As such, DOD must pay the allowance to service members at the specified rates and, therefore, has had to find another source of funding when underestimating the amount needed to pay the allowance. This can include shifting funds that Congress has appropriated for other purposes, including other budget activities within the military personnel appropriation or other defense appropriations, in accordance with applicable laws and policies, or requesting additional funding in a supplemental request. However, shifting funds from another program could disrupt the funding of the other program. Additionally, while an official from the Office of the Under Secretary of Defense (Comptroller) said that DOD’s budget provides the best estimates available, as a result of consistently underestimating the amount needed to pay the housing allowance, DOD’s budget does not provide decision makers in Congress and DOD with the full picture of housing allowance costs, limiting the ability of both Congress and DOD to make more fully informed funding decisions. 
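The services' four-step estimating process described earlier (derive participation rates from current-year data, apply them to the projected force structure, inflate current rates with a Comptroller-provided factor, then multiply counts by rates) can be sketched as follows. All pay grades, counts, rates, and the inflation factor are hypothetical illustrations, not actual service data:

```python
# Minimal sketch of the four-step housing allowance cost-estimating process,
# per the report's description. Inputs are hypothetical illustrations.

def estimate_allowance_cost(current_recipients, current_strength,
                            projected_strength, current_rates, inflation_factor):
    """Estimate annual housing allowance cost across pay grades (12 monthly payments)."""
    total = 0.0
    for grade in projected_strength:
        # Step 1: participation rate observed in the current year
        participation = current_recipients[grade] / current_strength[grade]
        # Step 2: projected number of recipients in the budgeted year
        recipients = participation * projected_strength[grade]
        # Step 3: projected monthly rate, using the Comptroller's inflation factor
        rate = current_rates[grade] * inflation_factor
        # Step 4: projected annual cost for this grade
        total += recipients * rate * 12
    return total

cost = estimate_allowance_cost(
    current_recipients={"E-5": 8000, "O-3": 3000},
    current_strength={"E-5": 10000, "O-3": 3500},
    projected_strength={"E-5": 10500, "O-3": 3600},
    current_rates={"E-5": 1400.0, "O-3": 1900.0},
    inflation_factor=1.03,
)
print(round(cost))
```

The sketch makes the report's point concrete: errors in the projected recipient counts (steps 1 and 2) or in the projected rates (step 3) flow directly into the total, and both inputs are fixed many months before actual rates and populations are known.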
Service Members Have Encountered Housing Challenges at Some Growth Installations and DOD Does Not Have a Formal Information-Sharing Process for Tools to Address Such Challenges

Some service members have encountered challenges in obtaining off-base housing near some installations that are increasing in size due to several major defense initiatives, such as BRAC, Grow the Force, Army Modularity, and Global Defense Posture and Realignment. DOD officials have used a number of tools to address challenges in obtaining off-base housing, but DOD does not have a formal process that allows installation officials to share information on these tools.

Housing Deficits Exist at Most DOD Growth Installations and Are Expected to Continue or Worsen

According to the military services' data, demand exceeds the supply of housing at 19 of the 26 growth installations, resulting in housing deficits. Current housing deficit estimates range from about 1 percent of the total estimated demand at Fort Polk, Louisiana, to more than 20 percent of estimated demand at Cannon Air Force Base, New Mexico, according to service data. Economic conditions in recent years, among other factors, have made it difficult for developers to obtain funding for new construction projects in the communities, particularly for multifamily rental housing projects. This has contributed to the estimated housing deficits, according to installation housing and community officials we interviewed. In addition, these officials said that the high number of deployments in recent years, among other issues, has led to concerns among lenders about anticipated demand for newly constructed units, potentially making lenders more reluctant to provide loans for construction.
Housing and community officials from four of the five installations we reviewed—Fort Riley, Kansas; Cannon Air Force Base, New Mexico; Fort Drum, New York; and Fort Bliss, Texas—noted that service members are currently experiencing challenges in obtaining adequate and affordable housing in the communities surrounding the installation and expected that these challenges will continue or worsen in the future.

Fort Riley has a current estimated deficit of about 700 family housing units (about 4 percent of family housing demand at the installation), based on Army data. Fort Riley officials stated that based on current plans, all but one of Fort Riley's brigades—about 80 percent to 90 percent of the population assigned to the installation—will be at the installation starting in October 2011. The return of most of the brigades, combined with longer periods of time at the duty station, will further increase the demand for housing on and around Fort Riley. Installation housing officials said that due to the limited amount of housing, service members have had to look further away from the installation to find adequate housing. Community officials noted that in recent years families have not relocated immediately with service members due to continuous deployment, which has led to difficulty in estimating the amount of family housing needed in the future.

Cannon Air Force Base has a projected deficit of about 530 family housing units (about 20 percent of projected demand at the installation), based on Air Force data. In addition to the planned population increase at the installation, installation and community officials expect additional demand for housing in the area from the labor force expected to construct projects on the installation in support of the planned growth and a large energy project in the community. Installation officials said that occupancy rates for rental housing in the community have exceeded 99 percent in 2010 and 2011.
Due to the limited availability of housing in the community, installation officials said there is a high demand for even inadequate family housing units on base, which are expected to be privatized in 2012. Additionally, some service members are purchasing homes in the area and others are paying more than the housing allowance for rent or renting in less desirable areas.

Fort Drum has a current estimated deficit of about 1,700 family housing units (nearly 20 percent of family housing demand at the installation), based on Army data. Fort Drum officials stated that the lack of available housing in the community surrounding the installation, among other issues, has led an increasing number of service members to relocate to the installation without their families. By relocating to the installation unaccompanied, these service members can find smaller housing units than they would need for their family or share housing with another service member. Alternatively, depending on the availability of housing, some service members that relocate with their families obtain housing 30 to 40 miles away from the installation. Installation and community officials stated they expect housing availability to be further limited starting in 2012 when all but about 1,000 of Fort Drum's deployed soldiers are expected to be at the installation for the first time since the installation's recent growth occurred. Having most of the units return is expected to exacerbate current housing demand.

Fort Bliss has a current estimated deficit of about 2,900 family housing units (about 15 percent of family housing demand at the installation), based on Army data. Due to the limited amount of housing near the installation that is affordable to junior enlisted personnel, Fort Bliss officials stated that junior enlisted personnel typically obtain housing on the outskirts of El Paso and experience long commutes to the installation.
Officials noted that growth in the civilian population of El Paso due to families relocating there from Mexico has further limited the supply of housing available in the community for service members, and as more soldiers return from deployment over the next year, the community's housing supply will be further strained. Camp Lejeune and Marine Corps Air Station New River, North Carolina, have an estimated deficit of nearly 3,500 family housing units (nearly 20 percent of family housing demand at the installation), according to Marine Corps data. Despite the estimated shortfalls, installation housing officials said that service members have not encountered challenges in obtaining housing in the community, in part due to the number of mobile homes in the area. While DOD considers mobile homes inadequate housing and does not include these units in its housing market analyses, some service members have chosen to live in these homes, which helped mitigate the projected housing deficit.

DOD Uses Several Tools to Address Housing Challenges

Service members are encountering challenges obtaining adequate housing at some installations due to the limited supply of housing in the area, but DOD's policy is to rely on the private sector as the primary source of housing for personnel normally eligible to draw a housing allowance, and DOD is limited in its ability to increase the supply of housing in the community. However, installation housing officials we interviewed use or have plans to use several tools to help service members and their families obtain housing either on base or in the community, many of which could be replicated and used in other areas. Selected tools include: Housing privatization: Since 1996, the military services have been obtaining private sector financing and management to repair, renovate, construct, and operate military family housing on the installations—also known as housing privatization.
In a typical privatized military housing project, a military department leases land to a developer for a term of 50 years. The developer is responsible for constructing new homes or renovating existing homes and leasing them, giving preference to military service members and their families. Service members who choose to live in the privatized housing then use their housing allowance to pay rent. Housing officials at each of the installations we interviewed are developing and implementing plans to negotiate with privatization partners to increase the supply of adequate housing on base. For example, Fort Bliss officials stated that their privatization partner has agreed to build an additional 800 to 1,000 privatized homes on the installation to help address the housing deficit. An installation official expected that the homes would not be completed until 2012, at the earliest. Additionally, the Army and Navy have privatized housing for unaccompanied senior enlisted personnel and officers at five installations: Fort Irwin, California; Naval Station San Diego, California; Fort Stewart, Georgia; Fort Drum, New York; and Fort Bragg, North Carolina. The Navy also privatized unaccompanied housing for junior enlisted personnel at Naval Station San Diego, California, and Naval Station Norfolk, Virginia. The Army and Navy selected these sites due to projected deficits in housing for unaccompanied personnel. Domestic leasing program: The domestic leasing program provides temporary housing for military families pending availability of permanent housing through DOD payment of rent and other housing costs of privately owned housing units that are assigned to military families as government quarters. For example, Army officials stated they are using the program as a short-term bridging strategy for housing service members and their families until local communities respond to the increasing housing demand near installations.
The program is currently in use at two growth installations—Fort Drum and Fort Bliss. Military Family Housing Leasing Program (commonly referred to as the Section 801 housing program): Starting in 1984, a number of DOD installations contracted with developers to build new rental housing on or near military installations through the Section 801 housing program—a forerunner to the current Military Housing Privatization Initiative. DOD used the Section 801 housing program as a means for improving and expanding military family housing through private developers' investment. The leases at four of the installations within our scope have expired or will expire within the next 2 years and will not be renewed, according to housing officials at these installations. While the existing contracts at Cannon Air Force Base will expire in 2012 and 2013, installation housing officials stated that the installation is attempting to develop a "bridge lease" that will allow service members to continue renting the units with some revisions to the current lease agreement to help meet the increased housing demand. In addition, as we previously reported, Fort Hood, Texas, extended its Section 801 housing lease to 2029 and renegotiated the lease terms to retain priority use of the units for military personnel and DOD civilians. Low-Income Housing Tax Credit: The Housing and Economic Recovery Act, which Congress enacted in 2008, contained a provision that altered the way the Basic Allowance for Housing was treated for the purposes of determining eligibility under the Low-Income Housing Tax Credit Program. The provision, which is effective through January 2012, applies only to certain military installations, but according to installation officials it can, in some cases, effectively expand the supply of available housing. Nine military installations qualified for the program, including three installations expecting significant growth—Fort Riley, Fort Bliss, and Fort Hood.
Fort Riley and Fort Bliss officials said that the provision can allow more service members to qualify for low-income housing. One growth installation—Fort Drum—did not qualify for the program, but Fort Drum officials estimated that if the installation had qualified, an additional 200 tax credit housing units would likely have been constructed near the installation. Housing requirements and market analyses: The military services routinely conduct housing requirements and market analyses to determine projected housing surpluses or deficits based on the number of personnel expected to be stationed at the installation in a given year and to determine housing requirements and the community's ability to meet those requirements. Based on the results of these analyses, the services can determine whether to use housing tools such as housing privatization, government-owned housing, or leasing at an installation. Officials at all five of the installations we interviewed indicated they use the housing analyses as a tool to determine current and projected housing deficits and how to address the deficits. However, officials we interviewed at a few installations raised concerns about the process to develop the analyses and the accuracy of the results, noting issues with the data used to establish the estimates and the lack of input from housing officials at the installation. Extension of lodging allowance: The Temporary Lodging Expense Allowance is designed to partially offset expenses when a service member occupies temporary quarters in the continental United States while relocating from one installation to another. The Army has extended the use of this allowance at two growth installations—Fort Drum and Fort Bliss—from 10 days to up to 60 days.
While Fort Bliss officials stated that service members have generally been able to find housing within 10 days, the installation requested the extension in anticipation of future growth at the installation when officials expect that it will take longer for service members to find housing. Installation-community collaboration: Among other responsibilities, DOD's Office of Economic Adjustment assists growth communities affected by DOD actions, such as BRAC, that have expressed a need for planning assistance. The Office of Economic Adjustment has encouraged the communities near growth installations to establish "growth management organizations" that are designed to work on issues associated with community growth and typically include high-level installation officials. The Office of Economic Adjustment has provided grants to assist some of the organizations to plan to accommodate the expected population increases and undertake studies to identify gaps in local infrastructure, such as housing. In addition, the growth management organizations provide a forum for community and installation officials to communicate about challenges, including housing, and develop plans to mitigate the challenges. For example, community officials from the Fort Drum Regional Liaison Organization said the organization has plans to host an event this year to bring together installation officials, developers, financiers, and state and local officials to encourage new housing development around the installation. Housing allowance waiver: The Navy and Coast Guard have identified "critical housing areas" where there is a short supply of housing on base and in the community. In such areas, a service member may choose to leave his or her dependents at the previous duty location and relocate to the new duty location unaccompanied while continuing to receive the housing allowance at the rate for the prior location.
By relocating to an area unaccompanied, the service member may have more housing choices, such as living in a smaller unit than the family needs or sharing housing with another member. However, the service member has to pay for housing for himself or herself in one location and his or her family in another location, which could be costly. The Navy designated six critical housing areas in 2009, but did not designate any critical housing areas in 2010. The Coast Guard designated 23 critical housing areas in 2010. While the Army and Air Force have not identified critical housing areas, officials told us that service secretaries can authorize the housing allowance to be paid based on a dependent's location or previous duty station on an exception basis if circumstances require dependents to reside separately from the service member or in other circumstances deemed acceptable by the secretary. Rental Partnership Program: The Rental Partnership Program helps service members obtain housing at a reduced cost. Installations enter into written agreements with local housing management companies to make adequate housing available to service members. Installations develop their own unique aspects of the program. For example, Camp Lejeune uses the program to reduce move-in costs for junior enlisted service members trying to obtain housing in the community. Automated Housing Referral Network: The Automated Housing Referral Network is an Internet-based rental database used by service members to find housing. The database contains information on housing on base and in the community, as well as temporary lodging, shared rentals, and housing units for sale by owner. The network is widely used across the services, including the Coast Guard, according to officials in DOD's Directorate of Housing and Competitive Sourcing.
DOD Does Not Have a Formal Process for Sharing Information on Tools to Address Housing Challenges

Installation housing officials we interviewed generally share information on tools they use or plan to use to address housing-related challenges on a regular but ad hoc basis. For example, Fort Drum, Fort Riley, and Fort Bliss officials stated that most of their information sharing is done through informal email communication with other Army housing officials. In addition, housing officials we interviewed at each of the five installations said that they communicate informally with installations from other services at the Professional Housing Management Association's annual conference, where officials from all of the military services discuss, among other topics, housing tools and challenges at their installations. Installation housing officials we interviewed generally stated that having a repository with information about tools, their use, and their impact in addressing housing challenges would be beneficial as the installations continue to plan for current and future growth. According to the Standards for Internal Control in the Federal Government, information should be communicated to the individuals within an organization who need it to carry out their responsibilities. Among other responsibilities, the Deputy Under Secretary of Defense (Installations and Environment), which is part of the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics), is responsible for providing guidance and general procedures about housing, including community housing and DOD housing, and communicating and coordinating with the military departments, including through regular meetings about housing policy and other housing issues. DOD's Housing Management Manual states that, subject to the authority and direction of their respective DOD components, installation commanders are responsible for ensuring that service members have access to suitable housing.
However, installation housing officials do not readily have access to information about certain tools and their use by other installations and services that could help service members obtain suitable housing because DOD does not have a formalized information-sharing process to store and share this information. Without such a process, DOD cannot ensure that installations that are currently facing housing challenges or may encounter such challenges in the future have access to the necessary information on what tools have worked elsewhere to best position installations to mitigate or solve the challenges. While information shared through informal networks is useful to those who receive the information, there is no assurance that information shared and learned through these communications can be of use to others if the information is not stored and available for others to readily access. We identified instances where installation housing officials were generally unaware of some tools available to address housing challenges. For example, of the five growth installations we spoke to, three installations were unaware of the authority DOD previously had to prescribe temporary increases in housing allowance rates in areas that are experiencing a sudden increase in the number of service members assigned to the installation. Officials at one installation in an area with low vacancy rates noted that the installation did not become aware of the authority until after it expired and noted that an increase in the housing allowance rates would have increased service members' ability to obtain housing. In addition, officials we interviewed at another installation were not aware of the Rental Partnership Program. We also found an instance where officials at one installation said it would be helpful to have information from other installations implementing the domestic leasing program to get the program started at their installation.
Conclusions

DOD spends billions of dollars each year to pay the housing allowance to over a million service members so that they can obtain housing for themselves and their families. DOD's housing allowance rate-setting process is generally viewed as effective, and DOD has made improvements to the process over the past decade. Nevertheless, there are opportunities for DOD to further enhance its rate-setting process and improve the accuracy of the housing allowance rates. Accurate housing allowance rates are critical to meeting DOD's goals for the housing allowance program. Rates that are lower than the average housing costs in the community limit service members' ability to obtain adequate housing in the community, while rates that are higher than the average housing costs risk DOD spending more money than needed for the allowance. Providing additional information to installation officials about the costs that comprise the housing allowance rate—rent, utilities, and renter's insurance—would enable those officials to help review the accuracy of the local market-based rates, given their expertise in the local housing area. Similarly, if DOD provided such information to service members, it could help them to make more informed decisions about their housing choices. Additionally, analyzing the benefits and drawbacks of revising the definition of "available" rental properties for data collection—and revising the definition, as needed—could enable DOD to increase the sample of adequate and appropriate properties used to determine the median rental cost in an area, potentially improving the accuracy of the housing allowance rates.
Furthermore, until DOD develops a process that results in more accurate estimates of the total costs of the housing allowance, DOD may continue to shift funds from other programs, potentially affecting the success of the other programs and limiting the ability of key decision makers in Congress and DOD to make more informed funding decisions, which is particularly critical in the current fiscal environment. Population increases and other factors have increased the demand for housing on and near installations, leading some service members to encounter challenges obtaining off-base housing near some installations. DOD officials expect the problem to worsen in the near future as some initiatives, such as BRAC, are completed and as service members return home from overseas deployments. Installations have used a number of tools to help service members find housing, either in the community or on the installation. However, until DOD institutes a more widespread communications process that allows sharing of these tools across military installations and services, DOD cannot ensure that all installation officials will have access to valuable information on addressing housing challenges due to growth or other causes—both now and in the future—that could help improve the quality of life for service members and their families.

Recommendations for Executive Action

We recommend that the Secretary of Defense take the following four actions: To enhance the transparency of the housing allowance rates, direct the Director of the Defense Travel Management Office to revise policies to provide information on the three costs that comprise the housing allowance rate (rent, utilities, and renter's insurance) by geographic area and housing profile to installation housing officials to better ensure local market-based accuracy and to service members to increase understanding of the rate when selecting housing.
To enhance the accuracy of the housing allowance rates, direct the Director of the Defense Travel Management Office to more fully assess the benefits and drawbacks of revising the definition of "available" rental properties used for data collection purposes, either for all military housing areas or only those military housing areas that meet a certain low vacancy threshold. To promote more accurate budgeting by DOD, direct the Under Secretary of Defense (Comptroller) and the military services to more fully identify the causes of inaccurate cost estimates for the Basic Allowance for Housing program and develop and implement procedures to improve these estimates. At a minimum, these procedures should include processes to more accurately estimate the number of service members who will receive the allowance. To ensure that current or future growth installations that experience housing challenges have access to information on tools to address these challenges, direct the Under Secretary of Defense (Acquisition, Technology and Logistics) and the Office of the Deputy Assistant Secretary of Defense (Installations and Environment) to develop a communications process so that installations can more routinely share best practices and their use of tools and mechanisms to address housing challenges.

Agency Comments and Our Evaluation

In written comments on a draft of this report, DOD generally concurred with all four of our recommendations. DOD's response to our recommendations is printed in its entirety in appendix III. DOD also provided technical comments, which we incorporated, as appropriate. The Department of Homeland Security reviewed a draft of this report and did not have comments. DOD partially concurred with our recommendation to provide service members with information on the three elements that comprise the allowance (rent, utilities, and renter's insurance).
In its response, DOD said that it will provide the cost elements as a percentage range of total costs across all profiles by 2012. We believe that this meets the intent of our recommendation. DOD concurred with our second recommendation to assess the benefits and drawbacks of revising the definition of “available” rental properties used for data collection purposes. DOD said that it has already done so and plans to expand the definition of available properties to include those properties that will be available at a future date. DOD concurred with our third recommendation to identify the cause of inaccurate cost estimates of the allowance program and improve procedures to address this problem. DOD plans to establish a working group, led by the Office of the Under Secretary of Defense (Comptroller), to better understand how the services budget for the housing allowance and document and share best practices for estimating the amount needed to pay the allowance. DOD concurred with our fourth recommendation to develop a communications process to share best practices among the installations and plans to use the Office of the Secretary of Defense’s Housing Policy Panel and other resources to share information on tools to address housing challenges. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Under Secretary of Defense (Acquisition, Technology and Logistics); the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Secretary of Homeland Security; and the Commandant of the Coast Guard. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-4523 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV.

Appendix I: Scope and Methodology

To determine whether the Department of Defense (DOD) could enhance its housing allowance rate-setting process, we reviewed reports about DOD's process, including DOD's 2010 report to Congress, and laws and guidance governing the program. Additionally, we spoke with officials responsible for overseeing the rate-setting process in the Defense Travel Management Office; housing officials in the Directorate of Housing and Competitive Sourcing within the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics); a budget official in the Office of the Under Secretary of Defense (Comptroller); and military compensation and budget officials responsible for housing allowances from the Army, Navy, Marine Corps, and Air Force. We also met with representatives from the Coast Guard to discuss their role in the process for setting basic housing allowances. Furthermore, we contacted installation housing officials from Fort Riley, Kansas; Cannon Air Force Base, New Mexico; Fort Drum, New York; Camp Lejeune, North Carolina; and Fort Bliss, Texas, to discuss how they collect and submit housing cost data that feed into DOD's rate-setting process. Our rationale for selecting these installations is discussed below. Additionally, we spoke with a representative from DOD's contractor for the data collection efforts. We discussed the technical aspects of a draft of this report with a representative of the contractor.
To help ensure that we identified a wide range of potential enhancements to DOD's current rate-setting process, we also spoke with representatives from six other organizations: the Center for Naval Analyses, the Lewin Group, the RAND Corporation, the Fleet Reserve Association, the Military Officers Association of America, and the National Military Family Association. We selected these organizations because their representatives have knowledge about military compensation generally or the Basic Allowance for Housing program specifically, as shown in published reports or through testifying before Congress, and because the three associations represent the interests of military service members and their families. In addition to publishing other work on military compensation, the Lewin Group performed the research on which DOD based its 2010 report to Congress on housing standards and the allowance rate-setting process. We considered a number of potential enhancements to DOD's current rate-setting process and performed further analyses to determine the benefits and drawbacks of each, including potential financial savings or costs. Through the additional analyses of each of the enhancements, we determined whether the alternative was viable and could enhance DOD's rate-setting process without significantly increasing program costs. For example, with regard to the enhancement of providing more information to service members about the three costs that comprise the housing allowance rate, we obtained and analyzed data from the Defense Travel Management Office on the three costs that comprise the rate for the six housing profiles in each of the 364 military housing areas. We assessed the reliability of the data by performing electronic testing for obvious errors in the accuracy and completeness of the data and reviewing documentation on how the data are collected and determined that the data were sufficiently reliable for our purposes.
Additionally, we discussed the enhancement with officials from the Defense Travel Management Office, the service Basic Allowance for Housing representatives, and five selected installations. Similarly, with regard to DOD's process to budget for the housing allowance, we analyzed budget justification data for the Army, Navy, Marine Corps, and Air Force for personnel receiving the housing allowance at the "with" and "without" dependents rates by comparing the amount DOD estimated to the amount obligated. We reviewed the annual budget, supplemental requests, and funding for housing allowances requested in support of Overseas Contingency Operations. We could not compare DOD's estimates to its actual obligations for the housing allowance prior to 2006, as the supplemental budget requests prior to 2006 did not provide sufficient detail for us to determine the amount estimated for the housing allowance and neither the Office of the Under Secretary of Defense (Comptroller) nor the military services could provide this information. The Coast Guard's budget justification documents to Congress do not provide the same level of detail as those of the other military services, so we could not perform a thorough analysis of the Coast Guard's cost estimates; however, Coast Guard officials provided similar information that allowed us to compare the Coast Guard's overall estimates to obligations. To determine whether service members relocating to installations that DOD projects to experience significant growth have encountered challenges in obtaining off-base housing and the extent to which DOD is using and sharing information on tools to address these challenges, we reviewed and analyzed applicable documentation and interviewed knowledgeable officials. Specifically, we analyzed data from the Housing Requirement and Market Analyses for the 26 growth installations to determine DOD's housing deficit projections at these installations.
While we recognize that there are some shortcomings of the data, including concerns raised by installation housing officials about the process to develop the analyses and the accuracy of the results, we used the data to provide context on projected housing deficits and determined that the data were sufficiently reliable for this purpose. To better understand the tools available to address housing challenges, we reviewed the relevant legislation; DOD’s Joint Federal Travel Regulations; service-level policies and other documentation on tools; and past GAO reports that discuss military housing privatization, the Domestic Leasing Program, and Section 801 housing. Additionally, we interviewed housing officials in the Directorate of Housing and Competitive Sourcing within the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics); the Army, Navy, Marine Corps, Air Force, and Coast Guard headquarters; and five domestic military installations—Fort Riley, Kansas; Cannon Air Force Base, New Mexico; Fort Drum, New York; Camp Lejeune, North Carolina; and Fort Bliss, Texas—to obtain information on whether service members have encountered challenges in obtaining housing, tools the installations are using to address these challenges, and processes for sharing information on the tools. Our rationale for selecting these installations is discussed below. To better understand the housing issues in the communities, we interviewed officials who work with growth communities in DOD’s Office of Economic Adjustment and contacted community organization representatives in each of the communities near the 26 growth installations identified by the Office of Economic Adjustment. To obtain installation officials’ perspectives on both DOD’s rate-setting process and housing challenges and tools, we interviewed housing officials from a nonprobability sample of five domestic military installations: Fort Riley, Cannon Air Force Base, Fort Drum, Camp Lejeune, and Fort Bliss. 
We selected installations that met criteria that address both of these issues. Specifically, we began our selection with DOD's list of 26 significantly impacted growth installations. We narrowed the list to the five we selected to obtain a sample of installations with a range of the following characteristics: communities that had identified housing as a challenge in the growth profiles published by the Office of Economic Adjustment, installations with different geographic and population concentrations, installations from different military services, and installations with officials whom the Defense Travel Management Office identified as particularly knowledgeable about the housing allowance rate-setting process. Not all installations met all of the criteria. Our selection of three Army installations reflects that the majority of significantly impacted growth installations are Army installations. Because we selected a nonprobability sample of installations, the information obtained from interviews with officials from these five installations cannot be generalized to other installations. We conducted this performance audit from August 2010 through May 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Summary of DOD's Process to Set Housing Allowance Rates

By law, housing allowance rates are to be based on the costs of adequate housing for civilians with comparable income levels in the same areas.
To do this, the Department of Defense (DOD) identified six housing profiles, ranging from a one-bedroom apartment to a four-bedroom single-family detached house, and, for each profile, identified a pay grade—also referred to as an anchor—that matches the type of housing normally occupied by civilians with comparable incomes. Using the housing profiles and the local costs of each profile in the geographic areas of the country, DOD establishes the allowance rates each year. DOD established separate housing profiles for members with and without dependents and established a method to ensure that allowance rates would increase with each pay grade. For example, the one-bedroom apartment profile corresponds to an E-4 without dependents and the four-bedroom single-family home profile corresponds to an O-5 with dependents. DOD sets the housing allowance rates for pay grades that are not anchors based on the last anchor plus a percentage of the difference between the last anchor and the next anchor. For example, an E-7 with dependents receives the same rate as an E-6—the anchor for a three-bedroom townhouse—plus 36 percent of the difference between the anchors for a three-bedroom townhouse and a three-bedroom single-family detached house. Tables 1 and 2 show the housing profiles for each pay grade and the method DOD uses to calculate the allowance rates for service members with dependents and without dependents, respectively. DOD calculates nearly 20,000 separate allowance rates each year: for each of the 27 military pay grades, ranging from E-1 (junior enlisted) to O-10 (general or flag officer); for personnel with and without dependents (spouse, children, or other dependents); and in each of the 364 DOD-established military housing areas. The housing allowance is intended to cover the average costs of rent, utilities, and renter’s insurance in private sector housing.
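The anchor-based interpolation described above can be sketched in a few lines. The 36 percent factor for an E-7 with dependents comes from the report's example; the dollar amounts below are hypothetical placeholders, not actual median housing costs.

```python
# Sketch of DOD's anchor interpolation for non-anchor pay grades.
# The 0.36 factor for E-7 is from the report; the anchor rates below
# are hypothetical illustrations only.

def interpolated_rate(lower_anchor, upper_anchor, factor):
    """Rate for a non-anchor grade: the last anchor's rate plus a fixed
    percentage of the gap between it and the next anchor."""
    return lower_anchor + factor * (upper_anchor - lower_anchor)

townhouse_rate = 1500.0  # hypothetical E-6 anchor (three-bedroom townhouse)
detached_rate = 1800.0   # hypothetical next anchor (three-bedroom detached house)

e7_rate = interpolated_rate(townhouse_rate, detached_rate, 0.36)
print(e7_rate)  # prints 1608.0
```

The same function applies to any pay grade between two anchors; only the percentage factor changes.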
Rent, as the largest of these three expenses, is the focus of most data collection efforts during DOD’s annual rate-setting process. DOD personnel in installation housing offices and DOD’s housing allowance contractor collect rental information for the six housing profiles in spring and summer, when the rental market is most active and when most service members traditionally relocate to new installations. Starting in January of each year, the contractor provides training and guidance to installation officials who will be collecting rental data. Installation officials, who are generally familiar with local housing markets, then begin collecting rental information on the six housing profiles and send this information to the contractor in three submissions in May, June, and July. The units selected for inclusion in the sample must be both available, meaning they must be currently available to rent or were available on the market within 4 to 6 weeks prior to the data submission, and adequate. While the data collection guidance does not define adequate, it does provide a list of examples of inadequate types of housing, including mobile homes, efficiency apartments, weekly or seasonal rentals, and housing that is in poor physical condition, extremely expensive, or located in high-crime areas. Although there are guidelines for housing that is inadequate, service members ultimately choose what type of housing and where they want to live. While a degree of subjectivity is involved in determining whether a property is adequate since the quality of housing varies across areas, the installation officials with whom we spoke said that they inspect housing units before they are included in the rental data submission to help ensure that they are suitable for military personnel. Installation officials also submit census tracts that are in high-crime or otherwise unsuitable areas so that units in these areas are not included in the rental data sample.
All units included in the sample must also fall within the established military housing area. Simultaneously, the contractor also collects rental data for the six housing profiles within the housing areas by using local newspaper classified advertisements, rental listings, and consultations with real estate professionals. DOD’s goal is for installations to collect about 60 percent of rental data while the contractor collects about 40 percent, but this varies between installations and housing areas. The contractor establishes target sample sizes for each housing profile in each housing area. Sample sizes can range from several hundred units per housing profile where there is a large inventory of available housing to as few as five where certain types of housing are not as readily available. Each of the services can request site visits each year, during which officials from the Defense Travel Management Office and the military service and a representative from the contractor discuss the rate-setting process with installation officials. Also, they view a sample of the available housing stock to better ensure that housing allowance rates are accurate for the area, educate installation officials on the process, and answer questions from installation officials. Data collection on rental units stops in August to give the contractor adequate time to analyze the data and finalize calculations of the median monthly rent for each housing profile in each housing area. For quality assurance purposes, the contractor reviews submitted data and eliminates data errors, any duplicate units submitted, and extreme rent outliers. For example, installation officials and the contractor collected a total of nearly 60,000 data points in 2010, but about 12,800 were excluded as part of the quality control process. The contractor also calculates the average utility costs for each housing profile by analyzing data collected annually by the U.S. Census Bureau’s American Community Survey.
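The contractor's quality-control pass described above (removing duplicate units and extreme rent outliers before computing the median rent) might look roughly like the sketch below. The 1.5-times-interquartile-range outlier rule is an illustrative assumption; the report does not specify the contractor's actual screening criteria.

```python
import statistics

def screen_and_median(rents):
    """Illustrative quality-control pass over submitted rents: remove
    duplicate units and extreme outliers, then return the median rent
    and the number of outliers dropped. The 1.5*IQR rule is an
    assumption for illustration only."""
    unique = sorted(set(rents))                    # drop duplicate submissions
    q1, _, q3 = statistics.quantiles(unique, n=4)  # quartiles of remaining rents
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    kept = [r for r in unique if lo <= r <= hi]    # drop extreme rent outliers
    return statistics.median(kept), len(unique) - len(kept)

# Hypothetical submissions: one duplicated unit and one extreme outlier.
rents = [950, 975, 1000, 1000, 1025, 1050, 1100, 5000]
median_rent, dropped = screen_and_median(rents)
print(median_rent, dropped)  # prints 1012.5 1
```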
Utilities factored in the calculation include electricity, gas, oil, water, and sewage. The third cost element of the housing allowance is renter’s insurance; the contractor calculates average renter’s insurance premiums based on rates from leading insurance carriers. In early September, the contractor sends the median rental costs and the average costs for utilities and renter’s insurance to the Defense Travel Management Office, which calculates housing allowance rates for each pay grade in each of the housing areas, and for personnel with and without dependents. The Defense Travel Management Office also reviews the information and rate calculations and makes adjustments, as appropriate. For example, if rates are 10 percent above or below the previous year’s rates, the data sample is reviewed with more scrutiny to determine if it is representative of the rental market. Housing allowance representatives in the compensation offices of each of the services then review the rate calculations through October and November. These representatives have an opportunity to discuss any concerns with the Defense Travel Management Office. Following service review, the rates are reviewed and approved by the Office of the Deputy Under Secretary of Defense (Military Personnel Policy). The approved rates are provided to the Defense Finance and Accounting Service and DOD begins paying the new housing allowance rates on January 1. Once approved, the Defense Travel Management Office posts the housing allowance rates on its Web site and the rates are available for service members, as well as the public, to view. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact named above, Laura Talbott, Assistant Director; Steven Banovac; Hilary Benedict; Joel Grossman; Brandon Jones; Ron La Due Lake; Charles Perdue; Richard Powelson; and Michael Willems made key contributions to this report.
The Department of Defense (DOD) paid active duty military personnel over $18 billion in housing allowances in fiscal year 2010. DOD sets housing allowance rates annually based on market costs of rent, utilities, and renter's insurance. Also, DOD has identified 26 installations significantly impacted by expected growth in personnel due to various rebasing actions. The Senate report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2011 (S. 3454) directed GAO to review DOD's rate-setting process, among other issues. GAO determined (1) whether there are enhancements to strengthen DOD's rate-setting process and (2) whether service members have encountered challenges in obtaining off-base housing. GAO reviewed program documents, including a 2010 DOD report to Congress, analyzed data, and interviewed DOD officials and subject matter experts. DOD uses a data-intensive process to set housing allowance rates that officials said generally meets program goals. Key quality assurance steps in DOD's process include involving installations in the rental data collection process and verifying data prior to calculating allowance rates. However, some enhancements related to (1) providing additional information to installation officials and service members, (2) defining a key term for data collection, and (3) developing more accurate cost estimates for budget requests could further strengthen the process. First, installation officials and service members do not have access to information on the three costs that comprise the allowance--rent, utilities, and renter's insurance--because DOD issues a single rate for each pay grade. As a result, installation officials cannot help ensure the accuracy of the rates and service members are not fully informed of potential housing costs. 
Second, in areas with low vacancy rates, officials said it can be difficult to find enough rental properties that meet the definition of available because the definition is limited to rentals on the market within 4 to 6 weeks prior to data collection. As a result, properties that some installations submit may not be fully representative of rental costs in the area or representative properties may be excluded, increasing the possibility of inaccurate rates in an area. Third, the military services have consistently underestimated the amount needed to pay the allowance by $820 million to $1.3 billion each year since 2006 when preparing budget requests, in part because the services' processes do not allow them to accurately estimate the number of service members who will receive the housing allowance. GAO recognizes the difficulties in developing accurate housing allowance cost estimates. However, as a result of consistently underestimating the amount needed to pay the allowance--which is an entitlement for service members and must be paid--DOD has had to shift funds that were budgeted for other programs, which could disrupt the funding of the other programs. Also, DOD's budget does not provide the full picture of housing allowance costs, limiting the ability of Congress and DOD to make fully informed funding decisions. Some service members have encountered challenges in obtaining off-base housing at some growth installations. Military service data show current housing deficits, ranging from about 1 percent of total demand to more than 20 percent, at 19 of 26 installations DOD identified as significantly impacted by growth. Installation officials GAO interviewed expect such housing challenges to continue or worsen. DOD uses a number of tools to address these housing challenges that could be used at other installations, such as expanding housing privatization projects and encouraging collaboration between installations and communities. 
GAO found that installations share information on these tools on an ad hoc basis, such as through e-mail messages or at conferences, because DOD does not have a formal communications process that would allow them to store and share such information. As a result, DOD cannot ensure that installations that are currently experiencing housing challenges or may experience such challenges in the future will have the needed information on various tools that can be used to address these challenges.
Background The role of for-profit private companies in managing public schools is a fairly recent phenomenon. Until the early 1990’s, school districts contracted with private companies largely to provide noninstructional services, such as transportation, building maintenance, or school lunches. By the 1994-95 school year, however, the role of private companies had expanded to include instructional services in four school districts, as we reported in a 1996 GAO study. These early decisions by school districts to contract with private companies often followed years of frustration with low student achievement in these schools. Since that time, the growth of private for-profit educational management companies has been aided by financial support from the business community and by the opportunities states have offered for greater flexibility in the provision of education services. Private for-profit management companies supply a wide array of educational and management services that may include providing the curriculum, educational materials, and key staff as well as payroll processing, busing, and building maintenance. The range and type of services vary by company, and to some extent by school within the company, as some companies have adapted their educational programs to the needs and interests of local areas. According to a study of for-profit educational management companies by Arizona State University, three-quarters of schools operated by private for-profit management companies in school year 2002-03 served elementary grade students in kindergarten through fifth grade and in some cases continued to serve students in higher grades. The size of schools operated by private management companies varied from an enrollment of fewer than 100 students to more than 1,000 students, but averaged about 450. Several of the major companies reportedly served a predominantly low-income, urban, and minority student population.
Private companies operate both traditional public schools and public charter schools. Some states or districts contract with companies to manage traditional public schools—often poorly performing public schools. These schools are generally subject to the same set of requirements that govern traditional schools within the district. More commonly, companies manage charter schools—public schools that operate under agreements that exempt them from some state and district regulations but hold them accountable for improving pupil outcomes. Enrollment in charter schools generally is not limited to defined neighborhoods, but may draw from larger geographic areas than is the case for most traditional schools and must be open to all, without discrimination, up to enrollment limits. Like traditional public schools, charter schools receive public funds and may not charge tuition for regular school programs and services, but may charge for before- and after-school services, extended day kindergarten, or pre-kindergarten classes. Public schools operated by private management companies, both traditional and charter, are subject to requirements of the NCLBA, including expanded testing requirements. Under this law, states must establish standards for student achievement and goals for schools’ performance. Results must be measured every year by testing all students in each of elementary grades three through five and middle school grades six through eight, starting in school year 2005-06, and by assessing how schools have progressed in terms of improving the performance of their students. Information from these tests must be made available in annual reports that include the performance of specific student subgroups, as defined by certain demographic and other characteristics. During the school years covered in our study, states were only required to test students in one elementary, one middle school, and one high school grade.
Table 1 identifies the different state testing schedules and instruments for the elementary grades in school year 2001-2002 in the cities where we made test score comparisons. Infrequent state testing is one of several factors that have hampered efforts to evaluate the impact of privately managed public schools on student achievement. To assess the impact of school management, researchers must isolate the effects of private management from the effects of other factors that could influence students’ test scores, such as school resources or student ability. Ideally, this would be accomplished by randomly assigning students to either a privately managed school or a traditionally managed school, resulting in two groups of students generally equivalent except for the type of school assigned. However, random assignment is rarely practical, and researchers usually employ less scientifically rigorous methods to find a generally equivalent comparison group. For instance, in some cases, schools may be matched on schoolwide student demographic characteristics such as race or socioeconomic status. When such characteristics can be obtained for individual students in the study, validity is improved. In addition, validity is further improved when the progress of students can be followed over several years. However, if the data on individual student characteristics are unreliable or unavailable, as has often been the case, researchers experience difficulties developing valid comparison groups. Similarly, if individual test scores are available only for one grade rather than successive grades, researchers cannot reliably track the progress of student groups over time and compare the gains made by the two groups. In our 2002 report that examined research on schools managed by some of the largest education management companies, we found that insufficient rigorous research existed to clearly address the question of their impact on student achievement. 
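The school-level matching approach described above (pairing schools on aggregate demographics when individual student data are unavailable) can be sketched as follows. The school names, percentages, and 5-point tolerance are all hypothetical illustrations, not values from any study cited here.

```python
# Sketch of school-level demographic matching, the less rigorous
# comparison-group approach described above. All names, percentages,
# and the 5-point tolerance are hypothetical.

def match_schools(target, candidates, tolerance=5.0):
    """Return candidate schools whose percent-minority and
    percent-low-income figures both fall within `tolerance`
    percentage points of the target school's figures."""
    return [
        name
        for name, (minority, low_income) in candidates.items()
        if abs(minority - target[0]) <= tolerance
        and abs(low_income - target[1]) <= tolerance
    ]

privately_managed = (85.0, 70.0)  # % minority, % low income (hypothetical)
traditional = {
    "School A": (83.0, 72.0),
    "School B": (60.0, 55.0),  # too dissimilar to serve as a comparison
    "School C": (88.0, 67.0),
}
print(match_schools(privately_managed, traditional))  # prints ['School A', 'School C']
```

As the text notes, matching at the school level cannot rule out differences among the individual students being compared, which is why student-level data improve validity.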
Part of the reason that so few rigorous studies are available may stem from the difficulties inherent in this research. Number of Schools Managed by Education Management Companies Is Increasing; Descriptive Information on Achievement Widely Available Although the number of public schools operated by private, for-profit management companies has risen rapidly in recent years, these schools still comprise a very small proportion of all public schools nationwide. Largely charter schools, the 417 privately managed schools were located in 25 states and the District of Columbia in school year 2002-03, with about one-half in Arizona and Michigan. These schools were operated by 47 private management companies. Descriptive information about achievement in these schools was widely available in the form of individual school report cards that often provided comparisons with state or district averages, but often not with similar traditional schools. Three management company reports summarized achievement gains over time for all their schools in one or more states, using various methodologies to illustrate student performance. School and company reports provided useful information on student achievement, but generally were not designed to answer research questions about the effectiveness of privately managed schools compared with traditional schools. While Numbers Are Increasing, the Percentage of Public Schools Managed by Private Companies Remains Small In school year 2002-03, at least 417 public schools were operated by private for-profit management companies, according to Arizona State University researchers. This figure was three times greater than the number of schools operated by private management companies just 4 years earlier, when there were only 135 schools, as shown in figure 1. Over three-quarters of the 417 schools were charter schools, and they comprised about 12 percent of charter schools nationwide. 
Despite the sharp rise in the number of public schools operated by management companies, they represented a small proportion of all charter and traditional schools in 2002-03. About one-half of 1 percent of all schools nationwide were privately managed schools. Over the same 5 years, public schools operated by private management companies have also become more geographically widespread, according to data from the Arizona State University study. Figure 2 shows that in school year 1998-99, private management companies operated public schools in 15 states. By school year 2002-03, the companies had schools in 25 states and the District of Columbia, with about 48 percent of the privately managed schools in Arizona and Michigan. Florida, Ohio, and Pennsylvania also had large numbers of schools as indicated by the map in figure 2, which shows the location of public schools operated by private management companies in school year 2002-03. The number of private management companies identified by the Arizona State University researchers also increased over the same period, but the companies varied greatly in terms of the number of schools they operated. As shown in figure 3, the number of companies increased from 13 in school year 1998-99 to 47 in school year 2002-03. Most of these companies were founded in the decade of the 1990’s, but since their founding, some companies have been consolidated or have gone out of business and have been succeeded by newly formed companies. In school year 2002-03, most of the companies were small, operating 15 or fewer schools each. Five medium-sized companies—Chancellor Beacon Academies; The Leona Group; Mosaica Education, Inc.; National Heritage Academies; and White Hat Management—operated from 21 to 44 schools each. The single largest company, Edison Schools, operated 116 schools. According to the Arizona State University report, 43 of the 47 companies operating in school year 2002-03 managed only charter schools.
Charter schools have greater autonomy and decision-making ability in such areas as purchasing and hiring compared with traditional schools that are generally subject to district requirements, including labor agreements. Arizona researchers noted that state charter school laws have provided opportunities for private management that were not present earlier, and Western Michigan University researchers indicated that the growth of private educational management companies occurred soon after charter school reforms were enacted in that state. They explained that some charter holders started their own private management companies and other charter holders sought the acumen and financial resources of management companies already established in the business. Individual School Reports Describe Achievement Levels, and Some Company Reports Describe Gains Compared to State or District Averages Two kinds of reports available to the public—school reports and company reports—described student achievement at privately managed schools relative to national, state, or district averages in school year 2002-03. Referred to as school report cards, the detailed individual school reports generally provided a snapshot of how well students attending the school did in meeting state achievement standards for the year. These report cards were issued by states, school districts, and by some of the larger companies, like the Leona Group for its schools in Michigan. Often available through the Internet, the report cards for individual schools generally described results of state tests in terms of the proficiency levels or achievement scores for the school overall, by grade level, subject matter, or in some cases, minority group or other subgroup. Some report cards also provided historical information on the school’s performance over several preceding years.
School characteristics, such as the size, demographics, staffing, and finances, were included in many cases along with the proficiency levels or achievement scores. Figure 4 is an example of the test score section of Colorado’s school report card for a hypothetical school. As in Colorado, many school report cards compared results to the average in the state or school district, which allowed parents to see how well their children’s school was doing—not just in relation to state standards but also in relation to the performance of all other public schools in the state or district. However, these report cards were primarily designed to provide descriptive information for parents and to give an indication of school performance, not to evaluate the relative effectiveness of one school versus another. Report cards usually did not directly compare the performance of one school against other similar schools, and when they did, the comparison schools selected were, by necessity, matched at the school level, rather than the individual student level. Thus, differences in school performance at any particular grade might be due to differences in the students in that grade, as the reports released by the Leona Group warned, rather than due to factors related to the management or educational strategies of the school. For this reason, report cards, while useful to parents, are not the best source of information if the goal is to evaluate the effectiveness of one school compared with another. Company reports, a second source of school performance information, tended to provide a summary of how well students at all the company’s schools in one or more states were doing over a period of several years. 
Generally available through the Internet, reports from three companies—Mosaica Education, Inc.; the National Heritage Academies; and Edison Schools—emphasized broad patterns, such as gains in achievement test scores or proficiency levels that were averaged across schools, grades, and subjects tested. Our descriptions of the companies’ findings are based on their public reports and not on our independent review of their methodologies or conclusions. Both the Mosaica and National Heritage Academies reports compared student performance to national norms or state averages. The Mosaica Education, Inc., report summarized student gains on tests administered from the fall of school year 1999-2000 through the spring of 2001-02 at its 18 schools in 5 states and the District of Columbia. According to the report, there was sustained growth in average achievement scores over time, with an increase in the proportion of Mosaica students scoring as well or better than the average student on a nationally normed test and a commensurate decrease in the proportion scoring at or below the 25th percentile. On the basis of these test results, the report stated that about a third of Mosaica’s students ranked in the top one-half of the nation’s students in school year 2001-02. The National Heritage Academies report used individual student performance on the state’s achievement tests to compare two groups of students attending the company’s 22 schools in Michigan in school year 2000-01—veteran students who took the test at least 2 years after they applied to the school and newcomers who took the test less than 2 years after they applied. The study found a relationship between time associated with the company’s schools and higher performance, with veteran students outperforming newcomers across all subjects and grades tested and also outperforming state averages on 8 out of 10 tests.
The report cautioned, however, that such evidence is not proof of causation and that some other factors not accounted for in the study might be responsible for the results. The Mosaica and National Heritage Academies reports both provided a broad view of overall company performance that, along with school report cards, could give parents more information on which to base their decisions about their children’s schooling. However, like school report cards, these two company studies were not designed to more directly assess school effectiveness. Neither company report included comparisons with students at similar traditional schools or addressed the question of whether the patterns of achievement that they identified might also be found in other schools as well. Edison’s annual report for 2001-02 used a methodology that went further toward assessing school effectiveness than other company reports we examined. In addition to providing a summary of how well its students were doing over time, Edison compared some of its schools with traditional schools. Generally, the report summarized trends in performance at 94 of Edison’s 112 school sites in multiple states over several years, compared to state and district averages. According to the report, most schools had low levels of achievement at the time Edison assumed management, but achievement levels subsequently increased at most of its school sites. Trends were also provided for several subsets of its schools, including a comparison of 66 of the 94 Edison schools that could be matched with 1,102 traditional schools on two demographic variables. Traditional schools selected as matches were those considered similar in terms of the percentages of students who were African-American and/or Hispanic and who were eligible for the free and reduced-price school lunch program, an indicator of low income.
Edison compared the average scores of students in Edison schools with average scores of students in the traditional schools and found that its schools averaged gains that were about 2 percentage points or 3 percentiles higher per year than those of traditional schools and that about 40 of its 66 schools outperformed the traditional schools. However, the Edison analysis was limited by the fact that it was conducted using aggregated, school-level data and did not control for differences in the individual students being compared. Edison noted that it has taken steps to strengthen the way it evaluates the progress of its students and schools by commissioning a study by RAND, a nonprofit research organization that has evaluated educational reforms. The study began in 2000 and is scheduled for release in the summer of 2004. Where possible, RAND plans to compare the scores of individual Edison students to those of traditional public school students with similar characteristics. No Consistent Pattern of Differences in Scores on State Tests Found between Public Schools Managed by Private Companies and Comparable, Traditional Elementary Schools Differences in student performance on state assessments between privately managed public schools and comparable, traditional public schools varied by metropolitan area for the grade levels in our study. Average student scores were significantly higher in both reading and math for fifth graders in 2 privately managed schools, 1 in Denver and 1 in San Francisco, compared with similar traditional public schools, as were gains over time when we examined a previous year’s scores for these students. However, fourth grade scores in the privately managed school in Cleveland and fifth grade scores at 2 privately managed schools in St. Paul were significantly lower compared with scores in the similar traditional schools.
In Detroit, average fifth grade reading scores were significantly lower in 6 of the 8 privately managed schools, and math scores were lower in all but 1 privately managed school. No significant differences in reading or math scores were found between the privately managed school and comparison schools in Phoenix. Scores on State Tests Were Higher in Privately Managed Schools in Denver and San Francisco Average scores on state tests for fifth grade students attending privately managed schools in Denver and San Francisco were significantly higher compared with students attending similar, traditional public schools. Table 2 shows the characteristics used in matching privately managed and traditional schools in Denver and San Francisco and how the selected schools compared on these characteristics. As shown, schools generally had high proportions of minority and low-income students (as measured by free/reduced-lunch program eligibility) and students with limited English proficiency (LEP). For our test score analyses, we were able to obtain data on characteristics shown in table 2 for individual students in our study, as well as data on student mobility. We used these data in the test score analyses to further control for student differences in the grade level we studied. (See app. II, where tables 5 and 6 show detailed results of these analyses.) As shown in figure 5, in Denver the average reading score of 572 for fifth grade students in the privately managed public school is higher, compared with the average of 557 for students in similar traditional public schools. The average math score of 467 at the privately managed school is also higher than the 440 average score in the comparison traditional schools. For both reading and math, differences in scores remained significantly higher after we controlled for factors representing differences in the student populations. 
Figure 5 also shows the difference in reading performance, controlling for other factors, between the typical student at the privately managed school and the average student at the same grade level in the similar traditional schools in Denver. The bell curve represents the distribution of combined student scores in the traditional schools, with the lighter figure representing the student scoring at about the 50th percentile. The shaded figure represents the average student from the privately managed school. Although this student’s score is at about the 50th percentile in the privately managed school, the same score would place him or her at about the 60th percentile when compared against the scores of students in the traditional schools. The difference in math scores suggests a similar outcome—that is, the average student in the privately managed school would score at about the 60th percentile in the comparison traditional schools. In San Francisco, fifth grade reading scores averaged 636 for students in the privately managed school and 627 for students in the comparison traditional schools. Performance in mathematics of 640 was also higher for fifth grade students at the privately managed school, compared with 623 for students in the similar traditional schools. (See fig. 6.) As in Denver, these differences were significant when controlling for other factors. This analysis suggests that an average student in the privately managed school would likely exceed about 60 percent of students in the traditional comparison schools in reading and about 65 percent of those students in math. In both Denver and San Francisco, we were able to examine student performance over time, and our findings of achievement over time were similar to the findings described above. Students attending the privately managed schools showed significantly greater gains over time than the students in the comparison traditional schools. 
Specifically, fifth grade students in our study who had attended their privately managed schools since the third grade demonstrated significantly higher achievement gains between grades 3 and 5 than did such students in the traditional comparison schools. Scores on State Tests Were Lower in Privately Managed Schools in Cleveland and St. Paul Average scores on state tests for fourth grade students attending privately managed schools in Cleveland and fifth grade students attending privately managed schools in St. Paul were significantly lower compared with scores of students attending similar traditional public schools. One privately managed school in Cleveland and 2 privately managed schools in St. Paul were examined, and as in Denver and San Francisco, the schools in our study from these cities were high minority and low-income schools. Table 3 shows the characteristics used to match schools in Cleveland and St. Paul and how the schools selected compared on these characteristics. For our test score analyses in Cleveland, we were able to obtain data on characteristics shown in table 3 for individual students in our study, as well as data on student mobility. In St. Paul, we obtained data on all characteristics shown in table 3 for individual students, except special education. In addition, we were able to obtain data on limited English proficiency. We used these data in the test score analyses for both cities to further control for student differences in the grade level we studied. (See app. II, where tables 7, 8, and 9 show detailed results of these analyses.) Figure 7 shows average reading scores for the privately managed school in Cleveland and its set of comparable schools. The average scores were significantly lower for students attending the privately managed school in both reading and math for the school years examined after controlling for other factors. The magnitude of the difference in reading scores is also shown in figure 7.
As can be seen in the figure, the score of the average student in the fourth grade in the privately managed school falls at about the 20th percentile when compared with student scores in the comparison traditional schools. Similarly, the difference in math scores implies that the average student in the privately managed school would score at about the 20th percentile in the traditional comparison schools. In St. Paul, we studied 2 privately managed schools (labeled school A and school B in figure 8) and used a different set of comparison traditional schools for each privately managed school. The average scores in both reading and math were significantly lower for students at both privately managed schools studied compared with similar traditional schools. The differences for the first privately managed school suggest that an average student at that school would score at about the 30th percentile in reading and the 20th percentile in math if attending the comparison traditional schools. The differences in scores at the second privately managed school imply that the score of an average student would be at about the 30th percentile in the comparison traditional schools in both reading and math. Scores on State Tests in Privately Managed Schools Varied in Detroit and Were Similar to Traditional Schools in Phoenix Average scores for fifth grade students in Detroit varied, but tended to be lower in both reading and math for students attending privately managed schools than for students attending similar traditional schools. As in other locations, student populations in schools we studied in Detroit tended to be minority and low income. (See app. III for other school characteristics.) Except for race/ethnicity, we did not use individual student demographic data in the Detroit test score analyses because the demographic data we received on individual students did not appear to be accurate.
In spite of these missing data, we believe the analyses provide useful information, given the degree of similarity among the matched schools. As shown in figure 9, reading scores were significantly lower for students in six of the privately managed schools compared with students in similar traditional schools in Detroit. The size of these differences generally suggested that an average student attending the privately managed schools would score at about the 30th percentile in the similar traditional schools. In one comparison (labeled C in fig. 9), reading scores were significantly higher in the privately managed school compared with similar traditional schools. Students at this privately managed school would likely perform at about the 70th percentile in the traditional schools. For one other privately managed school (comparison B), scores were not significantly different. Math scores followed a similar pattern, with student scores significantly lower at 7 of the 8 privately managed schools when compared with similar traditional schools. Scores for average students in the privately managed schools would range from about the 15th percentile to about the 35th percentile in the traditional schools, depending on the particular set of schools compared. In the one higher-performing privately managed school (comparison B in fig. 10), an average student would score at about the 70th percentile in similar traditional schools. In Phoenix, scores of fifth grade students at the privately managed school did not differ significantly from scores at similar traditional schools. As in the other locations studied, both the privately managed and similar traditional schools had high percentages of minority and low-income students. Table 4 shows the characteristics of the schools in our study in Phoenix. For test score analyses, we were able to obtain reliable data for minority status for individual students.
Additionally, we obtained reliable data on student mobility, and these were included in our analysis. Data on special education and limited English proficiency for individual students were not believed to be accurate and were not included. Individual student data on free and reduced-lunch eligibility were not available. Figure 11 shows average student scores for reading and math in the privately managed school and in the comparison traditional schools for Phoenix. Scores were not significantly different in either reading or math. We also analyzed changes in reading and math scores between third and fifth grade for those students who had tested in the same school in both years. Again, we found no significant difference between students attending the privately managed school and those attending traditional schools. Concluding Observations As opportunities increase for parents to exercise choice in the public education arena, information on school performance, such as that found in school report cards produced by many states, becomes more important. Such information can be useful to parents in making school choices by providing a variety of information about schools, including how they are performing in terms of students meeting state achievement standards or relative to statewide averages. However, educators and policymakers often want to know not only how well schools are performing but also the factors that contribute to their high or low performance so that successful strategies can be emulated. Answering this kind of evaluative question requires a different kind of methodology and more complex analyses to isolate the effects of the particular strategies of interest—educational practices, management techniques, and so on— from the many other factors that could affect student achievement. 
Although not a comprehensive impact evaluation, our study investigates the effect of school management by comparing traditional and privately managed schools and by controlling for differences in the characteristics of students attending the schools. In this way, our study provides a different type of information than that typically found in school report cards. While our study explores the role of school management, it has certain important limitations, as discussed earlier and in appendix I. Among these are data issues commonly encountered by educational researchers, for instance, lack of test score data for successive years and unreliable demographic data for individual students in some sites. However, with the implementation of NCLBA, more rigorous studies should be possible, as annual testing of all grades is phased in and as the quality of demographic data improves under requirements to report progress for various subpopulations of students, based on such characteristics as race and low-income status. Finally, our mixed results may be evidence of the complexity of the factor under study. Our study analyzed differences between 2 categories of schools, grouped by whether they were traditional, district-managed schools or managed by a private company. However, these schools may have differed in other ways not included in our study—for example, curricula, staff composition and qualifications, and funding levels—and these factors may also have affected student achievement. Any of these factors or combination of factors could account for the differences we found or may have masked the effects of differences we otherwise would have found. Agency Comments We provided a draft of this report to the Department of Education for review and comment. Education’s Executive Secretariat confirmed that department officials had reviewed the draft and had no comments.
We are sending a copy of this report to the Secretary of Education, relevant congressional committees, appropriate parties associated with schools in the study, and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7215. See appendix IV for other staff acknowledgments. Appendix I: Scope and Methodology To compare achievement of public elementary schools in large cities operated by private management companies with similar traditional public schools, we analyzed individual student scores on state assessments in reading and mathematics. We matched each privately managed public school with 2 to 4 traditional public schools located in the same city that were similar in terms of size, grade span, and student characteristics. To confirm the reasonableness of the matches, we spoke with principals in all of the privately managed schools in our study and visited most of the schools. We also spoke with principals and visited many of the traditional schools selected. For selected grade levels, we compared the individual student scores of students attending the privately managed schools with those of students in the similar traditional public schools. We also compared changes in individual student performance over time where such data were available. This appendix describes the scope and school selection, outcome measures and analytic methods, and the limitations of the analysis. Scope and School Selection Using available public information, we attempted to identify all privately managed public elementary schools in large urban areas that had been in continuous operation by the same management company since the 1998-99 school year. 
We defined a large urban area for this study as a central city with a population of at least 400,000 in a standard metropolitan statistical area with a population of at least 2,000,000. We identified 17 public elementary schools managed by private companies meeting these criteria. The 17 schools were located in Cleveland, Ohio; Denver, Colorado; Detroit, Michigan; Phoenix, Arizona; St. Paul, Minnesota; and San Francisco, California. We matched each of these privately managed schools with 2-4 similar traditional public schools in the district where the privately managed school was located. To select similar traditional public schools, we employed a “total deviation” score procedure. For each public elementary school in the defined public school district and the privately managed school, we determined the following school characteristics: (1) racial and ethnic percentages, (2) percent special education, (3) percent eligible for free and reduced lunch, (4) percent limited-English proficient, and (5) student enrollment. We calculated z-scores (the statistic that indicates how far and in what direction the value deviates from its distribution’s mean, expressed in units of its distribution’s standard deviation) for each characteristic, and then calculated the absolute value of the difference between the z-score of the privately managed school and the z-score of each traditional public school on that characteristic. For each school, we summed the absolute difference in z-scores into a total deviation score. The total deviation score represents the sum of the differences between the privately managed public school and the candidate traditional public schools. Traditional public schools were considered a close match if the total deviation score divided by the number of characteristics for which we computed z-scores was less than or equal to 1.0. 
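The total deviation procedure described above can be sketched as follows. The school names and characteristic values here are hypothetical, chosen only to illustrate the computation; the actual study used each district's full pool of elementary schools.

```python
import statistics

# Hypothetical characteristic values for candidate comparison schools and
# the privately managed school: percent minority, percent special education,
# percent free/reduced lunch, percent limited-English proficient, enrollment.
district_schools = {
    "School 1": [92.0, 11.0, 85.0, 14.0, 430],
    "School 2": [60.0, 9.0, 55.0, 5.0, 610],
    "School 3": [88.0, 12.0, 80.0, 12.0, 450],
}
privately_managed = [90.0, 10.0, 82.0, 13.0, 440]

# Mean and standard deviation of each characteristic across the pool of schools.
pool = list(district_schools.values()) + [privately_managed]
means = [statistics.mean(col) for col in zip(*pool)]
sds = [statistics.stdev(col) for col in zip(*pool)]

def z_scores(values):
    # Express each characteristic in standard-deviation units from the pool mean.
    return [(v - m) / s for v, m, s in zip(values, means, sds)]

target = z_scores(privately_managed)
n_chars = len(privately_managed)

# Total deviation score: sum of absolute z-score differences across all
# characteristics; a close match averages no more than 1 standard deviation
# of difference per characteristic.
close_match = {}
for name, chars in district_schools.items():
    total_dev = sum(abs(a - b) for a, b in zip(z_scores(chars), target))
    close_match[name] = total_dev / n_chars <= 1.0
```

In this illustration, Schools 1 and 3, which resemble the privately managed school on every characteristic, satisfy the criterion, while School 2 does not.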
A score less than or equal to 1.0 indicates that the traditional school did not deviate from the privately managed school by more than 1 standard deviation when averaging across all variables considered in the match. For example, if 8 variables were used to calculate the total deviation score and the total deviation score was 7.8, the amount that the candidate school deviated from the privately managed school would be, on average, less than 1 standard deviation. All comparison schools selected for our analyses met this criterion for a close match. After mathematically selecting close matches, we consulted with public school district officials about the schools selected. These consultations led to adjustments to our final selection of matches as follows. In St. Paul, traditional public schools closely matching the privately managed schools included magnet schools and neighborhood (that is, attendance-zone) schools. The two “best” matching magnet schools and the two “best” neighborhood schools were selected as matches for the analysis. Similarly, in Cleveland, traditional public schools closely matching the privately managed schools included former magnet schools and traditional neighborhood schools. For balance in matching, the two “best” matching former magnet schools and two “best” matching neighborhood schools were selected as matches for the analysis. In Denver, the five closest matching schools were all located in a distinct neighborhood, geographically distant from the privately managed school. In consultation with local school district personnel, the two “best” matching schools from this area and the two “best” matching schools from outside this area were selected for the analysis. In San Francisco, one of the three traditional school matches was discarded because it had a special teacher training program, resulting in only two matches with the privately managed school.
In Detroit, the best three matching traditional schools were selected except in one instance where one of the matching schools was discarded because a subsequent site visit determined that the school had selection criteria for attendance based upon prior achievement. In Phoenix, there were 21 elementary school districts located in the city, and 13 of these districts comprise the Unified Phoenix High School District. Because the privately managed schools were located within the Unified Phoenix High School District, we drew matches from those 13 districts: using the “best” matching school from each elementary school district as a pool, we selected the best four matches, each from a different school district. Two privately managed schools in Phoenix and one privately managed school in Cleveland were dropped from the analysis because no matching traditional schools were found using our methodology. This resulted in a total of 14 privately managed schools included in the study, 8 of which were located in Detroit. Schools selected were managed by Designs for Learning, Inc.; Edison Schools; The Leona Group; Mosaica Education, Inc.; Schoolhouse; and White Hat Management. Measures and Analytic Methods We used student reading and math scale scores on routinely administered state assessments as measures of academic achievement. At the time of our study, the most recent data available were for school year 2001-02. Test scores and student characteristic data were obtained from either the school district or state education agency. We used a variety of approaches to verify the accuracy of these data. In most cases, we verified data by comparing a sample of the data received against school records examined at the school site.
In Detroit, data verification indicated student low-income, special education, and mobility data provided by the state were unreliable, and we decided not to use these data in our final analyses. In Phoenix, data verification indicated that student limited-English proficiency and special education data provided by the state for the privately managed school were unreliable, and this was confirmed with diagnostic analysis. Therefore, we were unable to include these control variables in our final analyses. For each privately managed school and its set of matched, comparison schools, we selected the highest elementary grade for which test scores were available. We collected test score information for 2 school years, 2000-01 and 2001-02, except in Detroit, where only 2001-02 scores were used due to difficulties obtaining data and changes in the test given. For each site, we compared reading and math student scores in the privately managed school(s) with the scores of same-grade students in the set of matched, comparison schools. The scores for the 2000-01 and 2001-02 school years were combined in the analysis. In addition, in three locations where testing occurred more frequently, Denver, Phoenix, and San Francisco, we obtained third grade scores for students who had taken the state assessment in the same school and examined the difference in scores over time. For each site, we conducted multivariate ordinary least squares (OLS) regression analysis to quantify differences in student achievement while controlling for school type and student characteristics. Specific independent variables included in the regression model were as follows:

School type, with the traditional public school being given a value of 1 and the privately managed school a value of 0.

Mobility, with a value of 1 given to students who had not attended the school at which they took the state assessment for the full 2 years.

Limited English proficiency (LEP), with a value of 1 given if the child was designated as limited-English proficient.

Special education, with a value of 1 given if the student was enrolled in special education.

Low-income, with a value of 1 indicating the student was eligible for free or reduced lunch.

Race and ethnicity, with a value of 1 given for the child’s appropriate minority racial/ethnic identity. Each child was placed in only one racial category, and the number of racial categories used varied from place to place. When numbers for a particular racial group in a city were small, they were combined collectively as “other minority.” (Specific racial and ethnic identities employed in each city are set out in the results in app. II.)

The model took the form

score(i) = b0 + b1(school type) + b2(mobility) + b3(LEP) + b4(special education) + b5(low-income) + b6…bk(race and ethnicity) + e(i)

where (1) i is the individual student, (2) low-income is determined by eligibility for free and/or reduced lunch, and (3) race and ethnicity are distinct codes dependent upon the geographical area. We also performed analyses on different groupings of the comparison schools in Denver, Cleveland, and St. Paul. In Denver, 2 of our matched schools were in a distinct neighborhood that school district personnel believed might be atypical; in Cleveland and St. Paul several of the matched schools were magnet or former magnet schools. We re-analyzed the data in each of these cities using these groupings as factors. The overall results were unchanged, with the exception that in Denver, reading scores were not significantly different when the privately managed school was compared with the 2 schools not in the distinct neighborhood. In conducting these analyses, we performed certain diagnostic and analytic tests to confirm both the appropriateness of aggregating categories in our analyses and the reasonableness of assumptions pertaining to normality and homogeneity of variance. In addition, we determined the extent of missing data and performed sensitivity analyses to assess the effect on our results.
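As a rough sketch of how the reported percentiles follow from such regressions: with school type as the only regressor, the OLS coefficient on school type reduces to the difference in group means, which is then divided by the pooled standard deviation and mapped through the normal curve. The scale scores below are hypothetical; the study's actual models also included the demographic controls listed above.

```python
import math
import statistics

# Hypothetical scale scores for one grade level in a privately managed
# school and its set of comparison traditional schools.
private_scores = [572, 630, 510, 585, 540, 601]
traditional_scores = [557, 610, 480, 595, 505, 560, 622, 513]

# With school type as the only regressor, the unstandardized OLS
# coefficient on school type equals the difference in group means.
mean_diff = statistics.mean(private_scores) - statistics.mean(traditional_scores)

# Pooled standard deviation of the two groups (sample variances, n - 1).
n1, n2 = len(private_scores), len(traditional_scores)
v1, v2 = statistics.variance(private_scores), statistics.variance(traditional_scores)
pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))

# Effect size: the z-score of the average privately managed student
# relative to the distribution of comparison-school scores.
effect = mean_diff / pooled_sd

# Percentile: area under the normal curve below that z-score.
percentile = 100 * 0.5 * (1 + math.erf(effect / math.sqrt(2)))
```

With these illustrative scores the effect is roughly 0.37 standard deviations, placing the average privately managed student in the low 60s of percentile rank in the comparison distribution, the same style of interpretation used in figures 5 through 11.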
We determined that missing case level data had a negligible effect on our results. To illustrate the magnitude of differences found, we computed effect sizes based on standardized mean differences. Using the OLS regression results, we divided the unstandardized coefficient associated with school type by the pooled standard deviation to obtain z-scores for average students in the privately managed and traditional schools. The reported percentile was the area of the normal curve associated with the z-scores. Tables 5-12 in appendix II list the regression results and independent variables included in our analyses. The size and significance of the differences we report were derived from OLS regression models. We obtained results that were almost identical to the OLS results when we used robust estimation procedures to calculate the standard errors associated with the estimated differences. We also considered robust regression models that allowed for the clustering, and lack of independence, of students within schools. These models yielded somewhat fewer differences that were statistically significant at the 95-percent confidence level. We do not focus our reporting on the results of the models that account for clustering, however, since the statistical properties and validity of such models when applied to data with a very small number of clusters (in this case, 3 to 5 schools) are questionable. Changes to significance levels of the school type coefficients due to robust standard errors and robust standard errors with clustering are, however, noted in appendix II. Limitations of the Analysis The findings in this study are subject to typical limitations found in quasi-experimental designs. We examined the highest elementary grades tested for school years 2000-01 and 2001-02, and student achievement in these grades and years may not be indicative of student achievement in other grades and years in those schools.
In addition, our matching process may not have produced equivalent groups for comparison. We mitigated this potential problem by using individual student characteristics in our analyses. However, reliable and complete student demographic data were not available in all sites, which resulted in the elimination of important factors from the model in several sites. In addition, other factors such as student ability, prior achievement, operating environment, reasons students enrolled in privately managed schools, and parental involvement, may be related to student achievement and are not accounted for in the study. Finally, our examination of student performance over time, that is, changes in achievement between grades, also has some limitations. First, the data allowed a study of achievement over time in only 3 of the 6 sites. In addition, the analyses included only students who continuously attended the school over the time period studied, and this in some cases eliminated more than half of the subjects from the analyses. We were unable to determine whether those students who remained in the school for this period were different in some important way from those who left. Appendix II: Tables of Regression Results for Differences in Student Achievement Scores on State Assessments Tables 5-12 in this appendix show the variables used in the OLS regression models and the results of those analyses. The results are presented separately by city and for each privately managed school and its particular set of matching traditional schools, with reading and math presented within the same table in all cases, except Detroit. The number of observations, shown as N, is the total of the observations in the privately managed school and its set of comparison schools used in each regression analysis. We also ran similar regression analyses using robust estimation procedures with and without clustering, as discussed in appendix I. 
In most cases, effects of school type remained significant at the 95-percent confidence level. Exceptions are indicated by table notes. Appendix III: Characteristics of Privately Managed Schools and Comparable Traditional Public Schools in Detroit Appendix IV: GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to those named above, Peter Minarik, Mark Braza, Douglas M. Sloane, and Shana Wallace made key contributions to this report. Deidre M. McGinty and Randolph D. Quezada also provided important support. Related GAO Products Title I: Characteristics of Tests Will Influence Expenses; Information Sharing May Help States Realize Efficiencies. GAO-03-389. Washington, D.C.: May 8, 2003. Public Schools: Insufficient Research to Determine Effectiveness of Selected Private Education Companies. GAO-03-11. Washington, D.C.: October 29, 2002. School Vouchers: Characteristics of Privately Funded Programs. GAO-02-752. Washington, D.C.: September 10, 2002. Title I: Education Needs to Monitor States’ Scoring of Assessments. GAO-02-393. Washington, D.C.: April 1, 2002. School Vouchers: Publicly Funded Programs in Cleveland and Milwaukee. GAO-01-914. Washington, D.C.: August 31, 2001. Charter Schools: Limited Access to Facility Financing. GAO/HEHS-00-163. Washington, D.C.: September 12, 2000. Charter Schools: Federal Funding Available but Barriers Exist. GAO/HEHS-98-84. Washington, D.C.: April 30, 1998. Charter Schools: Recent Experiences in Accessing Federal Funds. GAO/T-HEHS-98-129. Washington, D.C.: March 31, 1998. Charter Schools: Issues Affecting Access to Federal Funds. GAO/T-HEHS-97-216. Washington, D.C.: September 16, 1997. Private Management of Public Schools: Early Experiences in Four School Districts. GAO/HEHS-96-3. Washington, D.C.: April 19, 1996.
Over the last decade, a series of educational reforms have increased opportunities for private companies to play a role in public education. For instance, school districts have sometimes looked to private companies to manage poorly performing schools. The accountability provisions of the No Child Left Behind Act of 2001 may further increase such arrangements because schools that continuously fail to make adequate progress toward meeting state goals are eventually subject to fundamental restructuring by the state, which may include turning the operation of the school over to a private company. GAO determined the prevalence of privately managed public schools and what could be learned about student achievement in these schools from publicly available sources. To do so, GAO examined existing data on the number and location of privately managed schools and reviewed a variety of reports on student achievement. In addition, GAO compared standardized test scores of students attending privately managed public schools with scores of students attending similar traditional public schools. GAO identified privately managed schools that had been in operation for four years or more in 6 large cities and matched these schools with a group of traditional schools serving similar students. GAO then analyzed student scores on state reading and math tests at selected grade levels, controlling for differences in student populations. The number of public schools managed by private companies has tripled in the last 5 years according to data compiled by university researchers, although such schools comprise less than 0.5 percent of all public schools. In the 2002-03 school year, nearly 50 private companies managed over 400 public schools nationwide. These companies managed schools in 25 states and the District of Columbia, with about one-half of the schools located in Arizona and Michigan. 
Information on student achievement at these schools was available in the form of state- or district-issued school report cards and annual reports issued by the management companies. Although these reports provided valuable descriptive information, they were generally not designed to answer research questions about the relative effectiveness of privately managed schools compared with traditional schools in raising student achievement. Consequently, GAO conducted test score analyses that provide further insight into student achievement in these schools. GAO's analyses of student test scores in 6 cities yielded mixed results. Scores for 5th grade students in Denver and San Francisco were significantly higher in both reading and math in two privately managed schools when compared with traditional schools serving similar students. However, 4th grade scores in reading and math were significantly lower in a privately managed public school in Cleveland, as were 5th grade scores in two privately managed schools in St. Paul. In Detroit, where eight privately managed schools were studied, reading and math scores of 5th graders in privately managed schools were generally lower. In Phoenix, GAO found no significant differences. GAO's results are limited to the schools and grade levels examined and may not be indicative of performance at other schools.
Army Modularity Is a Significant Undertaking The Army’s modular force initiative, which has been referred to as the largest Army reorganization in 50 years, encompasses the Army’s total force—active Army, National Guard, and Army Reserve—and directly affects not only the Army’s combat units but also related support and command and control structures. Restructuring its units is a major undertaking and requires more than just the movement of personnel or equipment from one unit to another. The Army’s new designs are equipped and staffed differently than the units they replace. Therefore, successful implementation of this initiative will require many changes, such as new equipment and facilities, a different mix of skills and occupational specialties among Army personnel, and significant changes to training and doctrine. The foundation of Army modularity is the creation of brigade combat teams—brigade-sized units that will have a common organizational design and will increase the pool of available units for deployment. The Army believes a brigade-based force will make it more agile and deployable and better able to meet combatant commander requirements. Not only does the Army expect to produce more combat brigades after its restructuring, it believes the brigades will be capable of independent action through the introduction of key enablers, such as enhanced military intelligence capability and communications, and by embedding various combat support capabilities in the brigade itself instead of at a higher echelon of command. The Army’s goal is for each new modular brigade combat team, which will include about 3,000 to 4,000 personnel, to have at least the same combat capability as a brigade under the current division-based force, which ranges from 3,000 to 5,000 personnel. Since there will be more combat brigades in the force, the Army believes its overall combat capability will be increased as a result of the restructuring, providing added value to combatant commanders.
By the end of fiscal year 2006, the Army plans to reorganize its 10 active divisions, expanding from the current 33 to 43 modular, standardized brigade combat teams and creating new types of command headquarters to replace the current division headquarters structure. According to Army officials, this is a very quick pace for a restructuring of this magnitude. The Army has already begun the conversion with 4 divisions: the 3rd Infantry and the 101st Airborne Divisions, which we have visited; the 4th Infantry Division, which we plan to visit this spring; and the 10th Mountain Division. The 3rd Infantry Division has redeployed to Iraq in its new configuration, and the 101st is scheduled to redeploy later this year. The Army’s organizational designs for the brigade combat teams have been tested by its Training and Doctrine Command’s Analysis Center at Fort Leavenworth against a variety of scenarios, and the Army has found the new designs to be as effective as the existing brigades in modeling and simulation. During the next few years, the Army plans to collect lessons learned from deployments and major training exercises and make appropriate refinements to its unit designs, equipment requirements, and doctrine. By fiscal years 2009-10, the Army plans to complete the creation of modular, standardized supporting brigades as well as a reorganization of its Corps and theater-level command and support structures. Ninety-two support brigades and five higher echelon headquarters will be included in this initiative, yet another indication of the far-reaching nature of the Army’s modularity plan. Although our work has focused on the active component, restructuring of the reserve component into modular units will also be a major undertaking. The Army plans to convert the National Guard’s existing 38 brigades into 34 modular brigade combat teams by fiscal year 2010. However, the Army is considering accelerating this schedule, according to Army officials. 
In addition, the Army Reserve will have to realign its support units in accordance with new modular designs. Like the active component, the reserves will have to manage these conversions to the new modular organizations while continuing to provide forces to Iraq. Because of the high degree of complexity associated with establishing a modular force while managing deployments to ongoing operations, the Army has developed a number of plans and processes, such as the Army Campaign Plan, and has held periodic meetings within the Army headquarters and its components and major commands, to manage these changes. The Army’s senior leadership is playing a key role in these processes. Army May Face Challenges in Staffing and Equipping Modular Brigade Combat Teams The Army is likely to face a number of challenges in fully staffing and equipping modular combat brigades as designed. Although somewhat smaller in size, the new modular brigades are expected to be as capable as the Army’s existing brigades because they will have different equipment, such as advanced communications and surveillance equipment, and a different mix of personnel and support assets. Although the Army has an approved and tested design for the new modular brigades, it has also established a modified list of equipment and personnel that it can reasonably expect to provide to units undergoing conversion based on its current inventory of equipment, planned procurement pipelines, and other factors such as expected funding. The Army expects to use this modified list of equipment and personnel to guide the conversion of existing divisions to modular brigades for the foreseeable future. Our preliminary work indicates significant shortfalls in the Army’s capacity to equip and staff units, even at modified levels. 
For example, according to Army officials, modular brigade combat teams will require additional soldiers in personnel specialties such as military intelligence, truck drivers, civil affairs, and military police to achieve the planned capability. Military intelligence is one of the most critical of these specialties because military intelligence enables brigade combat teams to conduct 24-hour combat operations, cover highly dispersed battlespaces, and increase force protection. According to Army officials, the Army needs to add 2,800 military intelligence specialists by the end of fiscal year 2005 to meet near-term military intelligence shortages. Moreover, the Army needs an additional 6,200 military intelligence specialists through fiscal year 2010 to meet modular force requirements. Providing additional military intelligence specialists, particularly at the more senior levels, may take several years because of the extensive training required. At the time of our visit, the 3rd Infantry Division’s four brigade combat teams each had less than 50 percent of its military intelligence positions filled. Although the Army was later able to fill the division’s needs by reassigning military intelligence specialists from other units prior to its deployment to Iraq in January 2005, many of these soldiers were redeployed soon after returning from overseas. Moreover, transferring soldiers from other units may make it more difficult for the Army to fill positions in the remaining divisions scheduled to be restructured. We are continuing to follow up on Army actions to address these shortages. Similarly, modular brigade combat teams require significant increases in the levels of equipment, particularly command, control, and communications equipment; wheeled vehicles; and artillery and mortars. 
Examples of command, control, and communications equipment that are key enablers for the modular brigade combat teams include advanced radios, Joint Network Node systems, ground sensors such as the Long-Range Advanced Scout Surveillance System, and Blue Force Tracker, among others. This critical equipment makes possible the joint network communications, information superiority, and logistical operations over a large, dispersed battlespace in which modular forces are being designed to effectively operate. Although the Army has some of this equipment on hand, the levels being fielded to brigade combat teams are well below the levels tested by the Training and Doctrine Command. As a result, officials from both divisions we visited expressed concern over their soldiers’ ability to train and become proficient with some of this high-tech equipment because the equipment is not available in sufficient numbers. Moreover, it is not clear yet how the Army plans to bring brigades that have already undergone modular conversion up to Training and Doctrine Command tested levels of personnel and equipment following their deployments. For example, the design requires a division with four modular brigade combat teams to have approximately 28 tactical unmanned aerial vehicle systems. These systems provide surveillance and reconnaissance for soldiers on the battlefield and enable them to more safely carry out their missions. However, because of current shortages, the 3rd Infantry Division and the 101st Airborne Division are only authorized to have 4 systems, and at the time of our visits, the 3rd Infantry Division had 1 and the 101st Airborne had none on hand. The Army requested funding for only 13 of these systems in the fiscal year 2005 supplemental appropriation request to the Congress; thus, it remains unclear when the 3rd Infantry and 101st Airborne Divisions will receive their full complement of tactical unmanned aerial vehicle systems. 
Also, the Army may continue to provide other divisions undergoing conversion with limited quantities that fall short of the design requirement. Army Faces a Number of Key Decisions That Could Affect Modular Force Requirements According to Army modularity plans, the Army is continuing to assess its requirements and may make some key decisions in the future that will affect the size and composition of the modular force as well as its cost. First, the Army’s Campaign Plan calls for a potential decision by fiscal year 2006 on whether to create 5 additional modular brigade combat teams. Adding 5 brigades would provide additional capability to execute the defense strategy but would require additional restructuring of people and equipment. Second, according to the 2004 Army Transformation Roadmap, the Army is evaluating whether to add a third maneuver battalion to brigade combat teams in fiscal year 2007 to prepare for the fielding of the Future Combat Systems Units of Action, which are designed with three maneuver battalions. Additionally, according to the Army’s Training and Doctrine Command, early testing demonstrates that brigade combat teams with three maneuver battalions offer distinct advantages over two-battalion formations because they provide robust, flexible, full-spectrum capability. The command is conducting additional analysis to assess the value and cost of adding a third combat maneuver battalion to the modular brigade combat teams. If the Army later decides to add a battalion to some or all of the 43 or potentially 48 modular brigade combat teams, it will need to assign thousands of additional soldiers and field additional equipment. The Army also faces a number of decisions in finalizing its plans for creating modular support brigades. Modular support brigades that will replace the current division-based combat service and support structure are not scheduled to be fully in place until fiscal years 2009-10. 
The Army has finalized the designs and requirements for three of the five types of support brigades, but has not yet made final design decisions for the other two. The support brigades are key components of the Army’s concept of modular forces being more responsive and expeditionary than current forces. Until the modular support brigades are fully organized, equipped, and functional, the Army’s modular forces would not have these capabilities, and in the interim, combat support and combat service support would need to be provided by existing division-based support organizations. This means that for some time to come, even as the Army makes progress in achieving greater uniformity across the force, there will be a number of variations in the size and capability of available support units. Also, as with the decision to add additional battalions, until the Army completes all of its force structure designs for support brigades, it will not have a total picture of its personnel and equipment requirements. Finally, by fiscal year 2010 the Army plans to complete a reorganization of its corps and theater-level command and support structure. The Army’s plans would eliminate an entire echelon of command, moving from four levels to three and freeing additional personnel spaces that can help meet some of its modular force personnel requirements. While the Army expects to achieve efficiencies resulting from the reduction of command and support structures, their magnitude is not yet known and they may not be realized for several years. Moreover, while potentially somewhat more efficient, the new command-level designs are likely to require new command, control, and communications equipment to enable them to function in their updated roles, such as providing the basic structure for a joint headquarters. Cost Estimates for Fully Implementing Modularity Have Increased Significantly and Are Still Evolving The costs of modularity are substantial and are likely to grow. 
Since 2004, the Army’s cost estimates have increased significantly. In January 2004, the Army estimated that increasing the number of active modular brigade combat teams from 33 to 48 would cost $20 billion from fiscal years 2004 through 2011 based on a “rough order of magnitude” estimate. As of July 2004, the Army added $8 billion to address costs for reorganizing the reserve component, bringing the total estimated cost for reorganizing the entire force to $28 billion. Our preliminary work highlighted several limitations in this estimate. For example, the July 2004 estimate:

- included costs of adding 15 light infantry brigades for the active component to bring the total number of active brigades to 48, but these costs were based on the current brigade structure, not the tested modular design;
- did not take into account the costs for upgrading existing active brigades, or other support and command elements; and
- accounted for construction of temporary, relocatable facilities, but did not allow for permanent upgrades to facilities or increases to other services provided at Army installations to accommodate the increase in modular units.

As of March 2005, the Army has revised its earlier estimate, now estimating that modularity will cost a total of $48 billion from fiscal years 2005 through 2011, an increase of 71 percent over its earlier $28 billion estimate. According to the Army, this estimate includes costs for a total of 43 active component brigades, covering upgrades to the existing 33 brigades and the creation of 10 new brigades, as well as 34 brigades in the reserve component. During our preliminary work we discussed and obtained summary information on the types of cost and key assumptions reflected in the Army’s estimates. However, we were unable to fully evaluate the estimates because the Army did not have detailed supporting information. 
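As a quick arithmetic check (a sketch added here, not part of GAO's analysis), the 71 percent figure follows directly from the two estimates:

```python
# Percentage increase from the July 2004 estimate ($28 billion)
# to the March 2005 estimate ($48 billion).
earlier_estimate = 28.0  # billions of dollars, July 2004
revised_estimate = 48.0  # billions of dollars, March 2005

increase_pct = (revised_estimate - earlier_estimate) / earlier_estimate * 100
print(round(increase_pct))  # 71
```

The exact ratio is about 71.4 percent, which GAO reports rounded to 71 percent.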
According to Army officials, the Army used the modular design, which has been informed by combat operations in Iraq, as the basis for developing the March 2005 revised estimate. They noted the estimate includes costs for the creation of new brigades as well as upgrades to existing brigades, costs for support and command elements, and costs for permanent facilities. However, unlike the original estimate, the current estimate does not include any personnel costs. According to Army officials, an increase in personnel endstrength is needed to simultaneously conduct operations and reorganize into a modular force. They told us these costs were excluded from the current estimate because it was difficult to differentiate between endstrength increases associated with conducting operations and those needed for modularity. Based on our preliminary review of the Army’s revised estimate and potential costs associated with modularizing the active component, we believe there are certain factors that could affect the overall cost for modularity, including some that will likely make it grow higher than the current estimate of $48 billion. First, the Army’s current cost estimate does not use the tested design as the basis for determining equipment costs. Rather, the estimate reflects costs for a lesser amount of equipment than called for in the tested design. According to Army officials, they estimated equipment costs in this manner because some equipment is not currently available or in production in sufficient quantities to meet modularity requirements. Second, if the Army decides to add 5 brigade combat teams to the current plan and/or an additional maneuver battalion to some or all brigades, the cost for modularity will increase significantly. For example, each modular brigade combat team, under the current design, would require 3,300 to 3,700 soldiers, for a potential total of up to 18,500 soldiers. 
While at least some of these personnel requirements could be offset with existing force structure, it is unclear how many additional soldiers, if any, would be needed. Nonetheless, adding these brigades to the force structure would add costs for equipment, facilities, and training. Finally, the Army’s current cost estimate includes costs for permanent facilities needed to accommodate the modularized brigade combat teams. However, according to Army officials, plans for constructing facilities are uncertain because of pending decisions related to the Base Realignment and Closure process and the planned restationing of forces from overseas. The Army anticipates obtaining funds to pay for this restructuring through supplemental and annual appropriations. To cover the $48 billion estimate, current DOD budget plans indicate the Army would receive a total of $10 billion from supplemental appropriations in fiscal years 2005 and 2006, and a total of $38 billion from DOD’s annual appropriation for the period of fiscal years 2006 through 2011. As part of our ongoing work, we will continue to review the Army’s estimates, cost implications, and funding plans for modularity. Concluding Remarks The Army views modularity as critical to improving the combat and support capability of its forces. Restructuring the entire force while continuing to support ongoing operations poses significant challenges and will require substantial funds. The magnitude of achieving modularity, coupled with other ongoing major transformation initiatives, raises long-term affordability issues for DOD. Until the Army more fully defines the requirements and potential costs associated with modularity, DOD will not be well positioned to weigh competing priorities and make informed decisions, and the Congress will not have all the information it needs to evaluate funding requests for modularity. Mr. Chairman and Members of the Committee, this concludes our prepared remarks. 
We would be happy to answer any questions you may have. Contacts and Staff Acknowledgments For future questions about this statement, please contact Sharon Pickup at (202) 512-9619, Janet St. Laurent at (202) 512-4402, or Gwendolyn Jaffe at (202) 512-4691. Other individuals making key contributions to this statement include Margaret Best, Alissa Czyz, Kevin Handley, Joah Iannotta, Harry Jobes, Joseph Kirschbaum, Eric Theus, Jason Venner, and J. Andrew Walker.
Modularity is a major restructuring of the entire Army, involving the creation of brigade combat teams that will have a common design and will increase the pool of available units for deployment. The Army is undertaking this initiative at the same time it is supporting the Global War on Terrorism, and developing transformational capabilities such as the Army Future Combat Systems. To achieve modularity, the Army currently estimates it will need $48 billion. The Department of Defense's (DOD) request for fiscal year 2005 supplemental funds includes $5 billion for modularity. The Army plans for another $5 billion to be funded from fiscal year 2006 supplemental funds and the remaining $38 billion from DOD's annual appropriation from fiscal years 2006 through 2011. Our testimony addresses: (1) the Army's goals and plans for modularity, (2) challenges the Army faces in staffing and equipping its modular combat brigades, (3) key decisions that could affect requirements, and (4) the Army's cost estimates and funding plans. This testimony is based on ongoing GAO work examining Army modularity plans and costs. Our work has been primarily focused on the Army's active forces. The Army has embarked on a major initiative to create modular units to better meet the near-term demand for forces and improve its capabilities to conduct full-spectrum operations. Modularity is a major undertaking because it affects both the active and reserve components as well as combat and support forces. Successfully implementing this initiative will require many changes such as new equipment and facilities, a different mix of skills among Army personnel, and significant changes to training and doctrine. By the end of fiscal year 2006, the Army plans to reorganize its 10 active divisions, expanding from 33 brigades to 43 modular brigade combat teams, and by fiscal year 2010, create new types of command headquarters. 
The Army has completed or is in the process of establishing modular brigades in four of its active divisions. While the Army has made progress in establishing modular brigades, it is likely to face several challenges in providing its new modular units with some required skilled personnel and equipment that are needed to achieve planned capabilities. For example, the Army has not provided its new modular brigades with required quantities of critical equipment such as unmanned aerial vehicles, communications equipment, and trucks because they are not currently available in sufficient quantities. Moreover, it may take years to meet increased requirements for critical skills such as military intelligence analysts because they are in high demand and take years to train. In addition, the Army has not yet made a number of key decisions that could further increase requirements for equipment and personnel. First, the Army has not yet decided whether to recommend an increase in the number of active brigade combat teams from 43 to 48. Also, it is assessing the costs and benefits of adding one more combat maneuver battalion to its new modular brigades. Finally, the Army has not yet finalized the design of higher echelon and support units. Until designs are finalized and key decisions are reached, the Army will not have a complete understanding of the equipment and personnel that are needed to fully achieve its goals. The costs associated with modularizing the entire Army are substantial, continuing to evolve, and likely to grow beyond current estimates. As of March 2005, the Army estimated it will need about $48 billion to fund modularity--representing an increase of 71 percent from its earlier estimate of $28 billion in 2004. However, this estimate may not reflect all potential costs, such as for fully equipping the modular force as designed. Also, if the Army decides to add additional brigades or make other design changes, additional costs may be incurred. 
Furthermore, some costs are uncertain. For example, it will be difficult for the Army to determine facility requirements and related costs until DOD finalizes plans for restationing forces from overseas. Until the Army provides a better understanding of the requirements and costs associated with modularity, DOD will not be well positioned to weigh competing priorities and make informed decisions nor will the Congress have the information it needs to evaluate funding requests.
Introduction USDA affects the lives of all Americans and millions of people around the world. Created 133 years ago to conduct research and disseminate information, USDA’s role has been expanded to include, among other things, providing billions of dollars annually to support farm incomes; developing agricultural markets abroad to boost domestic farm production and exports; ensuring a safe food supply; managing and conserving the nation’s forests, water, and farmland; and providing education and supplemental resources to the needy to improve diet and nutrition. USDA’s challenge is to meet its responsibilities as it also adapts to a rapidly changing global marketplace. During 1994, USDA delivered services through 43 agencies and a network of more than 14,000 field offices. Pursuant to Public Law 103-354, the Secretary of Agriculture reorganized the Department by reducing the number of component agencies from 43 to 29. The Secretary has also announced plans to reduce the number of county field offices by about 1,200 over the next 3 years. To carry out its missions, the Department and its component agencies reported budget outlays of about $61 billion in fiscal year 1994, according to the President’s fiscal year 1996 budget request. Telecommunications: A Vital but Costly Resource Like other federal agencies, USDA’s agencies rely on telecommunications networks and systems to accomplish missions and serve customers. The Department and its agencies deliver USDA services through thousands of field offices in states, cities, and counties. These offices acquire and use various types of telecommunications services and equipment to meet mission needs. Because telecommunications plays a vital role at USDA, it is imperative for the Department to plan and manage all its telecommunications resources effectively and prudently. 
According to the Department’s January 1993 Information Resources Management (IRM) Strategic Plan, telecommunications systems that provide quick and reliable voice and data communication throughout the Department are critical to USDA’s success in carrying out its many missions and necessary for building a network infrastructure capable of sharing information whenever and wherever it is needed. The effective and prudent use of telecommunications technology is also critical to the success of USDA’s efforts to streamline and consolidate its field office structure and reduce operational costs. USDA reports show that it spends about $100 million annually for telecommunications. This includes about $37 million for FTS 2000 services in fiscal year 1994. USDA is required to use FTS 2000 network services for basic long-distance communications (i.e., the inter-Local Access and Transport Area (LATA) transport of voice and data communications traffic). Under the federal government’s FTS 2000 contract, USDA agencies and offices use basic switched service for voice, packet switched service for data, video transmission service, and other types of services to support their communications needs. In addition to FTS 2000, USDA estimates that during fiscal year 1994 it spent another $50 million on local telecommunications and other services obtained from about 1,500 telephone companies. USDA agencies and offices use these services to meet their local telephone and data communications needs within LATAs. Other telecommunications services obtained from commercial carriers that are not available under the FTS 2000 contract, such as satellite communications, are also included in these costs. USDA also estimates that between $10 million and $30 million is spent annually on telecommunications equipment, such as electronic switches and telephone plant wiring, and support services, such as maintenance for acquired telecommunications equipment. 
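The component figures reported above can be checked for rough consistency with the approximately $100 million annual total (a sketch added for illustration, using the fiscal year 1994 figures cited in the report):

```python
# Rough consistency check of USDA's reported annual telecommunications
# spending (fiscal year 1994 figures, in millions of dollars).
fts2000 = 37                # FTS 2000 long-distance services
local_and_other = 50        # local and other commercial services
equipment_range = (10, 30)  # equipment and support services (estimated range)

low = fts2000 + local_and_other + equipment_range[0]   # 97
high = fts2000 + local_and_other + equipment_range[1]  # 117
print(low, high)  # 97 117
```

The components sum to roughly $97 million to $117 million, consistent with the report's "about $100 million annually" figure.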
The Federal Information Resources Management Regulation and USDA’s Telecommunications Policy (DR-3300-1) require that USDA’s agencies maximize use of all government telecommunications resources to achieve optimum service at the lowest possible cost. In addition, Section 215 of the Department of Agriculture Reorganization Act of 1994 requires USDA to reduce expenses by jointly using resources at field offices where two or more agencies reside. This includes sharing telecommunications services and equipment. Also, Section 216 of this act requires that whenever USDA procures or uses information technology it should do so in a manner that promotes computer information sharing among its agencies. OIRM Is Responsible for Ensuring USDA’s Telecommunications Are Managed and Planned Cost-Effectively The senior USDA IRM official, the Assistant Secretary for Administration, has delegated responsibility for managing all aspects of the Department’s telecommunications program to the OIRM Director. According to federal regulations, this responsibility includes the following to ensure that telecommunications resources are maximized at the lowest possible cost:

- develop departmental telecommunications guidelines and regulations necessary to implement approved principles, policies, and objectives;
- review and evaluate telecommunications activities for conformance with all applicable federal and USDA telecommunications policies, plans, procedures, and guidelines;
- develop and implement a telecommunications planning system that integrates short- and long-term objectives and coordinates agency and staff office initiatives in support of these objectives; and
- monitor agencies’ network systems acquisition and development efforts to ensure effective and economic use of resources and compatibility among systems of various agencies.

At USDA, component agencies manage the acquisition and use of telecommunications services and equipment on a day-to-day basis. 
Because of this, OIRM is principally responsible for providing departmentwide telecommunications policy and direction and monitoring the agencies’ activities to ensure their compliance. For example, in December 1993, OIRM’s Telecommunications Policy Division consolidated all existing telecommunications policy into a comprehensive directive, Departmental Directive 3300-1, which is USDA’s current policy in this area. Also, in September 1993, OIRM and the Office of the Assistant Secretary for Administration developed USDA’s first departmentwide Strategic Telecommunications Plan. According to USDA policy, this Plan shall serve as guidance to the agencies for developing their respective agency telecommunications plans. With respect to monitoring telecommunications, USDA established an IRM Review Program, as required by federal law, to periodically review component agencies’ information and telecommunications management activities. According to the Federal Information Resources Management Regulation, such a program is intended to, among other things, (1) ensure agencies comply with governmentwide and departmentwide telecommunications policies, regulations, rules, standards, and guidelines, (2) ensure agencies efficiently acquire and effectively use resources, and (3) determine whether agencies’ controls over and reviews of their telecommunications resources provide effective management oversight. To do this, USDA policy requires OIRM to conduct periodic reviews at each of USDA’s agencies. In addition, OIRM established its Agency Liaison Officer (ALO) Program in late 1992 to, among other things, strengthen coordination of the agencies’ telecommunications projects and planning to ensure that there are no unnecessary barriers to information exchange. 
In doing so, OIRM obtained additional staff and made them responsible for (1) analyzing IRM programs to ensure that they are consistent with Department goals and objectives and (2) maintaining an understanding of an agency’s plans for information and telecommunications technology investments to ensure there is adequate departmentwide coordination. Moreover, under its technical approval authority, OIRM reviews and approves component agency requests for procurements of telecommunications resources. Telecommunications Management Practices Differ Among USDA Agencies Telecommunications management practices vary widely across USDA agencies because the agencies independently plan, acquire, operate, and manage telecommunications resources—equipment and services—in accordance with their own organizational and mission needs. In this regard, commercial telecommunications services that USDA agencies obtain from over 1,500 vendors across the nation are acquired and managed locally, regionally, or centrally depending on the agency. For example, the Consolidated Farm Service Agency (CFSA) has nearly 3,000 county offices that individually acquire commercial telecommunications services from private vendors. This contrasts with the Forest Service, whose 9 regional offices acquire commercial telecommunications services for about 725 local offices, and the Agriculture Marketing Service, whose headquarters office acquires commercial telecommunications services centrally for its field offices. With some exceptions, bills for commercial telephone calls, leased equipment, and other services for USDA’s component agencies are paid centrally by USDA’s National Finance Center (NFC) in New Orleans, Louisiana. NFC is reimbursed for these costs by the agencies after the bills are paid. USDA’s component agencies also manage FTS 2000 services differently. 
For example, some agencies, such as CFSA, have a few Designated Agency Representatives (DARs) responsible for centrally acquiring FTS 2000 services for the entire agency. However, others, such as the Forest Service and the Rural Economic and Community Development (RECD) agency, have numerous DARs that order FTS 2000 services for offices in specific geographical areas. Bills for all FTS 2000 services acquired and used by USDA’s component agencies are paid directly to the General Services Administration. Just as USDA component agencies acquire and manage telecommunications resources differently, they also plan and develop telecommunications networks separately in support of their agency-specific missions. These networks include telecommunications systems that support local office communications, regional communications between agency offices, and nationwide networks. Information Sharing: a Long-standing Problem at USDA Historically, USDA agencies have had difficulty sharing information electronically because they independently acquired information technology and networks that were not intended to address the organizational sharing needs of the Department. As far back as October 1989, we reported that while many USDA agencies shared responsibility for policy issues, such as food safety or water quality, they often were incapable of sharing information electronically due to their stovepipe systems. Because of this, we noted that USDA managers had difficulty carrying out programs to effectively address issues that cut across traditional agency boundaries. For example, nine separate USDA agencies shared responsibility for water quality. However, agencies could not easily share information across the separate network systems these agencies had installed. 
Therefore, USDA’s water quality programs suffered because critically important information, necessary to effectively carry out these programs, often remained inaccessible outside an agency and was underutilized throughout the Department. In late 1993, USDA surveyed its employees and received over 8,000 suggestions for operating more efficiently. Many respondents said these information sharing problems adversely affected program delivery and were a significant problem for the Department. Specifically, many respondents reported that USDA’s information systems and networks had too often developed along program and agency lines, causing information “islands” to develop across the Department. Objectives, Scope, and Methodology At the request of the Chairman, Senate Committee on Agriculture, Nutrition, and Forestry and the Ranking Minority Member, Subcommittee on Government Management, Information and Technology, House Committee on Government Reform and Oversight, we reviewed the effectiveness of USDA’s management and planning of telecommunications. Our objectives were to determine whether USDA is (1) managing existing telecommunications resources cost-effectively and consolidating services to maximize savings and (2) effectively planning future communications networks to meet the Department’s information sharing needs. To determine whether USDA is cost-effectively managing telecommunications resources, we reviewed federal laws, regulations, and guidance as well as USDA policies and guidance for establishing telecommunications management controls. We interviewed OIRM management, agency managers, and field personnel to discuss USDA’s telecommunications policy and guidance. We also discussed OIRM’s IRM review program, the ALO Program, and the technical approval process to obtain USDA officials’ views on the effectiveness of these programs. 
We evaluated reports documenting IRM reviews completed by OIRM since 1990 and assessed the completeness and effectiveness of these reviews. In addition, we interviewed senior-level representatives from 10 USDA agencies that account for about 70 percent of USDA’s telecommunications costs to identify management practices they adopted for telecommunications. Specifically, we discussed their management controls for establishing telecommunications inventories, monitoring acquisitions, and reviewing and verifying bills. We reviewed users’ internal policies and guidelines to determine the type and extent of management controls that these agencies have instituted over the use of telecommunications resources. We visited three locations where USDA installed consolidated telecommunications systems, obtained their telephone bills from NFC, and reviewed them to assess whether telecommunications resources were managed cost-effectively. We conducted our review of telephone bills for USDA agencies at these locations because we had observed telecommunications activities at each of these sites. We also discussed the bill payment process with officials from NFC and obtained additional information from commercial vendors on these bills. In particular, when reviewing bills, we determined whether agencies (1) used FTS 2000 services as required by GSA to make long-distance calls and for other available services and (2) obtained the most cost-effective services available. In addition, we obtained and reviewed telephone bills for USDA’s Rural Development Agency regional offices that closed during the past year to determine whether telephone services had been properly disconnected at these sites. To determine whether USDA is planning its future communications networks to effectively support its information sharing needs, we reviewed agency plans to develop new network systems and discussed these planned systems with agency management and OIRM officials. 
We also reviewed USDA’s strategic telecommunications plan to assess whether it provides guidance to the agencies on what departmentwide information sharing needs must be met and how to go about doing this. To evaluate the effectiveness of USDA’s strategic plan in defining the Department’s telecommunications requirements, we interviewed agency IRM and program officials and reviewed OIRM files and other supporting documentation. In addition, we visited field offices engaged in ongoing network development projects to assess project planning and management and determine the effectiveness of project results. We also interviewed OIRM officials responsible for oversight of agencies’ telecommunications plans and acquisitions to ascertain how these officials review agencies’ plans to ensure that they, along with the subsequent acquisitions, meet departmentwide goals and objectives. In addition, we reviewed OIRM documentation of its oversight activities to determine to what extent agencies’ telecommunications projects are coordinated across the department. We performed our audit work from March 1994 through July 1995, in accordance with generally accepted government auditing standards. Our work was primarily done at USDA headquarters in Washington, D.C.; USDA’s NFC in New Orleans, Louisiana; and USDA’s Telecommunications Services Division in Fort Collins, Colorado. We also visited component agency offices where telecommunications and network planning activities are administered. They included state offices of USDA farm service agencies in Lexington, Kentucky; Richmond, Virginia; and Columbia, Missouri; district and county offices of USDA farm service agencies in Mount Sterling, Kentucky and Pendleton, Oregon. 
In addition, we visited Forest Service headquarters in Arlington, Virginia; the Service’s Northwestern Region in Portland, Oregon; and the Service’s National Forest offices in Corvallis and Pendleton, Oregon; Food and Consumer Service headquarters in Alexandria, Virginia; Agricultural Research Service, Greenbelt, Maryland; APHIS headquarters in Hyattsville, Maryland, and regional office in Fort Collins, Colorado; the Consolidated Farm Service Agency’s office in Kansas City and the Rural Economic and Community Development office in St. Louis, Missouri. We requested written comments on a draft of this report from the Secretary of Agriculture. In response, we received written comments from the Assistant Secretary for Administration. These comments are discussed in chapter 4 and are reprinted in appendix I. USDA’s Telecommunications Resources Are Not Managed Cost-Effectively OIRM has not fulfilled its management responsibility to provide the guidance and oversight necessary to ensure that USDA’s agencies maintain basic management data on their telecommunications resources, obtain telecommunications equipment and services cost-effectively, verify the accuracy of telecommunications charges, and make proper use of government-provided resources and services. Without sufficient guidance and oversight, many USDA component agencies have not instituted sound management practices necessary to effectively manage the telecommunications resources they control. As a result, these agencies waste millions of dollars each year paying for (1) unnecessary telecommunications services and equipment, (2) leased equipment that is not used and services billed for but never provided, and (3) commercial carrier services that are more expensive than those provided under the FTS 2000 contract. Although USDA has some initiatives underway to improve telecommunications management, its actions do not fully resolve these inadequacies. 
USDA Agencies Lack Telecommunications Inventories and Sufficient Management Controls Federal laws and regulations require agencies to manage telecommunications resources cost-effectively. One of the most fundamental steps is maintaining current and complete inventory information on all telecommunications services and equipment. Without this, agencies lack the basic information they need to manage these resources cost-effectively. In addition to maintaining inventories, agencies also need to have appropriate management controls to ensure that all government-provided telecommunications resources are properly used. However, OIRM has not required USDA’s component agencies to maintain inventories of telecommunications resources and has not provided guidance to the agencies for establishing effective telecommunications management controls. Department and Agencies Lack Basic Data Necessary to Manage Telecommunications To ensure the cost-effective use of telecommunications equipment and services, the Federal Information Resources Management Regulation requires each agency to establish inventories of telecommunications resources and annually survey existing telecommunications systems to ensure that information on these systems is current, accurate, and complete. These surveys and inventories are fundamental to sound telecommunications management. According to the Federal Information Resources Management Regulation, inventories and surveys are necessary to, among other things, identify telecommunications resources that are outdated or no longer used and ensure that agencies pay for only those resources that they use. USDA’s telecommunications policy does not require agencies to maintain inventories or conduct surveys of all their telecommunications resources. OIRM officials also acknowledge that USDA does not have a departmentwide inventory for telecommunications equipment and services and has not done surveys to collect such information. 
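The inventory-and-survey requirement described above can be pictured with a minimal sketch. The record fields and dollar figures below are hypothetical illustrations, not data from any USDA system; the point is only that even a simple structure supports the checks the regulation calls for, such as flagging leased items no longer in use.

```python
from dataclasses import dataclass

# Minimal sketch of an inventory record of the kind the Federal Information
# Resources Management Regulation contemplates. All field names and figures
# are hypothetical, not drawn from any USDA system.
@dataclass
class TelecomAsset:
    asset_id: str
    category: str           # e.g., "voice", "data", "circuit", "modem"
    location: str
    monthly_lease_cost: float
    in_use: bool            # updated by the annual survey

def annual_unused_lease_cost(inventory):
    """Annual cost of leased items a survey would flag for cancellation."""
    return sum(12 * a.monthly_lease_cost for a in inventory if not a.in_use)

inventory = [
    TelecomAsset("M-001", "modem", "Washington, D.C.", 150.00, False),
    TelecomAsset("C-014", "circuit", "Fort Collins, CO", 250.00, True),
]
print(annual_unused_lease_cost(inventory))
```

With such a record in place, the annual survey reduces to updating the `in_use` flags and running a query like the one above.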
Although USDA has a directive requiring agencies to maintain inventories on property and has a property management system, the system lacks information on many types of telecommunications systems and services. Specifically, it does not record information on the types of voice, data, and video services used by the Department and where these services are located. It also lacks information on circuits, communications software, and many types of equipment, such as on-premises wiring, interface cards, modems, and other communications devices. Even though OIRM officials agree that USDA telecommunications policy does not require agencies to maintain inventories or conduct annual surveys, they told us that agencies nonetheless should be doing this as part of their telecommunications management activities. However, agencies we contacted do not maintain agencywide inventories or conduct annual surveys of telecommunications resources, and OIRM has not followed up with these agencies to ensure they do so. A lack of inventory information severely impairs USDA’s ability to ensure that resources are properly acquired, used, and maintained. OIRM does not know basic information, such as how much USDA pays for telecommunications, the type of services and equipment being used, and communications traffic volumes. Because of this, OIRM cannot effectively plan the future use of telecommunications resources, help agencies avoid acquiring redundant and overlapping equipment and services, and identify and eliminate systems and services that are not cost-effective. For example, for years, USDA wasted thousands of dollars paying for numerous FTS 2000 Service Delivery Points (SDPs) within its headquarters office in Washington, D.C. 
Because USDA does not maintain a telecommunications inventory, OIRM did not know that headquarters had over 27 SDPs, many of which were redundant and unnecessary, until after the Secretary of Agriculture announced in November 1993 that the Department would reduce telecommunications costs at USDA headquarters by $1 million. In response to the Secretary’s direction, OIRM began collecting data on SDPs at headquarters and began eliminating these duplicate services. As a result, OIRM records show that USDA has achieved savings of several hundred thousand dollars. Also, a lack of inventory information hinders USDA’s effort to cost-effectively consolidate farm service agency offices. After the enactment of the Department of Agriculture Reorganization Act of 1994, the Secretary announced that 1,274 field offices would be closed, USDA personnel would be reduced by 11,000, and about 2,500 new field service centers would be established by September 1997. However, because basic inventory information is not available, USDA must now devote valuable time to obtaining this information. Until this is done, USDA cannot effectively plan how to make the best use of existing equipment from offices that will close, what services need to be disconnected at these offices, and what additional equipment and services will need to be acquired for the new field service centers. Agencies Lack the Guidance Needed to Establish Adequate Management Controls OIRM has not provided agencies with guidance on establishing management controls that are necessary for ensuring the proper planning and use of telecommunications resources. Specifically, OIRM has not provided the agencies with guidance for (1) monitoring acquisitions to ensure that telecommunications services and equipment are obtained cost-effectively, and (2) reviewing bills to verify the accuracy of telecommunications charges and ensure the proper use of government-provided resources and services. 
Without such guidance, USDA agencies lack sufficient telecommunications management controls. For example, USDA has hundreds of field office sites where multiple USDA agencies, located in the same building or geographic area, obtain or use separate and often redundant commercial carrier services. This situation exists because agencies often acquire telecommunications services and equipment to meet their own needs without first determining what already exists and whether there are opportunities to share resources. Even within some agencies, telecommunications resources are sometimes purchased separately by different offices, and these purchases are not tracked agencywide to identify opportunities for sharing telecommunications resources. Because of this, as we reported in April 1995, USDA is wasting millions on redundant FTS 2000 services. We reported that OIRM officials estimate that USDA could save between $5 million and $10 million annually by sharing FTS 2000 services. USDA has an even larger problem acquiring redundant commercial telecommunications services and equipment because agencies do not monitor and coordinate these purchases. According to OIRM officials, USDA could save as much as $15 million to $30 million annually by eliminating these redundancies and by sharing resources. USDA also wastes millions more because many agencies do not verify whether they pay accurate charges for FTS 2000 and commercial telecommunications services and leased equipment and do not determine whether these resources are properly and cost-effectively used. According to the Federal Information Resources Management Regulation, agencies should establish call detail programs to verify usage of government-provided FTS 2000 and commercial long-distance services for which they are charged and deter or detect possible misuse of long-distance services. 
The regulation also requires agencies to pay for only those telecommunications resources being used and to cancel leases of underutilized resources. To its credit, in October 1993, OIRM issued a policy establishing a program to review call detail reports for FTS 2000 services, and the office currently provides USDA agencies with these reports. However, many USDA agencies have not yet established an automated billing process for distributing FTS 2000 bills to each of their offices for the timely verification of the more than $36 million USDA pays annually for FTS 2000 services. In addition, the Department pays another $50 million each year for commercial telecommunications services and leased equipment that are not obtained under the FTS 2000 program. Currently, USDA pays over 23,000 bills each month for services obtained and equipment leased from over 1,500 private vendors across the country. However, very few of these commercial bills are ever reviewed because the Department and its agencies have not established sufficient procedures, such as those for reviewing call detail records, to verify charges by private vendors and ensure cost-effective use of telecommunications resources. Consequently, USDA is paying unnecessary and inappropriate charges. For example, our review of bills for the agencies at the three locations we visited found the following: OIRM and USDA agencies pay tens of thousands of dollars each year to lease telephone equipment that is either no longer used or cannot be located. In some cases, we noted that fees for unused equipment have been paid for many years. For example, one commercial carrier’s bill for March 1995 showed that OIRM pays $6,262 a year to lease three unused 4800 baud modems at USDA headquarters. Although USDA has leased these modems since 1985, OIRM staff working at OIRM’s headquarters office told us that no one has used the modems for several years. 
In this same bill, we found hundreds of cases where USDA agencies continue to pay exorbitant fees to lease outdated equipment. For example, we noted that USDA agencies pay about $7,800 each year to lease 214 out-of-date rotary telephones. More serious is that agencies were unable to locate some of this equipment. For example, although one agency pays over $10,000 a year to lease 16 2400 baud modems, telecommunications staff were unable to find any of them. The staff stated that, because no one uses this type of equipment any longer, it is likely that the equipment was disposed of many years ago. USDA agencies often pay more than twice the cost charged under FTS 2000 by using commercial carriers to place toll calls within Local Access Transport Areas (LATAs) in states where such practices are allowed. Federal agencies have had the ability to use FTS 2000 service for intra-LATA calls since August 1993. In this particular instance, OIRM had notified agencies about this opportunity but agencies continued placing commercial calls within the LATAs. By not using FTS 2000 for intra-LATA toll calls, OIRM officials estimate that USDA is losing as much as $2 million each year. Agencies pay about three times the amount charged under the FTS 2000 program for making long-distance calls. For example, according to March 1995 billing data, one USDA office in Fort Collins, Colorado, paid about $186 for long-distance calls that would have cost about $63 using FTS 2000 service. We noted many similar instances in which USDA agencies do not comply with the government’s mandatory FTS 2000 use requirement. As a result, agencies pay significantly more than necessary for long-distance calls. USDA also pays more than necessary for facsimile transmissions by obtaining these services from commercial vendors rather than FTS 2000. A March 1995 commercial carrier’s bill showed that one agency paid over $728 a month for commercial facsimile service. 
This represents more than 3 times the amount that is charged under the FTS 2000 program. USDA agencies pay more for international calls than necessary because many agencies do not use the services available to USDA under the Department of Defense’s contract for international telephone service. According to OIRM officials, this contract offers a 34-percent savings over commercial rates. We noted many instances where agencies obtain such services outside of this contract. Failure to Terminate Telecommunications Services at Offices Being Closed Results in Further Waste USDA also wastes thousands of dollars paying for telecommunications services at field offices that have closed. This situation exists because office staff sometimes do not terminate vendor-provided services when they close offices. Since USDA does not generally review telephone bills, charges incurred after an office closes are not identified and USDA will continue to pay fees for vendor-provided services at these locations. For example, in one case, USDA has continued to pay $483.78 each month for telephone services at a Rural Development Administration office in Levelland, Texas, even though the office closed in March 1994—over a year ago. According to staff who worked at the office until it closed, USDA’s lease on the building was discontinued in March 1994 and all telecommunications devices, such as telephones, were removed. However, no one terminated the telephone service at this office or followed up with the vendor to be sure that the account was closed. Consequently, USDA has so far paid about $6,200 for services being provided to an unoccupied building. Offices that are being closed require a detailed analysis of billing records and an inventory of telecommunications lines and services. The analysis and inventory are essential for preparing orders for termination of services. 
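The Levelland total is simple to verify. Treating the interval between the March 1994 closure and the review period as roughly 13 billing cycles (an assumption consistent with "over a year ago") reproduces the figure cited:

```python
# Rough check of the Levelland example: $483.78 per month accruing since the
# March 1994 closure. Thirteen billing cycles is an assumed approximation of
# "over a year" at the time of the review; it is not stated in the report.
monthly_charge = 483.78
months_since_closure = 13
accrued = monthly_charge * months_since_closure
print(round(accrued))  # close to the "about $6,200" cited above
```

The same arithmetic, run against each closed office's last bill, is exactly the kind of check the post-closure billing review described here would perform.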
Experience has shown that termination orders should be followed through several billing cycles to ensure that termination actually occurred. However, as the Levelland office case illustrates, unless these steps are fully implemented at each office to be closed, USDA could incur thousands of dollars in vendor charges for services that are no longer needed. Options Available for Reviewing Commercial Telephone Bills OIRM officials told us that, although they have done so for services and equipment acquired under FTS 2000, they have not established a call detail program or prepared guidance on what options exist for reviewing commercial bills because these bills are handled differently. Bills for commercial carrier services are sent directly from the carriers to NFC where the bills are processed and paid. NFC receives thousands of bills in paper form each month and, in most cases, does not forward copies to the agencies for verification of charges. While handling thousands of paper bills each month is a laborious task, it does not preclude agencies from reviewing commercial carrier bills or absolve OIRM of its responsibility to provide agencies proper guidance. According to NFC officials, agencies can obtain bills from NFC upon request. For example, a Forest Service office we visited recently began requesting monthly bills from NFC for review. At the time of our visit, the office reported that it had recently found a $1,400 overcharge for commercial services. After the office notified the telephone company of the inaccurate bill, the charge was removed. Employees at the office also told us that they had identified several similar billing mistakes in the few months since they began verifying bills, and they estimated that about $10,000 annually could be saved at just this one site by reviewing bills. However, according to NFC records, during the month of April 1995, only 80 out of about 23,000 commercial carrier bills had been requested by USDA agencies for review. 
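The April 1995 figures just cited imply a strikingly low review rate, which a one-line calculation makes concrete:

```python
# Share of commercial carrier bills that agencies asked NFC to provide for
# review in April 1995, using the figures cited in the report.
bills_paid = 23000
bills_requested = 80
review_rate_pct = 100 * bills_requested / bills_paid
print(round(review_rate_pct, 2))  # well under one percent of bills reviewed
```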
Besides requesting specific bills from NFC, agencies have other options for obtaining bills for review. For example, NFC requires USDA agencies to set limits on bills and notifies agencies when bills exceed these limits. However, OIRM has not provided USDA agencies with guidance on setting limits, and NFC officials reported that agencies often deliberately set these limits at unreasonably high dollar levels to avoid having to review bills. Consequently, many bills never exceed the limit and few are reviewed. NFC officials also noted that they regularly select a sample of about one percent of the bills and send them to the agency for review. However, these officials reported that agencies do not confirm that they reviewed the bills and found them to be accurate. Without effective telecommunications policy and guidelines, such as guidance to establish a call detail program for verifying charges by private vendors, USDA agencies lack the management direction they need to institute effective management controls over telecommunications resources. In this regard, USDA agency officials cited the lack of guidance as a key problem, noting that they were often unaware of telecommunications management requirements or what such practices would entail. OIRM’s Associate Director of Policy agreed that USDA’s telecommunications policy has not been comprehensive enough to ensure that agencies have the necessary policy guidelines to effectively manage telecommunications resources. The Associate Director also stated that OIRM recognizes this problem and plans to develop additional agency guidance. Oversight of Agencies’ Telecommunications Management Has Not Been Adequate To monitor agencies’ management of IRM resources, including telecommunications, USDA established its IRM Review Program in accordance with federal requirements for conducting periodic reviews of IRM activities. 
USDA’s “IRM Review Program” is intended to (1) ensure that agencies comply with governmentwide and departmentwide IRM policies, regulations, rules, standards, and guidelines, (2) ensure that agencies efficiently acquire and effectively use resources, and (3) determine whether agencies’ controls over and reviews of their IRM resources provide effective management oversight. USDA policy states that OIRM’s Program Review Standards Division (PRSD) is required to conduct periodic selective reviews at each of USDA’s 29 agencies to validate the management of IRM and telecommunications resources, assure the Secretary that IRM policy is working as intended, and recommend agency improvements. However, PRSD conducts very few IRM selective agency reviews, and in the cases where reviews were performed, agency management of telecommunications resources was not adequately addressed. For example, since 1990 PRSD has conducted only five selective agency reviews, of which only one addressed telecommunications management, and even that review did not evaluate whether (1) adequate inventories of equipment and services had been established and annual surveys were conducted, (2) the acquisition of services was monitored to avoid redundancies, and (3) FTS 2000 and commercial telecommunications charges were verified to control costs. By not conducting reviews, OIRM has no assurance that USDA agencies are following federal regulations or departmental policy, such as using mandatory services, or are making cost-effective use of telecommunications resources and sharing resources when there are opportunities to do so. The General Services Administration (GSA), which periodically reviews federal agencies’ IRM activities, and USDA’s Office of Inspector General have previously raised concerns about USDA’s inadequate agency review program. After reviewing USDA’s IRM program in 1990, GSA reported that OIRM needed to be more proactive and did not place adequate emphasis on performing agency reviews. 
In 1994, after returning to review USDA’s IRM program, GSA reported that OIRM had not made sufficient progress to improve its IRM selective review program. In 1993, USDA’s Office of Inspector General also reported the need for OIRM to perform IRM reviews. PRSD’s Chief agreed that OIRM needs to conduct more selective reviews. According to this official, OIRM plans to have the ALOs develop IRM review proposals for selective reviews and participate on review teams for agencies. USDA Actions to Strengthen Telecommunications Management Fall Short Senior OIRM officials recognize the need to improve telecommunications management across the Department. To make improvements, OIRM has several initiatives either planned or underway. For example, in response to our April 1995 report, OIRM has shown more leadership on efforts to consolidate and optimize USDA’s FTS 2000 telecommunications services. Specifically, OIRM developed and issued policy requiring component agencies to order and use optimum service configurations and consolidate service access; it also met with USDA senior managers and began a process to systematically identify sites across the Department where FTS 2000 services could be consolidated and optimized. With respect to strengthening controls over how telecommunications resources are acquired and managed by agencies, in early 1995 OIRM used an existing contract to begin developing a life-cycle management process for all IRM resources. OIRM’s Associate Director for Policy believes this initiative should provide agencies with more direction on what management practices are expected USDA-wide and therefore should ultimately improve management of telecommunications. In addition, USDA plans to collect inventory information and has begun investigating the possibility of establishing an electronic billing process to provide agencies with commercial call detail information for review and verification. 
OIRM’s Initiatives Will Not Fully Resolve Telecommunications Management Weaknesses OIRM’s initiatives are encouraging and, if fully carried out, they should generate departmentwide benefits. However, these efforts will not fully resolve the widespread telecommunications management weaknesses we found. This is because OIRM’s initiatives do not focus on the root causes of the weaknesses: a lack of comprehensive policy and implementing guidelines, and inadequate oversight of the agencies’ telecommunications management activities. For example, USDA has opportunities to save millions under its initiatives to consolidate and optimize FTS 2000 telecommunications services. However, although OIRM has prepared and issued a new policy requiring agencies to consolidate and optimize FTS 2000 services, OIRM has not (1) provided the agencies with specific guidelines for implementing these policies, such as procedures for regularly monitoring telecommunications purchases to consolidate services when it is cost-effective to do so or (2) devised a method for reviewing agency activities to ensure that this policy is effectively carried out. Consequently, agencies will likely continue making telecommunications purchases as they have in the past and perpetuate the use of redundant and duplicative telecommunications services. Likewise, OIRM and the farm service agencies have begun collecting inventory information at sites scheduled for consolidation under the Secretary’s plan to establish Field Office Service Centers. While we agree this step is needed, OIRM has not defined how this inventory information will be updated and managed after it is collected. Moreover, according to OIRM’s Associate Director for Operations, there are no plans to advise the agencies’ senior managers about requirements to conduct regular surveys of telecommunications resources or to assist agencies in maintaining inventory information needed for implementing fundamental telecommunications management controls. 
Unless OIRM does so, it is highly unlikely that agencies will take the initiative on their own to begin obtaining and maintaining inventory information that is essential to planning and managing resources. OIRM has also not developed any action plans for providing guidance to USDA agencies to help them establish billing review practices. Although OIRM has made FTS 2000 billing data available for agency review, this information does not include commercial carrier bills for millions of dollars in services. OIRM officials, who have investigated electronic billing opportunities with several commercial carriers, have no plan for providing such capabilities nationwide, and OIRM has done little to establish interim guidelines and procedures for agencies to follow to request and review paper bills on a periodic basis. OIRM’s Associate Director for Policy agreed that these initiatives alone will not be enough to correct shortcomings in the Department’s management of telecommunications. However, this official noted that OIRM has just begun an effort to define a telecommunications management program for the Department, which he believes will provide improved telecommunications guidance to the agencies. This official also added that USDA needs to modernize its IRM program, including instituting performance measures to evaluate the agencies’ management practices and then holding the agencies directly accountable for needed improvements. We agree with OIRM’s Associate Director that performance measures and accountability are critical for improving management of telecommunications resources. In May 1994, after reviewing how leading public and private organizations improved mission performance, we reported that increasing line accountability and involvement works because it immediately focuses information management decision-making on measurable mission outcomes of strategic importance. 
However, before setting measures and increasing accountability, an organization first needs to understand its current performance, its spending on telecommunications systems and services, and its major information management problems. OIRM has not completed a thorough, systematic review of the agencies’ current telecommunications management practices to determine what management deficiencies exist and the reasons for these deficiencies. Without such a review, OIRM cannot know what actions are necessary to fully resolve management weaknesses, articulate what management practices are expected, or define who is accountable for these processes. In 1991, the Department of Defense (DOD) established a program to analyze its communications management deficiencies and develop ways to solve them. The goal of this program was similar to USDA’s initiatives—to improve communications management processes. We reported that for DOD’s effort to succeed, besides analyzing management deficiencies, the organization must (1) clearly articulate how telecommunications management processes are to be conducted DOD-wide and (2) precisely define the roles and responsibilities of all components involved in the telecommunications business and management processes. OIRM’s efforts to define a telecommunications management program for the Department and establish an IRM life-cycle management program have the potential for sustained departmentwide management improvements if developed and implemented properly. However, because OIRM is in the early stages of these efforts, it is unclear what impact they will have on resolving the management weaknesses we found. Nevertheless, USDA’s failure to cost-effectively manage its annual $100 million telecommunications investment constitutes a material internal control weakness under the Federal Managers’ Financial Integrity Act of 1982 (31 U.S.C. 3512(b) and (c)).
As previously discussed, federal regulations require agencies to establish inventories of telecommunications resources to, among other things, identify resources that are outdated or no longer used and ensure that agencies pay for only those resources that they use. These regulations also require agencies to establish adequate management controls to ensure the cost-effective use of telecommunications resources and detect possible misuse of government-provided FTS 2000 and commercial long-distance services for which they are charged. Because USDA does not maintain inventories or have adequate management controls established over its telecommunications resources and expenditures, the Department continues to pay millions for telecommunications services that are unnecessary or never used and equipment that is outdated or no longer needed.

Networks Are Not Planned to Support USDA’s Information and Resource Sharing Needs

USDA has many heterogeneous, independent networks acquired and developed over time by USDA agencies. As discussed in chapter 1, these “stovepipe” systems make it difficult for agencies to share information necessary to address complex, cross-cutting issues and effectively execute USDA programs. Despite the need to address this problem, USDA’s agencies continue developing their own networks that are often redundant and perpetuate information sharing problems rather than resolve them. This is allowed to occur because OIRM continues to approve agencies’ plans for new network systems without (1) determining what information sharing needs USDA agencies have and what opportunities exist to share other agencies’ existing or planned networks, and (2) ensuring that the planned networks adequately address the need to share information and resources. Consequently, USDA spends millions of dollars developing networks that do not make efficient use of the Department’s telecommunications resources and cannot support information sharing without costly modifications.
Agencies Plan Their Own Networks Without Considering Information and Resource Sharing Needs

Increasing demands for efficiency and for collaborative agency work on complex agricultural and environmental issues prompted the former Secretary to call for integrating networks and systems to increase data and resource sharing among agencies. Also, federal law requires that (1) USDA reduce expenses by jointly using resources, such as telecommunications services and equipment, at field offices where two or more agencies reside and (2) whenever USDA procures or uses information technology, it does so in a manner that promotes computer information sharing among agencies of the Department. However, USDA agencies continue to plan and acquire their own costly new networks without incorporating requirements for sharing information among agencies. Also, agencies overlook opportunities to share resources because they independently design, build, and operate their own networks without considering whether other USDA agencies’ existing or planned networks would meet their communication needs. For example, the Forest Service plans to spend almost $1 billion modernizing its information technology, part of which will be spent establishing a new agencywide network. However, its planning documentation does not address specific requirements for sharing data with other agencies or how those requirements will be met. Although Natural Resources Conservation Service (NRCS) officials told us they could benefit from the exchange of ecosystem and natural resources information with the Forest Service, current plans do not address these needs. In another example, the Animal and Plant Health Inspection Service (APHIS) plans to spend about $267 million modernizing its technology, which includes acquiring a new network that provides connectivity among its offices.
These network plans do not take into account that APHIS offices are often collocated with other USDA agencies at field sites throughout the country. Instead, APHIS plans to acquire its own network to connect over 1,200 agency office sites, rather than exploit opportunities for sharing other agencies’ existing or planned networks. Therefore, USDA risks losing an important opportunity to reduce communications costs by consolidating network resources. In addition to the Forest Service and APHIS, other USDA agencies have developed or plan to develop their own networks. These include the following:
- Over 2,500 field service centers—which house the Consolidated Farm Service Agencies, the Rural Housing and Community Development Service, and the Natural Resources Conservation Service—are to be interconnected by a new $90 million network over the next 3 years.
- During 1994, the Agricultural Marketing Service completed integrating 113 field offices and its Washington headquarters into a single network.
- The National Agricultural Statistics Service is connecting 43 state statistical offices and the Washington and Fairfax headquarters via local area and wide area networks.
- The Agricultural Research Service is providing local area networks in each of its 8 area offices and 122 research sites and plans to link these LANs via dedicated lines between area offices and dial-up access at the research centers.

Like the Forest Service and APHIS, these agencies are planning their own new networks without considering departmentwide interagency data and resource sharing needs. For example, each of the agencies listed above, except for RECD and NRCS, participates in USDA’s Integrated Pest Management Program to coordinate the Department’s research and extension programs with customers who implement pest management practices. This cross-cutting program requires the agencies to exchange information on pesticide use and research. However, the agencies’ network plans do not address this requirement.
Therefore, the interagency sharing that must take place to consolidate this information for customers at a farm service center location will not occur. As a result, customers will be unable to obtain the information they need on pest management practices from a single location.

Monitoring Network Planning and Development Does Not Ensure Data and Resource Sharing Needs Are Addressed

Development of individual agency networks, such as the ones discussed above, is allowed to continue because OIRM approves each of these networks separately without having (1) determined whether some or all of the telecommunications services could be provided by other agency networks, (2) determined what information sharing needs USDA agencies have and what opportunities exist to share resources, and (3) ensured that the planned networks adequately address these needs. Besides its responsibilities for establishing USDA-wide telecommunications policy and overseeing telecommunications resources (discussed in chapter 2), OIRM is also required to review and approve agency IRM strategic plans and information and telecommunications technology acquisition plans. Among other things, such monitoring is necessary to ensure that agencies plan and acquire telecommunications networks cost-effectively and in accordance with departmental needs. OIRM monitors agency IRM activities under the ALO program and the technical approval process. Unlike PRSD’s IRM Review Program, which is supposed to validate management of existing IRM and telecommunications resources, these two programs provide OIRM with direct involvement in agencies’ IRM projects as they are being planned. OIRM formed its ALO program to improve coordination of agency IRM planning across the Department. Among other things, this program is intended to help ensure that agencies plan their use of information and telecommunications technology to meet departmental needs.
However, the OIRM manager for this program told us that ALOs do not review agencies’ network plans to ensure that they incorporate information sharing needs and network sharing opportunities. In addition, OIRM reviews of component agencies’ acquisition plans under USDA’s technical approval process have not ensured that data and resource sharing needs are being effectively addressed. For example, OIRM staff responsible for technical approvals told us that they evaluate proposed procurements individually and do not assess whether data sharing requirements and opportunities to share network services among agencies have been addressed before approving acquisitions. OIRM’s Associate Director for Policy, who has responsibility for both USDA’s ALO program and the technical approval process, told us that OIRM needs to do a better job of determining whether agencies adequately address data sharing needs and resource sharing opportunities as part of ALO and technical approval staff monitoring activities. The Associate Director noted, however, that in most cases these staff cannot effectively make such determinations because they lack detailed information describing agencies’ data sharing requirements and the composition and current configuration of all existing agency networks. According to this official, OIRM and the agencies have not taken sufficient steps to obtain the information that defines data sharing requirements and identifies what networks exist. Further, this official stated that OIRM needs to enhance staff expertise in telecommunications to improve monitoring activities. Although the Associate Director acknowledged that more needs to be done by OIRM, he said that the Office has taken an important step by developing USDA’s strategic telecommunications plan.
The plan, issued in September 1993, called for integrating existing USDA agency networks to achieve interoperability and enable agencies to share data where they need to and share resources where they can. According to the plan, OIRM, in cooperation with USDA agencies, would undertake initiatives that include (1) defining interagency data sharing requirements, (2) identifying all existing agency networks, and (3) aggregating networks and other telecommunications resources where opportunities exist for cost savings. However, at the conclusion of our review, OIRM and the agencies had made little progress in carrying out the plan’s initiatives and gathering the detailed information necessary for identifying data sharing requirements and network sharing opportunities across the Department. Progress had been delayed because OIRM and the agencies have not yet developed a strategy for carrying out this critically important work. However, as mentioned in chapter 2, OIRM and the agencies have recently made some progress identifying opportunities to share existing network resources at some collocated agency office sites, such as USDA’s headquarters offices, and have begun to act on these opportunities.

Continued Development of Individual Agency Networks Poses Costly Risks

OIRM is continuing to approve individual agency networks without determining whether agency network plans meet departmental information and resource sharing goals. This poses costly risks to USDA. First, because agencies have planned their new networks separately and no one has ensured that these efforts are properly coordinated, the agencies may install new communications lines and circuits that overlap or are redundant, resulting in unnecessary costs. For example, collocated agencies at some offices in Kansas City, Missouri, and Washington, D.C., were wasting about $41,000 per year because they were maintaining networks with dedicated transmission service lines that were redundant or unnecessary.
This occurred because the agencies acquired these circuits separately without identifying opportunities to share existing circuits with other collocated agencies. Following our April 1995 report, OIRM took action to eliminate these redundant or unnecessary lines. Also, by allowing agencies to continue to develop networks without assurance that they incorporate data sharing requirements, USDA may need to spend millions in the future making modifications to interconnect networks so they can exchange data. For example, a May 1994 report developed for the National Institute of Standards and Technology noted that over the past 20 years organizations have evolved to support a wide variety of networks that cannot support required data exchange capabilities. The report states that attempts by organizations to interconnect their incompatible networks after the fact—rather than planning for network interface requirements—typically produced expensive but unsatisfactory results, characterized as “functionally disparate islands of technology.”

Conclusions, Recommendations, Agency Comments, and Our Evaluation

Conclusions

USDA lacks the basic telecommunications inventory information and management controls necessary to properly plan and manage telecommunications resources. Consequently, the Department has wasted millions of dollars by not making cost-effective use of the $100 million it spends each year on these resources. This is because OIRM has not demonstrated effective departmentwide leadership by providing USDA agencies with the guidance and oversight they need to help them ensure that the Department’s telecommunications resources are used effectively and prudently.
Without sufficient telecommunications guidance and oversight, many agencies have not established the fundamental management controls necessary to ensure that USDA does not (1) acquire separate telecommunications equipment and services that are redundant and unnecessary, (2) pay for leased equipment that is not used and for services billed but never provided, and (3) use commercial services that are more expensive than those already provided under FTS 2000. Although OIRM is aware of these long-standing problems, it has done little to address the agency management shortfalls that allow them to persist. Until OIRM (1) provides the guidance and direction necessary to help USDA agencies establish adequate management controls and (2) takes additional actions to oversee that agencies effectively implement such controls and other telecommunications requirements in compliance with federal and departmental policies, the serious and widespread problems we found are likely to continue. Further, if USDA is ever to successfully share information whenever and wherever it is needed, the Department must prevent agencies from planning and building their own stovepipe networks. However, because OIRM has not fulfilled its departmental responsibility to identify agencies’ information sharing needs and determine with the agencies how to address these sharing requirements, OIRM cannot ensure that new agency networks are compatible. Therefore, USDA risks wasting millions more building new networks that are redundant and may not provide the capabilities necessary for sharing information among agencies.

Recommendations

We recommend that the Secretary of Agriculture report the Department’s management of telecommunications as a material internal control weakness under the Federal Managers’ Financial Integrity Act.
This weakness should remain outstanding until USDA fully complies with federal regulations for managing telecommunications and institutes effective management controls. We also recommend that the Secretary of Agriculture direct the Under Secretaries and Assistant Secretaries to immediately conduct—in cooperation with USDA’s Chief Financial Officer, the National Finance Center (NFC), and OIRM—a one-time review of commercial telephone bills for accounts over 3 years old to identify instances where USDA may be paying for telecommunications services or leased equipment that are unnecessary or no longer used. Further, all accounts associated with any USDA office that has closed or moved within the last 3 years should also be reviewed to identify telephone services that private vendors may still be providing to closed offices. On the basis of this review, USDA should (1) take appropriate action with vendors to disconnect any unnecessary or unused telecommunications services and terminate leases for equipment no longer needed or in use by agencies and (2) seek recovery of expenditures for any vendor charges deemed inappropriate. The Secretary should also direct the Under Secretaries and Assistant Secretaries to establish and implement procedures for reviewing telecommunications resources at offices USDA plans to either close or relocate to ensure that (1) all unneeded telecommunications services are terminated promptly and vendor accounts closed and (2) telecommunications equipment is properly accounted for and reused where it is practical and cost-beneficial to do so. We further recommend that the Secretary of Agriculture direct the Assistant Secretary for Administration to take immediate and necessary action to address and resolve the Department’s telecommunications management and network planning weaknesses.
At a minimum, the Assistant Secretary should require the Office of Information Resources Management to
- revise departmental policies to require USDA agencies to establish and maintain agencywide telecommunications inventories that contain, at a minimum, circuit information, equipment and service types, network usage levels, costs, and other information agencies need to effectively manage and plan telecommunications resources in accordance with federal requirements;
- develop additional departmental policy requiring agencies to establish management controls over the acquisition and use of telecommunications resources, and assist agencies in carrying out these requirements by completing a systematic review of the agencies’ current telecommunications management practices to (1) identify and correct telecommunications management deficiencies that exist and (2) establish an agency telecommunications management program that sets performance expectations over agency telecommunications activities and assigns the responsibility and accountability necessary to ensure these activities are effectively carried out;
- provide USDA agencies with explicit guidelines that include, at a minimum, procedures to (1) monitor acquisitions of telecommunications services and equipment and coordinate purchases with other agencies to ensure that resources are cost-effectively obtained and (2) implement call detail programs and other necessary procedures to regularly review vendor-provided bills for telecommunications services and leased equipment to verify the accuracy of these charges and ensure the proper use of FTS 2000 and other government-provided resources and services;
- strengthen oversight by conducting periodic reviews of agency telecommunications management activities in accordance with federal requirements to ensure that (1) inventories of telecommunications equipment and services are properly maintained, (2) sufficient management controls exist over telecommunications resources and expenditures, and (3) redundant or uneconomical services are eliminated;
- determine, with assistance from the Under Secretaries and Assistant Secretaries for USDA’s seven mission areas, the interagency information sharing requirements necessary to effectively carry out the Department’s cross-cutting programs, and include these data sharing requirements in departmental and agency strategic IRM and telecommunications plans;
- enhance the ALO and technical approval programs by increasing the technical focus of reviews of agency telecommunications strategic plans and network acquisition plans, and by providing explicit implementing guidance to ensure that information sharing requirements and opportunities to share network resources are identified; and
- preclude USDA component agencies from developing networks that do not address departmentwide sharing needs by making OIRM technical approvals contingent on the component agencies having considered and sufficiently addressed information sharing requirements and opportunities to share network resources.

Agency Comments and Our Evaluation

USDA’s Assistant Secretary for Administration provided written comments on a draft of this report. The Assistant Secretary agreed with most of our recommendations, noting that the draft report contained many excellent recommendations that were well received by the Department. The Assistant Secretary stated, however, that he disagreed with two of our recommendations. Regarding our recommendation to determine interagency information sharing requirements, the Assistant Secretary stated that USDA’s existing policy is adequate to meet departmental requirements. This statement is not consistent with the facts. USDA’s written policy does not require OIRM and the component agencies to identify the interagency information sharing requirements that must be met to effectively and fully carry out cross-cutting programs. Therefore, this recommendation remains unchanged.
The Assistant Secretary also disagreed with our recommendation to enhance ALO and technical reviews of agency telecommunications plans and activities, noting that USDA’s ALO and selective review programs are not technical functions. However, when the ALO program was established, USDA told the Congress that ALOs would perform the in-depth tasks necessary to improve system compatibility and data sharing across agencies and would strengthen coordination of the agencies’ telecommunications projects. Further, the report addressed technical reviews, not the selective GSA reviews USDA discusses in its comments. We revised the report to clarify this, but the recommendation remains unchanged. The Assistant Secretary also raised questions about how much money is wasted due to ineffective departmental management and planning of telecommunications. The dollar amounts included in our report are based on USDA documentation and on interviews with USDA’s OIRM staff. For example:
- We obtained USDA commercial telephone billing records, which are maintained at NFC, showing that USDA pays tens of thousands of dollars each year to lease telephone equipment that is either no longer used or cannot be located.
- We also obtained billing records showing that USDA pays more than it should because agencies fail to make long-distance telephone calls using available FTS 2000 services and fail to terminate telecommunications services at offices being closed.
- OIRM’s Telecommunications Services Division staff, who are responsible for identifying opportunities to consolidate telecommunications, told us USDA could save as much as $15 million to $30 million annually by eliminating redundant commercial telecommunications services and by sharing resources, and as much as $2 million each year by using FTS 2000 to make intra-LATA telephone calls.

We held numerous meetings with OIRM and NFC staff during our review in which these amounts were discussed in great detail.
We also included these dollar amounts in the information we provided to the Assistant Secretary, the Deputy Assistant Secretary, and the OIRM Director during an exit conference held with these officials on July 12, 1995. At that time, we also provided copies of the billing records that contained the dollar amounts we cite in the report to USDA’s Deputy Chief Financial Officer and the NFC Director, so the Department could discontinue payments for leased equipment and services that are not being used. Finally, the Assistant Secretary said the draft report did not give USDA sufficient credit for OIRM actions recently taken to improve departmentwide telecommunications management. The report discusses each improvement initiative undertaken by OIRM that we could substantiate with available USDA documentation. The Assistant Secretary’s written comments and our response are provided in appendix I.
Pursuant to a congressional request, GAO reviewed the Department of Agriculture's (USDA) management and planning of its telecommunications resources, focusing on whether USDA is: (1) managing its telecommunications resources cost-effectively; and (2) planning telecommunications networks to support its information sharing needs. GAO found that: (1) USDA is not cost-effectively managing its annual $100 million telecommunications investment; (2) USDA agencies waste millions of dollars each year paying for unnecessary telecommunications services and equipment, because the Office of Information Resources Management (OIRM) has not fulfilled its responsibility to manage and oversee USDA telecommunications resources; (3) USDA is not effectively planning its telecommunications networks and ensuring that they can support its information sharing needs for the future; and (4) OIRM continues to approve the acquisition and development of costly new agency networks that overlap and do not support interagency information sharing.
Background

State and local entities are typically responsible for disaster response efforts, but federal law establishes the process by which a state may request a presidential disaster declaration to obtain federal assistance. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), as amended, permits the President to declare a major disaster after a governor of a state or chief executive of an affected tribal government finds that a disaster is of such severity and magnitude that effective response is beyond the capabilities of the state and local governments and that federal assistance is necessary. The act also generally defines the federal government’s role during disaster response and recovery and establishes the programs and processes through which the federal government provides disaster assistance. Figure 1 shows the number of major disasters declared in the United States since Hurricane Katrina, from fiscal years 2005 through 2014. Federal financial assistance for a major disaster comes through the Disaster Relief Fund, a source of appropriated funding that provides grants and other support to state, local, and tribal governments during disaster recovery. The fund is also used by FEMA for its administrative costs related to providing and managing disaster assistance, and for contracts in support of disaster relief efforts. For example, FEMA awarded contracts worth more than $347 million in fiscal years 2013 and 2014 to provide technical support—such as architecture and engineering services—to the public assistance program that helps states and local governments rebuild damaged infrastructure.

Contracting Workforce in FEMA’s Headquarters and Regions

FEMA’s contracting efforts are supported by a contracting workforce in OCPO, located in FEMA headquarters and in its 10 regions. This office is led by FEMA’s Chief Procurement Officer, who is also the Head of Contracting Activity.
Figure 2 shows the current structure of the OCPO, which reflects a recent reorganization effective January 1, 2015. FEMA’s contracting officers in headquarters support a variety of functions, including supporting information technology, activities to prepare for and mitigate disasters, and disaster response. The disaster and field operations division manages contracting for disaster response efforts, including
- logistics—delivering goods and services to support disaster survivors and communities, including life-sustaining commodities such as meals, blankets, and electricity generators;
- response—coordinating capabilities needed immediately following a disaster, such as air and ground evacuation services and emergency sheltering; and
- recovery—primarily supporting rebuilding efforts, including technical assistance programs.

In fiscal years 2013 and 2014, FEMA’s headquarters contracting offices primarily responsible for supporting disaster relief efforts—including logistics, response, and recovery—obligated $631 million in contracts and task orders. While the majority of FEMA’s contracting workforce is located in headquarters, contracting officers are also located in each of FEMA’s regional offices. Figure 3 identifies the location of FEMA’s headquarters and 10 regional offices. While they support a variety of contracting functions for their respective regions, these contracting officers serve as the first response for contracting if a disaster occurs in their region. During a disaster, the regional offices can request additional contracting support from headquarters if needed. Contracting officers are typically located in each regional office’s mission support division, which provides essential administrative, financial, information technology, and acquisition support for the region. Regional contracting officers report to both their mission support division supervisors and their OCPO supervisor in headquarters, as shown in figure 4.
Each region is headed by a Regional Administrator who reports directly to the head of FEMA. In fiscal years 2013 and 2014, FEMA’s regional contracting offices obligated almost $137 million for various efforts, including contracts for regional support, such as supplies and services to support regional offices, as well as disaster support. Like FEMA’s overall workforce, the contracting staff consists of a combination of employees hired under different authorities. The authority under which employees are hired affects the type of work all employees—including contracting staff—can support at FEMA:
- Title 5 employees are both permanent and temporary employees who make up FEMA’s day-to-day workforce and are responsible for administering the agency’s ongoing program activities in headquarters and regional offices. During disasters, these employees can be deployed as needed. These employees are hired under title 5 of the United States Code, which establishes the law for managing human resources in the federal government.
- Stafford Act employees provide support for disaster-related activities and augment FEMA’s disaster workforce at facilities, regional offices, and headquarters. Stafford Act employees include a Cadre of On-Call Response/Recovery Employees, who have 2- to 4-year renewable appointments and can be deployed to fulfill any role specifically related to the incident for which they are hired and qualified, such as contracting, during disaster assistance response and recovery efforts. They also include reservists, who work on an intermittent basis and are deployed as needed.

FEMA contracting officials explained that this means title 5 contracting officers can award and administer all types of FEMA contracts, while Stafford Act employees are limited primarily to disaster-related contracting efforts. Most FEMA contracting offices, at headquarters and in the regions, include a combination of both title 5 and Stafford Act employees.
The contracting workforce includes professionals in several job series, with qualifications that are standard across civilian government contracting. These job series include the following:

- Contracting specialists in the 1102 series, which includes contracting officers who have warrants that authorize them to obligate and commit government funds. Some warrants are unlimited; others are limited to a specific dollar amount or to specific functions, such as construction. To maintain their warrants, contracting officers must meet core education, training, and experience requirements set by the Office of Federal Procurement Policy.
- Purchasing agents in the 1105 job series, who are qualified to contract for smaller purchases, typically under $150,000.

For the purposes of this report, we refer to staff in the contracting specialist job series, not purchasing agents, as the contracting workforce.

After a major disaster is declared, FEMA establishes a joint field office, a temporary office through which it coordinates disaster response and recovery efforts with state and local governments and organizations. Led by a federal coordinating officer, the joint field office is supported by incident management staff from various FEMA teams that are deployed to support the disaster. One of these teams includes contracting support staff that may come from headquarters or regional offices. Once the need for disaster response and recovery ends and a joint field office is closed, the contracts supporting the disaster are returned to the cognizant regional contracting office. In cases where long-term recovery is needed, FEMA may transition a joint field office into a long-term recovery office. For example, the joint field offices established in New York and New Jersey to support Hurricane Sandy in 2012 became long-term recovery offices in 2014.

PKEMRA Contracting Requirements

PKEMRA was enacted to address various shortcomings identified in the preparation for and response to Hurricane Katrina.
In a November 2008 report, we identified more than 300 provisions associated with PKEMRA and described actions that DHS and FEMA had taken toward implementation of the law. These included 4 provisions related to FEMA's contracting:

- restricting the contract period to 150 days for noncompetitive disaster support contracts justified as an urgent need;
- identifying products and services suitable for advance contracts—for example, food and cots for survivors and engineering services—and establishing such contracts;
- providing a contracting preference to local vendors for disaster response contracts, justifying awards made to non-local vendors, and transitioning any contracts awarded prior to disasters, such as housing inspection contracts, to local vendors; and
- limiting the use of subcontracts to 65 percent of the cost of cost-reimbursement contracts, task orders, or delivery orders. This applies to contracts and orders that exceed $150,000 and are used to support disaster response and recovery efforts.

In our 2008 report, we reported that FEMA had taken preliminary action on the PKEMRA provisions we reviewed. For example, we found that FEMA had drafted a regulation to limit the use of subcontracting in certain contracts, but it was still under review at the time of our report.

FEMA Has Expanded Its Contracting Workforce since 2005 but Does Not Have Sufficient Processes to Prioritize Disaster Workloads or Cohesively Manage Contracting Officers

FEMA's contracting officer workforce has grown significantly since Hurricane Katrina, but the agency has struggled with attrition at times. Turnover in FEMA's contracting officer workforce has had particular impact on smaller regional offices which, with only one or two contracting officers, face gaps in continuity. FEMA's workforce increases are due in part to the creation of the Disaster Acquisition Response Team (DART) in 2010, headquarters staff charged with supporting disasters.
DART has gradually assumed responsibility for administering the majority of FEMA's disaster contract spending, but FEMA does not have a process for how the team will prioritize its work when its members are deployed during a disaster. Further, in 2011, FEMA established an agreement between the regions and headquarters to revise regional contracting staff reporting responsibilities; however, we found challenges with how the agreement is being implemented, particularly in that it heightens the potential for an environment of competing interests for the regional contracting officers. FEMA has not updated the agreement, even though the agreement states it will be revisited each year, leaving it in conflict with more recent guidance that increases contracting officer training requirements.

FEMA Has Increased the Size of Its Contracting Workforce since Hurricane Katrina

The size of FEMA's contracting officer workforce at the end of fiscal year 2014 was more than triple its size at the time of Hurricane Katrina. When Hurricane Katrina struck in 2005, FEMA had a total of 45 contracting officers in its headquarters and regional offices. In addition to hiring headquarters and regional contracting officers after Hurricane Katrina, FEMA also established long-term recovery offices to assist with lengthy recovery efforts in Louisiana and elsewhere. By the time Hurricane Sandy made landfall in 2012, the workforce had grown to over 170 contracting officers. This number has declined slightly since then, with FEMA having 163 contracting officers by the end of fiscal year 2014. See figure 5 for additional information. During this period of growth, FEMA struggled with attrition at times, experiencing years in which the number of contracting officers leaving the job series outpaced the number of new additions. As seen in figure 6, FEMA was able to replace about two-thirds of the departures in fiscal years 2009, 2010, and 2013.
FEMA officials noted that some of these departures were to be expected due to the natural decline in workload at long-term recovery offices for various disasters, including Hurricane Katrina. FEMA officials also explained that a slowdown in hiring occurred due to budget shortfalls, but the agency received authorization to hire additional staff in 2014 and began to fill these positions in 2015.

Turnover has disproportionately affected some of FEMA's 10 regions, where each office had two to five contracting officers at the end of fiscal year 2014. For example, at the end of fiscal year 2014, 6 of FEMA's 10 regional offices had contracting officers with an average of 3 years or less of contracting experience at FEMA. This turnover results in gaps in continuity, particularly for regions that have a smaller number of contracting officers; for example:

- Officials stated that, as of July 2015, one region was without contracting officers due to recent staff departures and relied on headquarters assistance to meet its contracting needs. The headquarters staff is providing the assistance in addition to their usual duties, so the region has limited contracting capacity and potential continuity challenges.
- In two regions, officials said they received complaints from unhappy vendors due to unpaid invoices left by previous contracting officers.
- Another contracting officer noted that the contracting staff only know about open contracts when unspent funds remain and do not know how many open contracts were complete but waiting to be closed.

The turnover also limits the cumulative amount of disaster contracting experience within each regional office. As a result, some regional offices have contracting officers with limited hands-on disaster experience, yet these officers are tasked with being the first responders for contracting should a disaster occur in their regions.
In a 2008 memorandum about assessing an agency's acquisition functions, the Office of Federal Procurement Policy stated that retention and turnover issues can be signs of potential staff loss or indicators of other matters related to morale, cautioning agencies that high turnover can impact mission accomplishment. Senior-level FEMA officials said that morale was a challenge, in addition to the high demand for contracting officers across the government. In one region, a regional supervisor stated that contracting officers can easily find other opportunities for advancement without the hassles of disaster contracting, especially if they hold certain kinds of warrants, such as for construction contracts. Headquarters officials noted that the recent reorganization was implemented partly to create more opportunities for promotion and improve morale, because staff will leave if there are not enough opportunities. In addition, the officials said they have also prioritized hiring efforts to rebuild their workforce after recent years of limited hiring, which affected morale.

FEMA Created a Team of Disaster Contracting Officers to Provide Increased Oversight, but Has Not Established a Process for Prioritizing Workload

In a 2010 business case to justify hiring additional contracting staff, OCPO stated that it had substantially added to the size of its contracting workforce in the years since Hurricane Katrina, but that it did not have enough specialized contracting staff to manage the contract administration and oversight requirements of several simultaneous large-scale disasters or a catastrophic event. FEMA identified contract oversight as a priority after a DHS Inspector General report found that FEMA incurred over $5 million in excessive contract costs because of inadequate controls during Hurricane Katrina.
To address the need for improved contract oversight, in 2010, FEMA created 18 new contracting officer positions to form DART, a team whose primary purpose is to support contract administration for disasters. Most DART members are located in three regional offices when not deployed to disasters, but are considered headquarters staff for management purposes. If a region needs additional contracting assistance for a disaster, it can come from reservists, who have limited procurement authority as purchasing agents and can support smaller disasters, or from DART if larger contracts, or contracts that require specific warrants, are needed. For example, FEMA officials reported that a DART member was deployed to a recent disaster in Alaska because none of the regional contracting officers had the architecture warrant needed to support the disaster. To illustrate how FEMA deploys DART, figure 7 shows how and when DART members were deployed to support Hurricane Sandy response efforts. For example, two of the three DART contracting officers in Oakland, California, deployed to Hurricane Sandy.

In its 2010 business case, OCPO stated that with DART, FEMA would be able to deploy experienced personnel to joint field offices to provide increased oversight of complex contracts during a disaster. These oversight duties would include making necessary modifications to complicated contracts and monitoring contractor performance, such as assessing contractor compliance with the terms of awarded contracts and tracking costs and invoice payments. Since its establishment in 2010, DART has gradually assumed more responsibility for administering the majority of FEMA's disaster contract spending, which senior officials explained was the original intent. Much of this expansion in the team's responsibilities has occurred during a period in which FEMA has responded to fewer disasters.
In addition to deploying to and supporting joint field offices during disasters, DART's duties now also include the following:

- Administering FEMA's national contracts for housing inspection services, telecommunication services, and construction of temporary camps for disaster response personnel. Some of these are multimillion-dollar contracts. For example, FEMA obligated more than $117 million in fiscal years 2013 and 2014 for housing inspection services, and the estimated overall value of these contracts ranges from $550 million to $800 million. A FEMA official stated that permanent full-time contracting officers at headquarters previously handled most of these contracts.
- Preparing to manage FEMA's public assistance contracts, which are used to assess the extent of damage to public facilities and critical infrastructure and account for the largest share of FEMA's disaster support contracts—$348 million of the $631 million obligated in fiscal years 2013 and 2014 by contracting offices in headquarters that support disasters. Further, officials explained that a permanent full-time contracting officer at headquarters previously handled these contracts.
- Assisting other, non-disaster efforts in FEMA. For example, in a 2014 memo, FEMA's Head of Contracting Activity noted that he had asked for DART's assistance in augmenting headquarters staff to close out contracting actions, even though DART is normally reserved for disaster response support. In addition, FEMA officials stated that DART has been called upon to provide support in a region that had been without a contracting officer since November 2014.

As DART has assumed more responsibilities, FEMA has not established a process for prioritizing workload during busy disaster seasons. FEMA officials in charge of DART said that they review requests for DART's assistance on an ad hoc basis and follow FEMA's standard agency policy about how to redistribute work if DART members are suddenly deployed to a disaster.
While FEMA policy addresses the process for transitioning contract files from one contracting officer to another, it does not address how decisions will be made about which contracts a deployed DART member will remain responsible for during deployment and which contracts will transition to another contracting officer. If a disaster were to strike, DART contracting officers said they would take some of their current workload with them, while other tasks might have to wait until they could return to their normal contracting duties or be reassigned to other contracting officers. Federal internal control standards call for agencies to document responsibilities through policies and to have mechanisms in place to react to risks posed by changing conditions. Although disaster response often occurs in a changing environment, FEMA's 2010 business case for establishing DART and the policy for transitioning contract files do not provide a standardized process through which requests for assistance will be assessed and prioritized, or how individuals' workloads will be prioritized during disasters. Without additional guidance that specifies FEMA's criteria for prioritizing DART contracts and is tailored for a workforce expected to frequently deploy in support of disasters, FEMA risks creating oversight gaps that may affect its largest contracts.

Agreement Establishing Headquarters and Regional Responsibilities Poses Challenges for FEMA to Cohesively Manage Its Contracting Workforce

In 2011, FEMA created a formal agreement between the regions and headquarters to establish a new role for FEMA's OCPO in overseeing regional contracting staff. Prior to the agreement, regional contracting officers reported only to their respective supervisor in the region, with no formal link to OCPO.
FEMA instituted this agreement in response to a 2009 DHS Inspector General report, which recommended, in keeping with DHS guidance and federal internal control standards, that only contracting officials should manage the technical performance of contracting officers. The report stated that having the contracting officer's performance and career advancement controlled by someone who is not a contracting professional was an internal control risk and created a potential conflict-of-interest situation for the contracting officer. As a result of this agreement, regional contracting officers have a dual reporting chain to both OCPO and their supervisor within the region. The 2011 agreement outlines the responsibilities of the regional contracting officers' supervisors in OCPO and in the region. OCPO serves as the contracting officers' official performance reviewer, while a regional supervisor manages their day-to-day activities. Table 1 details the responsibilities established through the agreement.

The agreement states that its intent is to establish roles and responsibilities for an oversight arrangement that requires greater collaboration between headquarters OCPO and regional supervisors in order to be successful. While the current arrangement is an improvement over the prior situation, in which regional contracting officers had no reporting chain to the headquarters Chief Procurement Officer, we found four challenges with the current agreement that limit cohesive implementation: it creates the potential for competing interests, limits full visibility into the contracting officers' workload, does not mitigate the potential for miscommunication between headquarters and regions, and does not reflect new training requirements.

Competing interests.
With respect to operational control, we found that the dual reporting chain to both headquarters and regional mission support, set forth in the service level agreement, heightens the potential for an environment of competing interests for the regional contracting officers. Specifically, in some regions, supervisors have assigned duties outside of a contracting officer's responsibilities. In other cases, contracting officers have experienced pressure from program officials to make decisions that may not be appropriate. In both situations, being physically located in a regional office where the regional supervisor is not a contracting professional gives contracting officers less standing to resist requests; for example:

- Based on our discussions with regional supervisors and contracting officers, we found that regional supervisors in three regions had asked contracting officers to take on additional duties outside of their contracting responsibilities. In one case, an internal review at FEMA showed that a regional contracting officer did not deploy to a disaster because he was carrying out non-contracting tasks as requested by a regional supervisor not typically responsible for overseeing contracting officers. As a result of the internal review, FEMA reassigned the contracting officer to a regional mission support supervisor to follow the management structure used in other regions.
- Contracting officers in four regions reported resistance from regional program staff in following contracting processes, such as meeting competition requirements. One mission support supervisor explained that when there are questions about contracting processes, she does not necessarily understand what the contracting officer is required to do in order to adhere to contracting regulations. In one case, contracting officers reported that program staff wanted them to bypass contracting requirements and award a noncompetitive contract.
The program officials complained to the regional supervisor, who in turn pressured the contracting officers to make the award. In a July 2010 report, we found that the potential exists for program offices, which play a significant role in the contracting process, to exert pressure on contracting officers in ways that may not result in the best use of taxpayer dollars. Further, a 2008 Office of Federal Procurement Policy memorandum states that agencies should consider where the acquisition function is placed, because if it is viewed as administrative support rather than as a business partner, contracting requirements may be circumvented. The current agreement does not specifically address the risks associated with the divided structure of FEMA's regional contracting offices or the actions that may be taken to mitigate these risks.

Limited insight into contracting officers' work. Dividing supervisory responsibilities between headquarters and regional staff has resulted in cases where neither had full insight into contracting officers' work, in both the operational control and training areas of responsibility. In some cases, problems were not detected by management and led to gaps in oversight; for example: A regional supervisor reported discovering poor contract administration after the departure of a contracting officer. The problems included awarding a contract to an incorrect vendor, miscommunicating about the period of performance on a contract, and neglecting to send a copy of a contract to a vendor. Contracting officers in three regions discovered overdue invoices, and contracting officers in one of those regions said they had to reestablish credibility with the local vendor community, which had been lost due to unpaid invoices left by previous contracting staff.
Contracting officers said this situation increased some vendors' unwillingness to work with FEMA, and they cited it as a potential barrier to competition in geographic areas where relatively few vendors are available. Regional supervisors in one region stated that they were unaware of the extent of the training requirements for contracting officers until one of their contracting officers temporarily lost his warrant after not meeting them. The regional supervisors noted that it was difficult to operate without one of their contracting officers, ultimately decided to ask for help from headquarters, and said that DART temporarily supported the region. Senior FEMA officials noted that they recently established a quality review team in headquarters that will provide more oversight to help ensure that contract actions and documentation prepared by regional and headquarters contracting officers adhere to government-wide and agency-specific regulations. The quality review team is to examine contract files for contracts of $500,000 and above, while contracting staff are to conduct peer reviews of contracts below $500,000.

Challenges with communication and coordination. Overall, headquarters and regional supervisors said that even with the agreement in place, communication and coordination are challenging across most of the areas of divided responsibilities. For example, one regional supervisor said that it was difficult to address personnel issues without being the official performance reviewer, as headquarters retains this function under the agreement. In another region, the regional supervisor said he was not made aware of an escalating disagreement between a regional contracting officer and headquarters until the day before a task order needed to be awarded. With the existing task order set to expire, the regional supervisor said that the region ultimately ceded to headquarters' preferences, and the contracting officer felt pressured to do so.
The regional supervisor noted that the current agreement with OCPO does not adequately address the roles of headquarters and regional supervisors, especially when there is a difference of opinion about how to manage staff; further, it was difficult to find someone at headquarters with whom to discuss how to handle the situation. A headquarters supervisor noted that the dual reporting structure sometimes created confusion about who was supposed to make specific decisions. For example, regional staff were not sure who would decide how workload would be covered while contracting officers were at training. Key practices for successful collaboration among government agencies include clear roles and responsibilities, compatible policies and procedures, and articulation of a common outcome. Additionally, internal control standards require the easy flow of information throughout the organization, especially between functional activities such as procurement and production.

Addressing changes in training requirements. The agreement does not reflect current training requirements for contracting officers. Specifically, the agreement states that contracting officers could satisfy their ongoing training requirements through online courses available at the time the agreement was written and that these online courses would not require the regions to pay for contracting officers' travel costs to take the training. However, in 2014, FEMA issued guidance requiring contracting officers to obtain classroom training to fulfill their ongoing training requirements, and the Office of Federal Procurement Policy increased the classroom training needed for contracting officers to advance to the next certification level. Contracting officers in three regions told us that meeting these requirements would likely require travel funds due to the scarcity of course availability in some regions.
One regional supervisor noted that these travel funds would be paid out of the region's training budget, even if there is no fee for a course. In one region, a contracting officer reported that she had to cancel her travel plans the day before her scheduled departure for a course due to lack of funding. The Office of Federal Procurement Policy had previously established guidance, in 2008, encouraging agencies to provide contracting staff with resources for continuous learning efforts, as skills and knowledge gaps can inhibit contracting officers' ability to properly oversee the types of contracts used. Without addressing recent changes to training requirements, there is a risk that contracting officers will not meet those requirements.

Although the formal agreement between the regions and OCPO states that both parties are to revisit it on an annual basis, senior FEMA headquarters officials stated this has not occurred and that they did not see a need to revisit it because they had not received feedback that the regions wanted to do so. As a result, the agreement does not address the concerns identified above and has not been updated in the more than 4 years since its creation to reflect good practices or lessons learned.

Contracting Reforms Are Not Fully Implemented and Disaster Contract Management Practices Are Inconsistent

FEMA has taken actions to address most of the four PKEMRA requirements we examined, but the agency has not fully implemented them. Additionally, inconsistent contract management practices during disaster deployments—such as incomplete contract files and reviews—create oversight challenges.
FEMA Has Not Fully Implemented Required Contracting Reforms Following Hurricane Katrina

Based on our review of 27 disaster support contracts from fiscal years 2013 and 2014, FEMA has made progress in addressing some aspects of the contracting reforms required by PKEMRA, including the use of contracts established prior to a disaster for goods and services that are typically needed during a disaster response—known as advance contracts. However, we found that confusion exists about key requirements, including the 150-day limit on certain noncompetitive contracts and the transition of awards to local vendors. This confusion is furthered by a lack of specific guidance on how to implement these requirements, including a clear definition of the term local contracting. In addition, DHS has taken no action on the requirement involving limits on subcontracting. See table 2 for more information.

Our review of 27 disaster support contracts and task orders included 13 that were not competitively awarded, which contracting officers explained was necessary due to unusual and compelling urgency. More than half—8 of the 13—exceeded the PKEMRA time limit of 150 days, which was put in place to reduce the use of noncompetitive contracts. However, we found that FEMA had not approved any of these to exceed 150 days, as required. These 8 contracts and task orders exceeded the time limit by a few months to a year and a half. DHS acquisition regulations require that this approval be given by FEMA's senior acquisition official, the Head of Contracting Activity, who reported that he had rarely been asked to approve extensions beyond the 150 days during his time in office. For the eight contracts and task orders that exceeded the 150-day limit, contracting officers were either unaware of the time limitation or did not take steps to get approval from the Head of Contracting Activity as required.
For example, five of the eight were for hotels to house FEMA employees in the immediate aftermath of Hurricane Sandy. Contracting officials explained that these contracts, which totaled almost $6 million in fiscal year 2013 and 2014 obligations, were urgent because of difficulties FEMA faced in finding enough hotel rooms at government per diem rates for deployed FEMA employees. A FEMA report following the hurricane noted that almost 10,000 employees were deployed to support Hurricane Sandy. At the same time, more than 11,000 displaced survivors from New York and New Jersey were housed in hotels and motels in the area. While this situation was clearly urgent in the immediate aftermath of the hurricane, we found no documentation in the contract files as to why the hotel rooms were still needed more than 150 days after the disaster or why contracting officers did not obtain the necessary approval to extend the contracts. The other three contracts that exceeded the 150-day time limit included:

- A $66 million task order for architect and engineering technical assistance awarded after Hurricane Sandy, which was extended a year and a half beyond the 150-day limit. Contracting officials did not realize that the 150-day PKEMRA limit applied to the order.
- A $200,000 contract for leases of mobile home park spaces to provide temporary housing for Hurricane Sandy disaster victims, which was extended more than a year beyond the 150-day limit. Contracting officials explained that such services are often not competed because of the limited number of available vendors in disaster areas, but the contract file did not contain a justification for exceeding the PKEMRA requirement.
- A $200,000 contract for security services after Hurricane Irene, which struck the New York area in August 2011. The contract had a justification, but it was not approved by the Head of Contracting Activity as required. This contract exceeded the 150-day limit by about a year.
Contracting officers in two regions and DART contracting officers said they might not transition a noncompeted contract to a competed award after 150 days because doing so may not be a priority, citing adequate vendor performance, workload prioritization, and the potential costs to re-compete the contract as factors that may be considered. In addition to the case studies we reviewed, FEMA contracting officials in five regions were confused about the 150-day requirement for noncompeted disaster support contracts or the appropriate use of the “urgent and compelling” justification for noncompetitive contract awards. For example, in several instances we were told that the 150-day restriction was not absolute, or that all contracts are considered urgent in a disaster. While the FAR provides some flexibility for disaster contracting, officials are to justify noncompeted contracts and meet the 150-day restriction for disaster contracts justified based on an urgent need. One official also said they were not aware of any guidance on the appropriate use of the urgency justification for disaster contracts. While this information is included in DHS's justification and approval guide and a 2008 FEMA standard operating procedure for sole-source justifications and approvals, FEMA does not address this requirement in training materials or other guidance to its contracting officers. For example, FEMA's desk guide and disaster contracting training course do not mention this disaster-specific 150-day restriction. Senior FEMA contracting officials said the requirements will be reviewed in future training updates.

In accordance with PKEMRA, FEMA has submitted quarterly reports to Congress since December 2007 that list all disaster contracting actions, including details on contracts awarded by noncompetitive means. However, in our review of reports to Congress in fiscal years 2013 and 2014, we found that some did not capture all of FEMA's noncompetitive task order actions.
We found that $32 million in noncompetitive obligations were not reported in fiscal year 2013. This number included more than $14 million in obligations to a $66 million technical assistance award that was not competed because of an urgent need for services immediately following Hurricane Sandy. A FEMA official explained that there had been an error in the data compilations prior to mid-2013 that inadvertently excluded noncompeted task orders issued under competitively awarded contracts. The official stated that FEMA has since updated its process to capture these types of awards, including implementing additional quality control reviews, such as comparisons with federal procurement data sources, and adding more people to the review process. As a result, the FEMA official stated that the quarterly reports submitted after the third quarter of fiscal year 2013 are accurate. We confirmed that similar task orders were included in FEMA's fiscal year 2014 reports, but FEMA officials told us that they have not notified Congress of the errors in prior reports and do not plan to do so. Without accurate information, Congress does not know the full extent of FEMA's past noncompetitive awards and cannot use these reports to evaluate FEMA's noncompetitive spending over time. Advance Contracting and Agreements PKEMRA required FEMA to identify and establish contracts for goods and services that can be obtained before a disaster, and FEMA has done so for many of the categories identified, such as nonperishable food items and housing assistance. PKEMRA also required FEMA to develop a contracting strategy that maximized the use of advance contracts to the extent practical and cost-effective. As we found in 2006 following Hurricane Katrina, agencies need to have competitively awarded contracts in place before a disaster to respond effectively.
According to FEMA, establishing contracts for goods and services in advance ensures that the agency can rapidly mobilize resources in immediate response to disasters and reduces the need to buy disaster relief and recovery items through noncompetitive contracts. In 2008 we found that FEMA provided Congress with a list of categories of the products and services suitable for establishing contracts in advance and a plan for maximizing the use of these contracts, as required by PKEMRA. FEMA officials explained that indefinite-delivery indefinite-quantity (IDIQ) contracts facilitate the goal of having contracts available if there is a disaster. In addition, as part of their overall acquisition strategy, FEMA officials said that they use other advance vehicles through which they obtain goods and services, including interagency agreements and mission assignments, which are work orders directed to other federal agencies to complete a specified task. See table 3 for examples of these contracts and agreements. Although the contracting officers we spoke with were aware of certain headquarters advance contracts, such as for housing inspections and technical assistance—which made up the majority of FEMA's fiscal year 2013 and 2014 obligations from disaster support and regional contracting offices—they reported varying awareness of the information available on other advance contracts. FEMA headquarters maintains a list of these contracts through its shared document management system and identifies additional contracts in training sessions, but we found that contracting officers in three regions were not aware of such information and did not know how to access the contracts. These contracting officers only learned about the list when they were told to use it for certain items, such as fuel and translation services.
In one case, a regional contracting officer tried to establish a contract in advance for fuel but was stopped because he was not aware of the policy requiring use of a headquarters contract for this item. Another contracting officer had a similar experience when trying to award a contract for translation services. We found that FEMA's lists and training materials do not comprehensively identify all of the advance contracts or vehicles available. For example, officials told us that FEMA has an interagency agreement with another agency to provide law enforcement and security forces, which is one of the service categories that FEMA previously identified as appropriate for advance contracts. However, this interagency agreement is not identified in FEMA's lists or training materials. A senior contracting official explained that these services may no longer need to be on the advance contract list since FEMA makes an effort to award security contracts to local law enforcement as part of its local business efforts. Fire and rescue support services are another requirement met by mission assignment to another agency that is not identified in FEMA's lists or training materials. PKEMRA also requires that FEMA coordinate advance contracts with state and local governments and that FEMA encourage state and local governments to engage in similar pre-planning for contracting. Our review found that outreach with state and local governments varied greatly, limiting FEMA's ability to support advance contracting efforts. Several regions, including two with a larger number of contracting staff and more disaster contracting experience, described how they engage in advance contracting efforts. One region's contracting officers began disaster pre-planning and conducted outreach to state vendors in an effort to build internal advance contract capacity.
Contracting officers said these efforts have since expanded to help several states access contracts awarded in advance, such as General Services Administration schedule contracts. Similarly, contracting officials from another region emphasized that they take the initiative to engage in strategic planning to identify needs and conduct regular outreach to local businesses across the region. They told us these activities facilitate local awards and provide for multiple sourcing options during a disaster. They said these pre-planning efforts are often efficient enough to have the bulk of a disaster's contracting in place soon after the disaster. In contrast, contracting staff in the other FEMA regions have more limited capacity for outreach or do not know that it is expected. One regional contracting officer said that he does not have any contacts within the state and has not taken steps to coordinate regional advance contracts. Other regional contracting officials said they would like to do more outreach to states, but find it difficult with their current workloads or staffing shortages. FEMA's existing guidance and training for contracting officers do not address their relationship with state and local contracting counterparts or the expectation for how they will support advance contracts. Contracting with Local Businesses The FAR, which implements the PKEMRA requirement to provide a contracting preference to local firms where feasible, offers contracting officers some flexibility to increase local awards, including setting work aside for only local firms to compete. The FAR requires that contracting officers document any decision to award disaster contracts to non-local firms—those companies or individuals that do not reside or primarily do business in a declared disaster area—in the contract file.
The FAR also requires transitioning non-local contracts awarded before a disaster strikes to local vendors as soon as possible, with FEMA policy stating that this should be accomplished within 180 days. Federal Acquisition Regulation 26.201-26.202, Local area preference: When awarding emergency response contracts during the term of a major disaster or emergency declaration by the President of the United States under the authority of the Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5121, et seq.), preference shall be given, to the extent feasible and practicable, to local firms. "Local firm" means a private organization, firm, or individual residing or doing business primarily in a major disaster or emergency area. "Major disaster or emergency area" means the area included in the official Presidential declaration(s) and any additional areas identified by the Department of Homeland Security. Figure 8 is an example of a FEMA disaster declaration depicting the counties included in the declaration, within which vendors would be considered local. While contracting officials recognized the importance of local contracting, in five of the 10 regions we met with, contracting officers either showed a great degree of confusion about determining which awards were local or told us that the process for determining if a vendor is local is not well-defined. For example:

- One official said that vendors could be considered local if they are in the same zip code as the designated disaster zones, even though the FAR says location is based on the declared disaster area, which typically consists of counties.
- Other officials said that contracting officers could exercise their discretion as to what constituted a local award, regardless of the declared area, with one contracting officer noting that if vendors in the declared area were unavailable due to the disaster, then going to vendors in nearby counties could be considered local.
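Under the FAR definition quoted above, the local-vendor determination is a membership test against the areas named in the declaration, not a zip-code or discretionary judgment. A hypothetical sketch (the county names and declaration contents are illustrative only, not an actual declaration):

```python
# Illustrative only: a vendor is "local" if it resides or primarily does
# business in a county named in the Presidential disaster declaration
# (plus any additional areas identified by DHS).
declared_area = {("NY", "Kings"), ("NY", "Queens"), ("NJ", "Ocean")}

def is_local(state: str, county: str) -> bool:
    return (state, county) in declared_area

print(is_local("NY", "Queens"))  # True: county is in the declaration
print(is_local("NY", "Albany"))  # False: nearby, but not a declared county
```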
While this is permissible, the contracting officer would not be able to call the contract local and would have to document the action in the contract file. Several contracting officers said that they would like additional clarification on local area contracting requirements, specifically what could be considered local. One said there were so many different approaches that she was not sure which ones were correct. Confusion over the definition of local was evident in the contract files we reviewed. Our analysis found that FEMA awarded 13 of the 26 contracts or task orders to vendors located outside the counties declared as disasters, but in only one of the cases was the award to a non-local vendor documented or otherwise addressed as required. Among these were two task orders issued under contracts that had been awarded before the disaster; these task orders exceeded 180 days but were not transitioned to local vendors as required by FEMA policy. See figure 9. The two awards that did not transition to local vendors after 6 months were housing inspection and technical assistance task orders, which are services that account for the majority of FEMA's disaster contract obligations. A contracting official explained that FEMA does not have a process for moving these awards to local vendors after 6 months, although FEMA's 2010 guidance specifies that such contracts require transition. Further, contracting officials had incorrectly identified 4 of the 13 contracts as local in FEMA's data systems. FEMA created a Local Business Transition Team in 2007 as a pilot program in part to support the transition to local vendors, and officials said they folded the team into FEMA's broader industry outreach efforts that can provide virtual assistance. FEMA's training materials and guidance do not fully address the requirement to document the contract file when making awards to non-local vendors.
For example, FEMA’s existing training does not reflect the FAR provision that requires documentation in any cases where local vendors are not used. Further, FEMA’s Emergency Contracting Desk Guide misconstrues the local requirement, incorrectly stating that contracts must be set aside for local vendors unless a written justification is provided. In contrast, the FAR states that local preference may take the form of local area set-asides or an evaluation preference. As a result of our review, a FEMA official responsible for developing the contracting officers’ training curriculum said that local contracts will be addressed in more detail in a future revision to a course on disaster contracting planned for late fiscal year 2015. Limits on Tiered Subcontracting In 2010, DHS published a proposed rule to implement Section 692 of PKEMRA, the provision of the law that prohibits the use of subcontracts for more than 65 percent of the cost of cost-reimbursement type contracts that exceed the simplified acquisition threshold—which is generally $150,000—and are used to support disaster response and recovery efforts. However, DHS has not issued a final rule. DHS policy officials said they have delayed implementing this rule because of comments they received that indicate the limitation would have a negative impact on small businesses. Officials explained that FEMA uses cost-type contracts primarily for construction services that often brings an array of specialists together on one job, creating the need for subcontracting. These officials explained that construction specialists are often small businesses; they noted that limitations imposed by this rule could inhibit these businesses’ ability to get work. While we understand DHS’s concern about the potential limitations this could place on small businesses, the requirement has not been addressed. 
DHS officials said they are considering requesting a congressional amendment to the law that would delete the requirement to limit the use of subcontractors under Section 692 of PKEMRA. However, other than publishing the proposed rule in 2010, DHS has taken no further action regarding implementation of Section 692. Without taking further action, DHS risks not addressing Congress's direction to limit the use of subcontracts as required under Section 692 of PKEMRA. FEMA Experienced Challenges with Contract Management during Disasters Contract management is the primary part of the procurement process that assures the government gets what it paid for, of requisite quality, on time, and within budget. We have previously reported that contract management presents challenges for agencies within the federal government. For FEMA, contract management is further complicated by the dynamic environment in which contracting officers operate during a disaster. FEMA contracting officials must respond quickly to acquire goods and services to assist survivors, but they must do so while complying with federal law and FAR requirements. They must also work within the joint field office structure and deployment processes that may result in multiple contracting officials supporting individual contracts at different points in time, particularly in cases where staff are deployed to support the region. These conditions can present challenges to good management of disaster support contracts. The issues we saw in the files we reviewed and heard about from contracting staff included the following: Incomplete documentation: In one region, mission support and contracting staff reported not receiving any files from contracts that had been awarded at the joint field office; others only learned of contracts when they received vendor invoices after the joint field office had closed.
In another region, a deployed contracting officer awarded several contracts for hotels during the immediate response to Hurricane Sandy but returned to headquarters shortly thereafter, before having an opportunity to bring the contract files up to date. This resulted in key documents missing from the files, including justifications for noncompetitive awards. Additionally, a $66 million task order for technical assistance services did not have the justification and approval required for a noncompetitive award. Lack of contract closeouts: Contracting officers in several regions told us that they have backlogs of contracts to be closed out. Contract closeout begins when all services have been performed and products delivered, and it is complete when all administrative actions have been completed and final payment to the vendor has been made. Prompt contract closeout is critical to ensure that all government debts are paid and unneeded funds are de-obligated. FEMA officials told us they are trying to address this issue by setting a goal for deployed staff to close out 90 percent of files before returning to their home office. FEMA training that included contracting officer metrics from fiscal years 2011 to 2014 showed that FEMA had de-obligated over $116 million and that over 1,900 contracts were available for closeout. No evidence of higher-level reviews: Nine of the 27 contract files we reviewed required review and approval by a person at least a level above the contracting officer. Three files contained some evidence of communication with a reviewer at the appropriate level, but only one of the files documented the required approval. For example, a $1.8 million security contract did not include evidence of the required review. DHS's Office of the Chief Procurement Officer has conducted several internal reviews and found similar problems with FEMA contracts.
A major finding from the most recent review, in September 2014, was that FEMA's poor contracting practices had extended over a period of time and that FEMA required significant improvement in the quality, documentation, and management of its contract actions to comply with laws and regulations. For example, the audit found problems with missing or incomplete contract files, lack of funding documentation, and a lack of required congressional notifications. The report also cited FEMA's inability to comply with its corrective action plans from prior reports, which FEMA acknowledged. In our discussions with the DHS reviewers, we were told that FEMA is now responding to its corrective action plan and that management was responsive to addressing the issues raised. Conclusions FEMA often contracts for products and services under extreme pressure to deliver these items to disaster survivors, and sometimes under the scrutiny of the entire nation. FEMA can leverage different resources to provide contracting support in a disaster, with regional contracting officers being the first to respond. Although FEMA has taken steps to increase the size of its contracting workforce, it does not manage its headquarters and regional workforce in a cohesive manner, resulting in contracting problems that are sometimes missed or overlooked. FEMA's development of DART in 2010 has helped to increase its capacity to provide contracting support for disasters, but the relatively low number of disasters in recent years led FEMA to increase the responsibilities of this team when not deployed, including taking on responsibility for some of FEMA's largest disaster-related contracts. Without updated guidance on this team's prioritization of workload in the event of a disaster, FEMA is at risk of not having complete coverage of its contracts during a disaster. Hurricane Katrina occurred 10 years ago and spurred the PKEMRA contracting requirements discussed in this report.
Even after 10 years, we found variation in the extent to which contracting officers were aware of and complied with the statutory requirements of PKEMRA, putting efficient use of taxpayer dollars at risk. Additionally, decision makers in FEMA and Congress need timely and accurate information, but FEMA has not informed Congress of errors it made in its quarterly reports on noncompetitive contracts prior to 2014. Without this information, Congress does not know the full extent of FEMA's past noncompetitive awards and cannot use these reports to evaluate spending over time. Finally, without taking steps to implement Section 692 of PKEMRA (the provision regarding limits on subcontracting) or seeking an amendment to the law that would delete the requirement, DHS runs the risk of never addressing this statutory requirement. Recommendations for Executive Action We are making eight recommendations to the FEMA Administrator and one recommendation to the Secretary of Homeland Security. To help ensure that FEMA is prepared to manage the contract administration and oversight requirements of several simultaneous large-scale disasters or a catastrophic event, we recommend that the FEMA Administrator update FEMA's guidance to establish procedures for prioritizing DART team members' workloads when deployed to a disaster.
To improve coordination and communication between FEMA OCPO and region mission support officials, we recommend that the FEMA Administrator direct OCPO and the regional administrators to revisit the 2011 service level agreement to:

- add details about the extent of operational control headquarters and regional supervisors should exercise to minimize potential competing interests experienced by regional contracting officers;
- further detail headquarters and regional supervisors' roles and responsibilities for managing regional contracting officers to improve coordination and communication; and
- ensure that the agreement reflects any new requirements, including recent changes in training that may require travel funds, and establish a plan to ensure that the agreement is reviewed on an annual basis as intended.

To improve implementation of the contracting provisions of PKEMRA, we recommend that the FEMA Administrator provide new or updated guidance to ensure all contracting officers are aware of:

- requirements concerning the 150-day limit on noncompetitive contracts justified as urgent and compelling;
- current information on available advance contracts and how they should be accessed and used;
- the need to conduct outreach to state and local governments to support their use of advance contracts; and
- how to contract with local vendors, including an understanding of the regulatory definition of "local," the documentation requirements for the use of non-local vendors, and the process for transitioning non-local awards to local vendors within required timelines or documenting why the transition was not completed.

To ensure the accuracy of information provided under PKEMRA, we recommend that the FEMA Administrator inform Congress of errors in reporting noncompetitive task orders in quarterly reports issued prior to 2014.
To address PKEMRA, we recommend that the Secretary of Homeland Security take action to address the requirements of Section 692 by implementing subcontractor limitations, or request that Congress amend the law to delete Section 692. Agency Comments We provided a draft of this report to DHS for review and comment. In its written response, reproduced in appendix II, DHS agreed with our findings and recommendations. The written response also includes information on the steps that FEMA and DHS will take to address each recommendation and provides an estimated completion date for these actions. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and the Administrator of FEMA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff making key contributions to the report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology To assess the Federal Emergency Management Agency's (FEMA) efforts to build and manage its contracting workforce and structure, we reviewed and analyzed data on FEMA's workforce since Hurricane Katrina, which occurred in 2005, to identify staff size, rates of attrition, and years of experience. To assess the reliability of the workforce data used in the review, we reviewed information on the data collection process and compared key data elements from the workforce data to statements about start dates, home offices, and deployments made by officials we interviewed. We concluded that the workforce data were sufficiently reliable for the purposes of this report.
To understand how FEMA manages its contracting workforce, including staff that support disasters, we met with contracting officers in FEMA’s 10 regional offices and in headquarters offices in Washington, D.C. that support disaster contracting activities. We also analyzed available workforce documents, including training materials and requirements for deployment, to determine the range of activities carried out by regional and headquarters contracting staff. We analyzed the agreement that governs headquarters’ role in regional contracting to determine the roles and responsibilities of regional offices and headquarters in disaster contracting. We also met with the FEMA headquarters officials responsible for regional contracting officers and the mission support officials from each of FEMA’s regional offices to discuss FEMA’s contracting workforce. We reviewed Office of Federal Procurement Policy and FEMA guidance regarding training requirements for contracting officers. Further, we reviewed federal internal control standards to determine if any major performance challenges exist. We also assessed the extent to which FEMA relies on contractors to support its acquisition function by identifying acquisition support contracts in federal procurement data, reviewing available files, and interviewing contracting officials in FEMA’s headquarters and regional offices regarding whether such contracts are in use. We found that FEMA’s use of acquisition support contracts was limited. To assess the adoption of the Post-Katrina Emergency Management Reform Act of 2006 (PKEMRA) contracting reforms and good management practices, we analyzed data from the Federal Procurement Data System-Next Generation (FPDS-NG) to identify contracts awarded by offices principally involved in planning for or responding to disasters. Because the PKEMRA contracting reforms only apply to disaster support contracts, we focused on contracting offices most likely to award such contracts. 
We identified the contracting offices based on our analysis of FEMA's obligations in FPDS-NG, which we confirmed with senior FEMA contracting officials. These included contracting offices responsible for response, recovery, and logistics in FEMA's headquarters and the contracting offices in FEMA's 10 regions, which award contracts for both disaster and non-disaster support efforts. From these offices, we selected a non-representative sample of 27 contracts and task orders with obligations in fiscal years 2013 and 2014 and confirmed that they supported disaster response efforts. Our selection process was as follows: Sixteen of the contracts and orders were selected using a stratified random sample that reflected key elements of the PKEMRA contracting reforms. These included contracts and orders that were (1) not competed and justified based on an unusual and compelling urgency; (2) not awarded in advance through indefinite-delivery indefinite-quantity (IDIQ) contracts, to understand how decisions regarding advance and local contracts were made; and (3) for the products and services on which FEMA obligated the most money during the time period, to understand how FEMA spends the majority of its contracting dollars for disaster support. The other eleven contracts and orders were selected to reflect the regional offices we visited, based on factors including their representation of PKEMRA elements, such as local contracts in that region, or significant obligations relative to other contracts awarded by the region. We reviewed the contract files for the 27 contracts and task orders to identify the documents related to the PKEMRA requirements, such as justifications for noncompetitive contracts exceeding 150 days and documentation of non-local awards, and compared this information to requirements stated in PKEMRA, the Federal Acquisition Regulation (FAR), and Department of Homeland Security (DHS) and FEMA acquisition guidance.
Although the information collected from our review of contracts is not generalizable to all relevant contracts, it was valuable in supplementing interviews with FEMA contracting officials in the contracting offices most likely to support disaster contracts and from FEMA's 10 regions. To understand the steps taken to implement PKEMRA contracting requirements, we spoke to DHS officials from the Office of the Chief Procurement Officer (OCPO) and FEMA policy officials. We also reviewed FEMA's quarterly reports to Congress on contracting activities, including noncompetitive awards, and spoke to officials responsible for these reports to determine if they accurately reported information identified in the 27 contracts and orders we reviewed. We reviewed FEMA's 2007 report on the categories of products and services most appropriate for advance contracting and compared those categories to FEMA's current lists of available advance contracts to determine the extent to which FEMA has established contracts for the products and services identified in 2007 and how that information is made available in FEMA's training and guidance. We also met with contracting officers in FEMA's 10 regional offices and from the headquarters offices most likely to award contracts supporting disaster relief efforts to discuss their understanding of PKEMRA's requirements. In addition, we met with officials responsible for FEMA's Local Business Transition Team to discuss their role in supporting local vendors. We reviewed documentation related to good management practices, including the FAR and DHS guidance on required contracting reviews. Additionally, we met with contracting officials responsible for most of the contracts and orders we examined to clarify questions we had regarding the contract files. We conducted this performance audit from October 2014 to September 2015 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Homeland Security Appendix III: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Molly Traci, Assistant Director; Jennifer Dougherty; Brett Caloia; LeAnna Parkey; Manuel A. Valverde; Jocelyn Yin; Julia Kennon; Virginia Chanley; John Krump; Emily Bond; and Roxanna Sun made key contributions to this report.
FEMA obligated $2.1 billion in fiscal years 2013 and 2014 for products and services, which included almost $770 million from offices responsible for disaster contracting. Providing disaster relief in a timely manner is essential, while adhering to contracting laws and regulations helps safeguard taxpayer dollars. Following Hurricane Katrina, Congress passed PKEMRA to improve FEMA's disaster contracting. GAO was asked to review FEMA's disaster contracting practices. This report assesses the extent to which FEMA (1) made efforts to build and manage its contracting workforce and structure since PKEMRA, and (2) adopted PKEMRA reforms and demonstrated good management practices for disaster contracting. GAO analyzed data on FEMA's workforce from fiscal years 2005 through 2014, reviewed workforce guidance, and reviewed 27 contracts—including 16 selected through a random sample and 11 through a nonprobability sample based on factors including high cost—to determine the extent to which PKEMRA provisions were met. GAO also met with contracting officials. The Federal Emergency Management Agency (FEMA) has more than tripled the number of contracting officers it employs since Hurricane Katrina in 2005, but it does not have a sufficient process in place to prioritize disaster workload and cohesively manage its workforce. Some of the workforce growth is attributed to the establishment of the Disaster Acquisition Response Team (DART) in 2010, which has the primary mission of deploying to provide disaster contracting support, such as contracting for blankets or debris removal. DART has gradually assumed responsibility for administering the majority of disaster contract spending, but FEMA does not have a process for prioritizing the team's work during disasters. Without such a process, FEMA is at risk of developing gaps in contract oversight during major disasters. 
Further, in 2011, FEMA established an agreement that regional contracting officers would report to headquarters supervisors for technical oversight while continuing to respond to regional supervisors—who have responsibility for administrative duties—for everyday operations. This agreement has led to challenges for FEMA in cohesively managing its workforce, including heightening the potential for an environment of competing interests for the regional contracting officers. Moreover, FEMA has not revisited this agreement on an annual basis, as called for in the agreement. As a result, it does not incorporate lessons learned since its creation 4 years ago. FEMA has not fully implemented 2006 Post-Katrina Emergency Management Reform Act (PKEMRA) contracting reforms, due in part to incomplete guidance.
Background The Davis-Bacon Act (40 U.S.C. 276a) requires workers on federal construction projects valued in excess of $2,000 to be paid, at a minimum, wages and fringe benefits that the Secretary of Labor determines to be prevailing for corresponding classes of workers in the locality where the contract is to be performed. The act covers every contract to which the United States or the District of Columbia is a party, for construction, alteration, or repair of public buildings or public works. The $2,000 threshold for projects covered by the Davis-Bacon Act has not changed since 1935. WHD, within Labor’s Employment Standards Administration (ESA), has responsibility for administering the Davis-Bacon Act through approximately 50 staff in the Washington, D.C., headquarters and in its six regional offices. Its duties include the collection of wage and fringe benefits data on construction projects for the calculation of local prevailing wage rates. For fiscal year 1996, the Congressional Budget Office estimates that $42 billion will be spent on the construction of federal projects. Labor’s Administrative Review Board hears appeals of prevailing wage determinations issued under the Davis-Bacon Act and upheld by the WHD Administrator. The Office of the Solicitor provides legal advice and assistance to Labor personnel relative to the administration and enforcement of the Davis-Bacon Act and represents WHD in Davis-Bacon wage determination cases before the Administrative Review Board. Labor collects wage and fringe benefit data through voluntary participation in a wage survey. Although the survey form does not explicitly so inform participants, failure to supply truthful answers can have serious consequences. It is a crime under federal law (18 U.S.C. 1001) to knowingly submit false data to the government, and it is a crime under federal law (18 U.S.C. 1341) to use the U.S. mail for fraudulent purposes. 
In previous reviews of the Davis-Bacon Act, we raised concerns about the accuracy of Labor’s wage determinations. In 1979, we pointed out that the act appeared to be impractical to administer due to the magnitude of the task of producing an estimated 12,400 accurate and timely prevailing wage determinations. Since then, Labor has implemented regulatory changes that have addressed some of our specific concerns about the process used to determine prevailing wages. For example, rules were changed to generally prohibit (1) including federal contracts in the area wage surveys and (2) mixing prevailing wage data from surveys of urban and rural areas. An additional change has likely resulted in more wage determinations being based on the average wage of an area rather than on the wage specified in area collective bargaining agreements. Technological advances have also improved Labor’s ability to administer the Davis-Bacon wage determination process. Despite these changes, in 1994, we found continuing verification problems with the data that Labor uses to make prevailing wage determinations. The Congress is currently considering separate bills that would either repeal or reform the Davis-Bacon Act. Two bills (S. 141 and H.R. 500) would repeal the Davis-Bacon Act. Two other bills (H.R. 2472 and S. 1183) would reform the act. The two latter bills would change the way that Labor determines and enforces wage decisions in the construction industry. These reform bills include provisions that would (1) increase the $2,000 threshold for construction projects covered by Labor’s wage surveys and (2) expand Labor’s enforcement authority. Thirty-two states have “little Davis-Bacon” laws requiring the payment of prevailing wages on certain state-funded construction projects. Of these, 15 states conduct their own wage surveys as part of their prevailing wage rate determination process.
Three of the remaining 17 states—Connecticut, Kentucky, and Oklahoma—have recently used the federal wage determinations as the basis for the prevailing rates on state construction projects, while the others generally base their wage rates on the union rate. Recent Events Resurface Concerns With Labor’s Process In January 1995, federal Labor and Oklahoma state labor officials received reports about possible inaccuracies in the results of a recent survey conducted by Labor that set prevailing wages for work on certain types of construction projects in the Oklahoma City and Tulsa areas. Based on this information, Labor directed an audit of the wage data used in the Oklahoma City survey. Labor’s review found that inaccuracies did exist in the wage determinations issued for some heavy construction and building construction job classifications in the Oklahoma City area. In March 1995, Labor issued a revised determination for certain heavy construction job classifications for the Oklahoma City area. Labor also reviewed the wage survey data of the remaining heavy construction job classifications for the Oklahoma City and Tulsa areas and issued revised wage rates in May 1995. During this period, believing the initial survey rates to be incorrect, Oklahoma state labor officials independently directed a review of the third-party data used in the contested survey. This investigation detected several occurrences of potentially inaccurate and fraudulent wage and fringe benefit information reported by third parties, which served to increase the federal prevailing wage rate for certain construction job classifications. In July 1995, Labor received the Oklahoma state Labor Commissioner’s report alleging that fraudulent data were submitted and used in the original wage determinations. This report, however, did not challenge the revised wage determinations for heavy construction that Labor had issued in March and May 1995.
In July 1995, Labor then initiated a review of the building construction wage surveys for Oklahoma that had been collected at the same time as the initial heavy construction wage survey data. Based on a determination that potential data verification problems existed in the building construction survey, Labor withdrew its wage determinations for all Oklahoma City building construction job classifications in August 1995. In April 1996, Labor issued new wage determinations for heavy construction based on completely new survey data and for building construction based on complete verification and analysis of previously conducted building construction survey data. The new determinations established prevailing rates that were higher in some instances and lower in others than the wage determinations in place in January 1995. As of mid-May 1996, these new determinations had not been contested. Labor’s Office of the Inspector General is currently surveying the extent to which fraudulent or inaccurate wage data were used by Labor in 1995 to determine prevailing wages under the Davis-Bacon Act in several of Labor’s regions. The study is expected to be completed in the fall of 1996. Labor’s Wage Determination Process Based on Voluntary Survey Participation Labor’s procedures for determining prevailing wages for individual counties or groups of counties are based on a survey of the wages and fringe benefits paid to workers in similar job classifications on comparable construction projects in the particular area. This information is collected through the voluntary submission of data from employers and third parties on construction projects surveyed for each wage determination. Labor’s wage determination process consists of four basic stages: planning and scheduling surveys, conducting the surveys, clarifying and analyzing respondents’ wage data, and issuing the wage determinations. 
In addition, any employer or interested party who wishes to contest or appeal Labor’s final wage determination can do so. Labor encourages the submission of wage information from all employers and third parties, including employee unions and industry associations that are not directly involved with the surveyed projects. In fiscal year 1995, Labor completed about 100 prevailing wage surveys, gathering wage and fringe benefit data from over 37,000 employers and third parties. Labor surveys wages and fringe benefits paid to workers in different job classifications for four basic types of construction (building, residential, heavy, and highway) covering more than 3,000 counties or groups of counties within the United States. Given the large number of prevailing wage determinations and Labor’s limited resources, Labor develops an annual plan to identify those geographic areas or counties for which wage determinations are most in need of revision. (See app. II for a detailed description of Labor’s prevailing wage determination and appeal procedures.) For each area designated for survey, Labor identifies the counties for which the wage determination should be conducted and determines what construction projects will be surveyed. Labor places primary responsibility for the collection and compilation of the relevant wage data on about 30 staff distributed among six Labor regional offices. The survey is distributed to the participant population, which includes the general contractor for each construction project identified as comparable and within the survey’s geographic area. In surveying the general contractors, Labor requests information on subcontractors to solicit their participation. Labor also surveys interested third parties, such as local unions and construction industry associations that are located or active in the survey area. Once the data submissions are returned, the analysts review and analyze the returned wage survey forms—WD-10 wage reporting forms. 
They follow up with the employer or third parties to clarify any information that seems discrepant, inaccurate, or confusing. The analysts then use this information to create computer-generated recommended prevailing wages for key construction job classifications. These recommended prevailing wages are reviewed and approved by Labor’s National Office in Washington, D.C. Labor publishes the Davis-Bacon final wage determinations in printed reports and on its electronic bulletin board, allowing updates to be rapidly communicated to contracting and assisting agencies. Modifications to wage determinations are published in the Federal Register. Any interested party has the opportunity to review or contest, through a written or telephone request, a final wage determination issued by Labor in Washington, D.C. (See app. II for details on Labor’s appeals process.) Weaknesses in Labor’s Procedures Could Lead to Inaccurate Prevailing Wage Rates Labor’s wage determination procedures contain weaknesses that could permit the use of fraudulent or inaccurate data for setting prevailing wage rates. These weaknesses include limitations in the degree to which Labor verifies the accuracy of the survey wage and fringe benefit data it receives, limited computer capabilities and safeguards to review wage data before calculating prevailing wage rates, and an appeals process that may not be well publicized. Labor’s failure to prevent the use of fraudulent or inaccurate data may result in wages and fringe benefits being paid to construction workers that are lower than those prevailing. Erroneous prevailing wage rates could also lead to excessive government construction costs and undermine confidence in the system among survey respondents, reducing their future participation. 
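The report does not spell out the computation behind the recommended prevailing wages, but under Labor’s regulations a single rate paid to a majority of the surveyed workers in a job classification prevails, and a weighted average of reported rates is used otherwise. The sketch below is a simplified illustration of that rule, not Labor’s actual system; the data layout, function name, and tie-handling are assumptions.

```python
from collections import Counter

def prevailing_wage(observations):
    """observations: list of (wage_rate, worker_count) pairs for one
    job classification in one survey area."""
    total = sum(count for _, count in observations)
    by_rate = Counter()
    for rate, count in observations:
        by_rate[rate] += count
    # Majority rule: if one rate covers more than half the workers,
    # that rate prevails.
    top_rate, top_count = by_rate.most_common(1)[0]
    if top_count > total / 2:
        return top_rate
    # Otherwise fall back to the weighted average of all reported rates.
    return sum(rate * count for rate, count in by_rate.items()) / total
```

For example, if 6 of 10 surveyed carpenters were reported at $20.00 an hour, $20.00 would prevail; a 5-to-5 split between $20.00 and $18.00 would instead yield the $19.00 weighted average.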
Verification of Wage Data Largely Limited to Telephone Contacts Labor’s regional staff rely primarily on telephone responses from employers or third parties to verify the information received on the WD-10 wage reporting forms. Staff in Labor’s regional offices that have Davis-Bacon operations reported that most of their verifications of data submissions—clarifications concerning accuracy, appropriateness, or inclusion—were conducted by telephone. Labor’s procedures also do not require and Labor staff rarely request supporting documentation—for example, payroll records—that supplement the WD-10 wage reporting forms submitted by employers. Labor officials and staff told us that if an employer insists that the wages reported are accurate, Labor’s wage analysts generally accept what is communicated verbally by telephone. Analysts conduct telephone verification with the employer on all third-party data that appear to be inaccurate. For example, when employers and third parties submit wage information on the same project, verification is conducted by contacting the employer in the event of a discrepancy between data received from the employer and a third party. However, Labor officials and staff told us that, before August 1995, there was no requirement to contact the employer regarding the verification of third-party data. Typically, if there was some question regarding third-party data, staff generally resolved the matter by contacting the third party only, rather than verifying the information with the employer. Labor headquarters officials also said that because of resource constraints, regional staff do not conduct on-site inspections or reviews of employer payroll records to verify wage survey data. In recent years, Labor has reduced the number of staff allocated to Davis-Bacon wage-setting activities.
For example, the number of staff in Labor’s regional offices assigned to the Davis-Bacon wage determination process—who have primary responsibility for the wage survey process—decreased from a total of 36 staff in fiscal year 1992 to 27 staff in fiscal year 1995, and Labor officials in one region also told us that staff had only received two training courses in the last 6 years. Labor’s regional staff told us that this staff decline has challenged their ability to collect and review wage survey data for accuracy and consistency. Limited Computer Capabilities Hinder Ability to Detect Erroneous Data Labor officials reported a lack of both computer software and hardware that could assist wage analysts in their reviews. They said that Labor staff depend on past experience and eyeballing the wage data for accuracy and consistency. For example, Labor offices do not have computer software that could detect grossly inaccurate data reported in Labor’s surveys. Regional staff reported only one computer edit feature in the current system that could eliminate duplicate entry of data received in the wage surveys. As a result, several review functions that could be performed by computers are conducted by visual reviews by one or more wage analysts or supervisory wage analysts in Labor’s regional offices. Labor’s ability to review wage survey data is also hindered by a lack of up-to-date computer hardware. For example, in the Atlanta and Philadelphia regional offices, most of the computer hardware is old and outdated. In these offices, because of the computers’ limited memory capabilities, Labor staff told us that they are unable to store historical data on prior wage determinations that would allow wage analysts to compare current with prior recommendations for wage determinations in a given locality. These limitations could be significant given the large number of survey forms received and the frequency of errors on the WD-10 reporting forms. 
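Automated edit checks of the kind the report describes as missing—eliminating duplicate entries and screening for grossly inaccurate rates—can be sketched in a few lines. This is an illustration only; the field names, the median-based screen, and the 50-percent tolerance are assumptions, not Labor’s procedures.

```python
import statistics

def edit_checks(submissions):
    """submissions: list of dicts with 'employer', 'project', 'class',
    and 'rate' keys. Returns (deduped, flagged), where flagged entries
    have rates far from the median for their job classification."""
    seen, deduped = set(), []
    for sub in submissions:
        key = (sub["employer"], sub["project"], sub["class"])
        if key in seen:
            continue          # drop duplicate entry of the same data
        seen.add(key)
        deduped.append(sub)

    rates_by_class = {}
    for sub in deduped:
        rates_by_class.setdefault(sub["class"], []).append(sub["rate"])

    flagged = []
    for sub in deduped:
        rates = rates_by_class[sub["class"]]
        if len(rates) < 3:
            continue          # too few observations to judge
        med = statistics.median(rates)
        if med and abs(sub["rate"] - med) / med > 0.5:
            flagged.append(sub)  # more than 50% from the median: review
    return deduped, flagged
```

A screen like this would not replace the analysts’ judgment, but it would surface the grossly inaccurate submissions that currently depend on visual review to catch.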
In 1995, Labor received wage data from over 37,000 employers and third parties, and Labor staff reported that submissions with some form of data error were quite common. The frequency of errors could be caused in part by employer confusion in completing the wage reporting forms. Depending on the employer’s size and level of automation, completing the WD-10 reporting forms could be somewhat difficult and time consuming. For example, the employer must not only compute the hourly wages paid to each worker who was employed on the particular project in a certain job classification but must also determine the time period when the most workers were employed in each particular job classification. Representatives of an employer association, a union, and state labor officials also told us that many smaller, nonunion employers do not have the capability to easily report information on the WD-10 wage reporting forms. Although Labor staff reported that wage surveys with data errors are fairly common, agency officials believe that it is very unlikely that erroneous wage data went undetected and were used in the prevailing wage determination. They said that a key responsibility of Labor’s wage analysts is to closely scrutinize the WD-10 wage reporting forms and contact employers as necessary for clarification. Labor officials contended that, over time, this interaction with employers and third parties permitted Labor staff to develop considerable knowledge of and expertise in the construction industry in their geographic areas and to easily detect wage survey data that are inaccurate, incomplete, or inconsistent. Although Labor officials also acknowledged that additional staff, enhanced computer capabilities, and the provision of more training and outreach to employers and third parties on how to participate in the surveys could improve their review of wage survey data and reduce errors, they said that all these options require additional resources that are currently unavailable.
Lack of Awareness of the Appeals Process May Limit Its Effectiveness Labor’s regulations provide any interested party, such as an employee, employer, or contractor, or representatives of associations or unions, the opportunity to request a reconsideration of Labor’s prevailing wage determinations. A formal request for reconsideration must be in writing and accompanied by a full statement of the interested party’s views and any supporting wage data or other pertinent information. Instead of formally requesting a reconsideration, an interested party may make informal inquiries by telephone or in writing for quick resolution of questions about wage determinations. Labor’s regional officials handle informal inquiries about wage determinations, with WHD’s National Office staff getting involved only in the formal reconsiderations. Labor reported that most inquiries on its wage determinations are informal and are generally resolved quickly over the telephone at the regional offices. If an informal inquiry is not resolved to the satisfaction of the interested party, he or she may submit a formal request for reconsideration to either the regional or National Office. On formal requests for reconsideration, regional offices may or may not make recommendations before referring them to the National Office for a decision. A successful request for reconsideration typically results in Labor modifying an existing determination or conducting a new wage survey. An interested party may appeal an unsuccessful request to Labor’s Administrative Review Board for adjudication. Labor officials said it is extremely rare for anyone to make formal requests for reconsideration of a determination, reporting that there had been only one such case in the last 5 years. Labor officials interpreted this record as refuting complaints about the accuracy and fairness of the prevailing wage determinations issued.
The small number of formal appeals could also be evidence of interested parties’ lack of awareness of their rights and the difficulty they faced in collecting the evidence necessary to sustain a case. Representatives of construction unions and industry trade associations told us that employers were generally unaware of their rights to appeal Labor’s final wage determinations. In addition, officials with the Oklahoma Department of Labor told us that even if an interested party wanted to appeal a wage determination to the National Office and the Administrative Review Board, the length of time it takes to independently verify wage data submissions could discourage such an action. For example, for their 1995 study of wage rates in Oklahoma City, an intra-agency team took 1 month to fully investigate and verify the information for only three construction projects. A private employer or organization wishing to appeal a determination on the basis that the wage information used was inaccurate might experience similar difficulties. Labor officials reported that for an interested party to contest a new wage determination successfully, it must present evidence demonstrating that the survey wage rates do not reflect the pattern of wages paid in a particular area. The amount of evidence to warrant a new survey will vary according to a variety of factors, including the quality of the evidence and the amount of construction activity in an area. Labor officials contended that collecting information to contest a wage survey is not difficult for most interested parties who inquire. They said that most inquiries originate either from contractors who have access to wage rates on their own projects; unions who have access to collective bargaining rates; or project grantees, such as local governments, who have access to the wage rates paid on their other projects. 
However, Labor officials acknowledged that it could be difficult for an interested party to challenge a wage determination on the basis that the wage data submitted by employers were inaccurate. Consequences of Wage Determinations Based on Erroneous Data Wage determinations based on erroneous data could result in wages and fringe benefits paid to workers that are higher or lower than would otherwise be prevailing on federal construction projects. For example, although they considered it unlikely, Labor officials acknowledged that there could be an incentive for third parties, particularly union contractors, to report higher wages than those being paid on a particular construction project. The reporting of higher wages could influence the prevailing wages in a local area toward the typically higher union rate or at least minimize any wage differential between the unionized wage rate and the prevailing wage to be paid on Davis-Bacon construction projects. The use of inaccurate data could also lead to lower wages for construction workers on federal projects than would otherwise be prevailing. Industry association members and officials told us that in several parts of the country, employers, especially nonunion contractors, paid wages on their private projects below the prevailing wage levels specified by the Davis-Bacon Act in their areas. These officials told us that this differential sometimes proved problematic for contractors in retaining their skilled labor force. For example, an official of an employer association told us that an employer who successfully bid on a Davis-Bacon contract but who typically paid wages below the prevailing rate would be required to pay the workers employed on the new project at the higher Davis-Bacon wage rates. Depending on the local labor market conditions, when the project was completed, these workers typically received their pre-Davis-Bacon, lower wages and fringe benefits on any future work. 
In such cases, some employees became disgruntled, believing that they were being cheated, or suffered lower morale that sometimes led to increased staff turnover. Given these conditions, Labor officials acknowledged that an employer in a largely nonunion area who had been paying lower than average wages would have an incentive to “chisel” or report wages and fringe benefits levels somewhat lower than what he or she was actually paying, in an attempt to lower the Davis-Bacon rate. To the extent that the submission of fraudulent or inaccurate data is perceived by the construction industry to be a widespread problem, it could also erode support for survey participation among interested parties. Officials from one industry association reported that despite training classes and other assistance it provides, it was difficult for the association to foster employer survey participation, especially among the nonunion contractors. If the belief that erroneous data underlie Labor’s wage determinations became widespread among participants, the number of survey respondents would likely decrease. Labor’s Short- and Long-Term Initiatives to Improve Wage Determination Process At least partially in response to the problems detected in the Oklahoma wage surveys, Labor has proposed both short- and long-term initiatives to improve the accuracy of the data used in prevailing wage determinations. In August 1995, Labor implemented a procedural change requiring its regional wage analysts to conduct telephone verifications with the employer on all third-party data that appear to be inaccurate or discrepant. In addition, the new policy requires analysts to verify with the employers at least a 10-percent sample of third-party data that appear to be accurate. Under this requirement, Labor staff first attempt to verify this information by telephone with the employer.
If Labor staff are unable to contact the employer, they will then contact the third party to request supporting documentation verifying the submitted wage information on the specific construction project. This new requirement was linked with training for all regional office staff that reemphasized agency procedures for analyzing and verifying employer and third-party data received in its wage surveys. Although the new procedures may improve the accuracy of data received from third parties, Labor’s change does not include enhanced verification of the majority of the data used in most wage determinations; that is, data directly received from employers. In addition, the new procedures do not move toward encouraging the use of Labor’s appeals process. Labor has proposed placing a statement on the WD-10 survey reporting form that would inform respondents that they could be prosecuted if they willfully falsify data in the Davis-Bacon wage surveys. Labor officials solicited comments on this proposal in the Federal Register in February 1996, with the comment period ending in May 1996. Labor has also proposed a long-term strategy to review the entire Davis-Bacon wage determination process. In late 1995, Labor established an ongoing task group to identify various strategies for improving the process it uses to determine prevailing wages. Labor officials held meetings with contractors knowledgeable about the Davis-Bacon prevailing wage determination process. These continuing discussions have led to the identification of various weaknesses in the wage determination process and steps Labor might take to address them. Labor has acknowledged the weaknesses identified by the task group; for example, the system’s vulnerability to manipulation through the submission of false data that can erode the accuracy of some of its wage determinations. 
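The 10-percent verification sample described above is a simple operation to mechanize. The sketch below is a hypothetical illustration; the function name, the use of random sampling, and the at-least-one-entry floor are assumptions, since the report does not say how Labor draws its sample.

```python
import math
import random

def verification_sample(third_party_entries, fraction=0.10, seed=None):
    """Select at least a 10-percent sample of third-party submissions
    for telephone verification with the employer."""
    rng = random.Random(seed)
    # Round up so the sample is always at least the stated fraction,
    # and never empty when any entries exist.
    k = max(1, math.ceil(len(third_party_entries) * fraction))
    return rng.sample(third_party_entries, k)
```

For 25 third-party submissions, this would select 3 for employer verification; even a single submission would still be checked.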
In response, in its fiscal year 1997 budget request, Labor asked for about $4 million to develop, evaluate, and implement alternative reliable methodologies or procedures that will yield accurate and timely wage determinations at reasonable cost. These alternatives include (1) exploring the feasibility of replacing the current labor-intensive wage survey process with econometric models from which occupational wage rates could be extrapolated from existing sources of wage data and (2) privatizing the wage survey process using alternative technologies that would derive prevailing wage rates from a sample design rather than from the universe survey currently used. If such alternatives are not feasible for all localities and occupational job classifications, Labor would focus on enhancing the existing survey process, including the improvement of data verification procedures, the fostering of employer participation, and the expansion of the geographic scope of the Davis-Bacon surveys. Labor anticipates completing its evaluation of the wage determination process in late 1996, and it expects to consider any recommendations that may result from the Office of the Inspector General study, which should be completed about the same time. In the interim, absent any additional action, Labor’s procedures will still contain many of the weaknesses that we have identified. These could result in the use of erroneous data in its determination of prevailing wages. Conclusions Labor’s responsibilities to establish prevailing wage rates have a significant impact on the $42 billion to be spent in fiscal year 1996 in federal construction contract business and the wages paid to construction workers. Although Labor has worked to improve the accuracy of its wage determinations, a lack of confidence still exists with Labor’s process.
Labor has begun to address process weaknesses, including the exploration of alternative reliable methodologies for collecting the information it needs to make the wage determinations. In addition, if it discovers that such alternatives are not feasible for all localities, Labor plans to take other action to improve its existing survey process. Labor’s actions are clearly positive steps; however, what is missing from Labor’s plans is a short-term solution to the existing system’s vulnerability to the use of fraudulent or inaccurate data. Even if Labor obtains the additional funds that it requested to improve its process, it would take some time to identify and implement improvements. In the meantime, Labor would continue to issue new wage determinations and enforce compliance with existing ones that may be based on fraudulent or inaccurate data. Such data can lead to the payment of wages that are either lower than what workers should receive by law or else higher than the actual prevailing wages, which would inflate federal construction costs at the taxpayer’s expense. Although we have not established the extent to which such data have been used, the system’s long-standing vulnerabilities and a lack of confidence by some critics in the accuracy of the wage determinations suggest that immediate changes are in order. We believe that Labor needs to improve its verification of wage data submitted by employers—similar to the change it made in verification of third-party submissions. More specifically, Labor should apply to employer submissions a comparable approach of selecting a sample for more intensive review. When a submission is selected for review, Labor should ask the employer to provide additional documentation supporting the wage data.
Although Labor has indicated that it is unable to do more intensive verification because of limited staff resources, Labor does not appear to have explored proposals to target its scarce resources more effectively toward verification efforts. Labor also needs to revisit its procedures to appeal wage determinations to improve their accessibility. At a minimum, it needs to publicize the availability of the appeals process to all interested parties and the rights of those parties to obtain information on the data used to develop the wage determinations believed to be questionable. For example, Labor could place a statement either with its issuance of each wage determination, on its wage survey forms, or in some other manner, informing interested parties of their rights to request summary information on a wage determination (that is, the WD-22a construction project wage summary report) and of the procedures for initiating an appeal. Recommendations While Labor continues in the long term to evaluate the Davis-Bacon wage determination process, we recommend that the Secretary of Labor require the Assistant Secretary for Employment Standards to request a sample of participating employers to submit appropriate documentation on their data submissions or to conduct a limited number of on-site inspection reviews of employer wage data. Because Labor’s appeals process can serve as an additional internal control to guard against the use of fraudulent or inaccurate data in the wage determination process, we recommend that the Secretary of Labor require the Assistant Secretary for Employment Standards to inform employers, unions, and other interested parties of their rights to request summary information on a wage determination and of the agency’s procedures for initiating an appeal of a wage determination. 
Agency Comments The Department of Labor concurred with our recommendations and stated that it is developing an action plan, consistent with available resources, to implement the recommendations while it continues to evaluate longer-term revisions to the Davis-Bacon wage determination process. However, Labor disagreed with our characterization of a “pervasive” lack of confidence in the wage determinations on the part of employers and other affected parties. We agree with Labor’s comment and deleted that characterization from our conclusions. Labor also provided information to clarify the report’s chronology regarding action to correct wage determinations in Oklahoma, and we have revised our description as necessary. Finally, Labor provided some technical corrections, which we incorporated as appropriate in the report. Labor’s comments are included in appendix III. As agreed with your office, we are sending copies of this report to the appropriate congressional committees, the Secretary of Labor, the Assistant Secretary of the Employment Standards Administration, WHD officials in Atlanta and Philadelphia, and the respective regional offices that participated in our telephone survey. We will also make copies available to others on request. Major contributors to this report are listed in appendix IV. Objectives, Scope, and Methodology We were asked by the Chairmen of the Subcommittee on Oversight and Investigations and the Subcommittee on Workforce Protections of the House Committee on Economic and Educational Opportunities to study potential weaknesses in the process that the Department of Labor uses to make prevailing wage determinations under the Davis-Bacon Act of 1931. 
More specifically, the objectives of our review were to (1) identify the steps used by Labor to collect data and determine and report the prevailing wages to be paid on federally funded construction projects, (2) determine whether specific weaknesses in the process could have resulted in the use of inaccurate or fraudulent data in its prevailing wage determinations, and (3) assess the extent to which Labor is addressing any identified process weaknesses. Methodology To respond to this request, we collected information from Labor’s WHD on the Davis-Bacon prevailing wage determination process, including Labor’s survey procedures and its implementing regulations. To understand the procedures for collecting, determining, and reporting survey wage data, we interviewed Labor officials and staff in Washington, D.C.; Atlanta; and Philadelphia. We also surveyed staff in Labor’s six regions with Davis-Bacon operations to ascertain the procedures used to review and verify wage and fringe benefits survey data. We also collected information from these regions on the procedures available to appeal Labor’s wage determinations and the frequency of those appeals. We also obtained the views of representatives of individual employers, construction unions, and industry associations regarding potential weaknesses in Labor’s wage determination process and on possible options to address those weaknesses. For example, we spoke with representatives from the Associated Builders and Contractors, Incorporated; the International Union of Operating Engineers; and the National Alliance for Fair Contracting to obtain the views of groups most directly affected by the administration of the Davis-Bacon Act. In addition, we spoke with representatives from the State of Oklahoma Department of Labor and the F.W. Dodge Division of McGraw-Hill Information Systems. 
We also reviewed the social science literature on the Davis-Bacon Act, focusing on those articles that addressed issues on the wage determination process. Analysis of the Wage Determination Process To identify the steps in the survey and wage determination process, we reviewed Labor’s documents, including the Davis-Bacon Construction Wage Determinations Manual of Operations, and Davis-Bacon training materials for wage analysts. We interviewed Labor officials at the National Office in Washington, D.C., to clarify our understanding of the policies and procedures used in the wage determination process and to obtain information on changes to the process implemented in 1995. We visited Labor’s regional offices in Atlanta and Philadelphia, where we interviewed the regional administrators and wage specialists and analysts about how the process works and obtained their perspectives on weaknesses in the process. We chose to conduct on-site visits in the Atlanta and Philadelphia regional offices on the basis of regional personnel experiences in the prevailing wage process, the dollar value of federal construction, and the degree of unionization. The remaining four regional offices were contacted by telephone and questioned about specific aspects of the survey process dealing with data integrity and the appeals process. Analysis of Recent Changes to the Process We reviewed Labor’s recent procedural changes to improve the Davis-Bacon wage determination process. We also spoke with federal Labor officials in the Washington, D.C., office about efforts recently instituted as well as actions being considered. We also reviewed Labor’s 1996 draft report of proposals to evaluate ways to improve the full Davis-Bacon prevailing wage determinations process, including Labor’s fiscal year 1997 budget request for additional funds to contract for the development, evaluation, and implementation of alternative reliable methodologies. 
Labor’s recent changes to the wage determination process were assessed on the basis of their potential to strengthen those areas we found potentially vulnerable to the inclusion of inaccurate or fraudulent data. We did not evaluate the actual impact of these changes. Limitations of Our Review Because we limited our analysis of the wage survey and determination process to issues directly related to the detection of inaccurate or fraudulent data, we did not attempt to determine the extent to which any identified weaknesses in Labor’s process were actually contributing to inaccurate prevailing wage determinations. We also did not verify the accuracy of the wage determination data Labor used or explore the adequacy of Labor’s survey response rates or its calculation of prevailing wages. Labor’s Wage Determination and Appeals Process Under the Davis-Bacon Act The Davis-Bacon Act requires that workers employed on federal construction contracts valued in excess of $2,000 be paid, at a minimum, wages and fringe benefits that the Secretary of Labor determines to be prevailing for corresponding classes of workers employed on projects that are similar in character to the contract work in the area where the construction takes place. To determine the prevailing wages and fringe benefits in various areas throughout the United States, Labor’s WHD periodically surveys wages and fringe benefits paid to workers in four basic types of construction (building, residential, highway, and heavy). Labor has designated the county as the basic geographic unit for data collection, although Labor also conducts some surveys setting prevailing wage rates for groups of counties. Wage rates are issued for a series of job classifications in the four basic types of construction, so each wage determination requires the calculation of prevailing wages for many different trades, such as electrician, plumber, carpenter, and drywall installer. 
For example, the prevailing wage rates for the Washington, D.C., metropolitan area include wage rates for 143 different construction trade occupations. Because there are over 3,000 counties, more than 12,000 surveys could be conducted each year if every county in the United States were surveyed. In fiscal year 1995, Labor completed about 100 prevailing wage surveys, gathering wage and fringe benefit data from over 37,000 employers and interested parties. As shown in figure II.1, Labor’s wage determination process consists of four basic stages: planning and scheduling surveys of employers’ wages and fringe benefits in similar job classifications on comparable construction projects; conducting surveys of employers and third parties, such as representatives of unions or industry associations, on construction projects; clarifying and analyzing respondents’ data; and issuing the wage determinations. In addition, an interested party, such as a contractor; labor union; or federal, state, or local agency, may seek review and reconsideration of Labor’s final wage determinations through an appeals process. Stage 1: Planning and Scheduling Survey Activity Labor annually identifies the geographic areas that it plans to survey. Because it has limited resources, a key task of Labor’s staff is to identify those counties and types of construction most in need of a new survey. In selecting areas for inclusion in planned surveys, the regional offices establish priorities based on criteria that include the need for a new survey based on the volume of federal construction in the area; the age of the most recent survey; and requests or complaints from interested parties, such as state and county agencies, unions, and contractors’ associations. 
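The report describes the selection criteria but no formal weighting; the regional offices' prioritization is judgmental. Purely as an illustration of how such criteria might be combined into a ranking, the following sketch scores county/construction-type candidates. All field names and weights are hypothetical, not Labor's actual method.

```python
def survey_priority(area, weights=(0.5, 0.3, 0.2)):
    """Hypothetical priority score for a county/construction-type pair.

    `area` holds the three criteria named in the text:
      federal_volume   - dollar volume of federal construction (normalized 0-1)
      survey_age_years - years since the most recent survey
      complaints       - requests/complaints from interested parties
    """
    w_vol, w_age, w_cmp = weights
    # Cap the age and complaint contributions so one factor cannot dominate
    age_score = min(area["survey_age_years"] / 10.0, 1.0)
    cmp_score = min(area["complaints"] / 5.0, 1.0)
    return w_vol * area["federal_volume"] + w_age * age_score + w_cmp * cmp_score

# Rank candidate survey areas, highest priority first
candidates = [
    {"name": "County A", "federal_volume": 0.8, "survey_age_years": 12, "complaints": 1},
    {"name": "County B", "federal_volume": 0.2, "survey_age_years": 3, "complaints": 0},
]
ranked = sorted(candidates, key=survey_priority, reverse=True)
```

County A, with heavy federal construction and a 12-year-old survey, outranks County B under these illustrative weights.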
If a type of construction in a particular county is covered by a wage determination based on collective bargaining agreements (CBA) and Labor has no indication that the situation has changed such that a wage determination should now reflect nonunion rates, an updated wage determination may be based on updated CBAs. The unions submit their updated CBAs directly to the National Office. The Regional Survey Planning Report Shows Where Federally Financed Construction Is Concentrated Planning begins in the third quarter of each fiscal year when the National Office provides regional offices with the Regional Survey Planning Report (RSPR). The RSPR provides data obtained under contract with the F.W. Dodge Division of McGraw-Hill Information Systems. The data show the number and value of active construction projects by region, state, county, and type of construction and give the percentage of total construction that is federally financed. Labor uses the F.W. Dodge data because they comprise the only continuous nationwide database on construction projects. Labor supplements the F.W. Dodge data with additional information provided to the National Office by federal agencies regarding their planned construction projects. The RSPR also includes the date of the most recent survey for each county and whether the existing wage determinations for each county are union, nonunion, or a combination of both. Using this information, the regional offices, in consultation with the National Office, designate the counties and type of construction to be included in the upcoming regional surveys. Although Labor usually designates the county as the geographic unit for data collection, in some cases more than one county is included in a specific data gathering effort. The regional offices determine the resources required to conduct each of the priority surveys. 
When all available resources have been allocated, the regional offices transmit to the National Office for review their schedules of the surveys they plan to do: the types of construction, geographic areas, and time periods that define each survey. When Labor’s National Office approves all regional offices’ preliminary survey schedules, it assembles them in a national survey schedule that it transmits to interested parties, such as major national contractor and labor organizations, for their review and comment. The National Office transmits any comments or suggestions received from interested parties to its affected regional offices. Organizations proposing modifications of the schedule are requested to support their perceived need for alternative survey locations by providing sufficient evidence of the wages paid to workers in the type of construction in question in the area where they want a survey conducted. Each Regional Office Obtains a File of Active Projects That Match Its Survey Objectives The target date for establishing the final fiscal year survey schedule is September 15. Once the National Office has established the final schedule, each regional office starts to obtain information it can use to generate lists of survey participants for each of the surveys it plans to conduct. Each regional office contacts Construction Resources Analysis (CRA) at the University of Tennessee. CRA applies a model to the F.W. Dodge data that identifies all construction projects in the start-up phase within the parameters specified in the regional office’s request and produces a file of projects that were active during a given time period. The time period may be 3 months or longer, depending on whether the number of projects active during the period is adequate for a particular survey. F.W. Dodge provides information on each project directly to the regional offices. The F.W. 
Dodge reports for each project include the location, type of construction, and cost of the project; the name and address of the contractor or other key firm associated with each project; and if available, the subcontractors. Analysts Screen Projects to Determine Those to Be Surveyed When the F.W. Dodge reports are received by the regional offices, Labor analysts screen them to make sure the projects meet four basic criteria for each survey. The project must be of the correct construction type, be in the correct geographic area, fall within the survey time frame, and have a value of at least $2,000. In addition to obtaining files of active projects, Labor analysts are encouraged to research files of unsolicited information that may contain payment evidence submitted in the past that is within the scope of a current survey. Stage 2: Conducting Surveys of Participants Regional Offices Conduct the Surveys When the regional offices are ready to conduct the new surveys, they send the WD-10 wage reporting form to each contractor (or employer) identified by the F.W. Dodge reports as being in charge of one of the projects to be surveyed, together with a transmittal letter that requests information on any additional applicable projects the contractor may have. (See figs. II.2, II.4, and II.5.) Every WD-10 that goes out for a particular project has on it a unique project code, the location of the project, and a description of it. Data requested on the WD-10 include a description of the project and its location, in order to assure the regional office that each project for which it receives data is the same as the one it intended to have in the survey. The WD-10 also requests the contractor’s name and address; the value of the project; the starting and completion dates; the wage rate, including fringe benefits, paid to each worker; and the number of workers employed in each classification during the week of peak activity for that classification. 
The week of peak or highest activity for each job classification is the week when the most workers were employed in that particular classification. The survey respondent is also asked to indicate which of four categories of construction the project belongs in. Detailed instructions appear on the back of the WD-10. (See fig. II.5.) Survey Is Announced to Third Parties In addition, about 2 weeks before a survey is scheduled to begin, regional offices send WD-10s and transmittal letters to a list of third parties, such as national and local unions and industry associations, to encourage participation. (See fig. II.3.) Labor encourages the submission of wage information from third parties, including unions and contractors’ associations that are not the direct employers of the workers in question, in an effort to collect as much data as possible. Third parties that obtain wage data for their own purposes may share it with Labor without identifying specific workers. For example, union officials need wage information to correctly assess workers’ contributions toward fringe benefits. Third-party data generally serve as a check on data submitted by contractors if both submit data on the same project. Regional offices also organize local meetings with members of interested organizations to explain the purpose of the surveys and how to fill out the WD-10. Because the F.W. Dodge reports do not identify all the subcontractors, both the WD-10 and the transmittal letter ask for a list of subcontractors on each project. Subcontractors generally employ the largest portion of on-site workers, so their identification is considered critical to the success of the wage survey. Analysts send WD-10s and transmittal letters to subcontractors as subcontractor lists are received. 
Participants Who Submit Data Receive a Written Acknowledgment Transmittal letters also state that survey respondents will receive an acknowledgment of data submitted, and that they should contact the regional office if one is not received. Providing an acknowledgment is intended to reduce the number of complaints that data furnished were not considered in the survey. Labor analysts send contractors who do not respond to the survey a second WD-10 and a follow-up letter. If they still do not respond, analysts attempt to contact them by telephone to encourage them to participate. Stage 3: Clarifying and Analyzing Respondents’ Data Analysts Review the Data Submitted as They Receive Them As the Labor wage analysts receive the completed WD-10s in the regional offices, they review and analyze the data. Labor’s training manual guides the analyst through each block of the WD-10, pointing out problems to look for in data received for each one. Analysts are instructed to write the information they receive by telephone directly on the WD-10 in a contrasting color of ink, indicating the source and the date received. They are instructed to draw one line through the old information so it is still legible. Labor’s wage analysts review the WD-10s to identify missing information, ambiguities, and inconsistencies that they then attempt to clarify or verify by telephone. For example, an analyst may call a contractor for a description of the work done on a project in order to verify that a particular project has been classified according to the correct construction type. An analyst may also call a contractor to ask about the specific type of work that was performed by an employee in a classification that is reported in generic terms, such as a mechanic. 
In that situation, the analyst would specify on the WD-10 whether it is a plumber mechanic or some other type of mechanic to make sure that the wages that are reported are appropriately matched to the occupations that are paid those rates. Similarly, due to variations in area practice, analysts may routinely call to find out what type of work the employees in certain classifications are doing. This is because in some areas of the country some contractors have established particular duties of traditional general crafts, for example carpenters, as specialty crafts that are usually paid at lower rates than the general craft. New Policy Implemented for Verifying Third-Party Data In August 1995, Labor implemented a new policy for verifying third-party data. Where data submitted by third parties present problems, Labor now requires wage analysts to conduct a verification review by telephone of all data from the third party with the employer. In cases where the employer cannot be reached, Labor will accept third-party data only with supporting payroll documentation. Furthermore, the new policy requires analysts to verify with employers a sample of at least 10 percent of the third-party data that appear to present no problems. Data Are Recorded and Tabulated When an analyst is satisfied that any remaining issues with respect to the data on the WD-10s for a particular project have been resolved, the data are recorded and tabulated. The analyst enters them into a computer, which uses the data to generate a Project Wage Summary, Form WD-22a, for reporting survey information on a project-by-project basis. The WD-22a has a section for reporting the name, location, and value of each project; the number of employees who were in each classification; and their hourly wage and fringe benefits. It also has a section for reporting the date of completion or percentage of the project completed, whichever is applicable. (See fig. II.6.) 
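The August 1995 third-party verification policy described above reduces to a simple selection rule: verify every submission that presents problems, plus a random sample of at least 10 percent of the rest. A minimal sketch follows; the submission records and the `has_problems` flag are hypothetical stand-ins for the wage analyst's judgment, and only the sampling rule itself comes from the text.

```python
import math
import random

def select_for_verification(submissions, sample_rate=0.10, seed=None):
    """Pick which third-party WD-10 submissions to verify with employers.

    Every submission flagged as presenting problems is verified, plus a
    random sample of at least `sample_rate` of those that appear clean.
    """
    rng = random.Random(seed)
    problems = [s for s in submissions if s["has_problems"]]
    clean = [s for s in submissions if not s["has_problems"]]
    # "at least 10 percent": round the sample size up, capped at the pool size
    k = min(len(clean), math.ceil(len(clean) * sample_rate))
    return problems + rng.sample(clean, k)
```

For example, with 3 problem submissions and 10 clean ones, the function returns the 3 problem submissions plus 1 randomly sampled clean one.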
Analysts Determine If Data Are Adequate At least 2 weeks before the survey cut-off date, the response rate for the survey is calculated to allow time to take follow-up action if the response rate is determined to be inadequate. For example, WHD operational procedures specify that if data gathered for building or residential surveys provide less than a 25-percent usable response rate or less than one-half of the required key classes of workers, the analyst will need to obtain data from comparable federally financed projects in the same locality. If an analyst has no data on occupations identified by Labor as key classifications of workers for the type of construction being surveyed, he or she is required by Labor’s procedures to call all the subcontractors included in the survey who do that type of work and from whom data are missing, to try to get data. If the analyst still cannot get sufficient data on at least one-half of the required key classes, consideration must be given to expanding the scope of the survey geographically to get more crafts represented. If the overall survey usable response rate is 25 percent or more, data on three workers from two contractors are sufficient to establish a wage rate for a key occupation. After the survey cut-off date, when all valid data have been recorded and tabulated, a final survey response rate is computer-generated. Typically, it takes a WHD analyst 4 months to conduct a survey. Prevailing Wage Rates Are Computer-Generated Once all the valid project data have been entered, the prevailing wage rate for each classification of worker can be computer-generated. If there is a majority of workers paid at a single rate in a job classification, that rate prevails for the classification. The wage rate needs to be the same to the penny to constitute a single rate. If there is no majority paid at the same rate for a particular classification, a weighted average wage rate for that occupation is calculated. 
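The majority-or-weighted-average rule just described can be expressed compactly. A minimal sketch, assuming the tabulated survey data for one job classification reduce to (hourly rate, worker count) pairs; the function itself is illustrative, not Labor's actual software.

```python
from collections import Counter

def prevailing_wage(rates):
    """Determine the prevailing wage for one job classification.

    `rates` is a list of (hourly_rate, worker_count) pairs. If a single
    rate (identical to the penny) covers a majority of workers, that
    rate prevails ("M"); otherwise a weighted average is used ("A").
    """
    total = sum(n for _, n in rates)
    by_rate = Counter()
    for rate, n in rates:
        by_rate[round(rate, 2)] += n   # rates must match "to the penny"
    top_rate, top_count = by_rate.most_common(1)[0]
    if top_count > total / 2:
        return top_rate, "M"           # majority rule
    avg = sum(r * n for r, n in rates) / total
    return round(avg, 2), "A"          # weighted average
```

For instance, 11 of 15 workers at $18.50 yields (18.50, "M"), while an even split between $10.00 and $20.00 yields the weighted average (15.00, "A"). The "M" and "A" codes mirror the rule column of Labor's WD-22 report.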
The prevailing wage rate for each occupation is compiled in a computer-generated comprehensive report for each survey, the Wage Compilation Report, Form WD-22. The WD-22 lists each occupation and the wage rate recommended for that occupation by the regional office. A rule column indicates whether the rate is based on a majority (M) or a weighted average (A), and a column to the left of the rule column provides the number of workers for which data were used to compute each wage rate. (See fig. II.7.) The regional offices transmit survey results to the National Office, which reviews the results and recommends further action if needed. Stage 4: Issuing the Wage Determinations When all its recommendations have been acted upon, the National Office issues the wage determination. These determinations are final. There is no review or comment period provided to interested parties before they go into effect. Access to wage determinations is provided both in printed reports available from the U.S. Superintendent of Documents and on an electronic bulletin board. Modifications to general wage determinations are published in the Federal Register. Labor’s Appeals Process An interested party may seek review and reconsideration of Labor’s final wage determinations. The National Office and the regional offices accept protests and inquiries relating to wage determinations at any time after a wage determination has been issued. The National Office refers all the complaints it receives to the relevant regional offices for resolution. Most inquiries are received informally by telephone, although some are written complaints. Regional office staff said that a majority of those with concerns appear to have their problems resolved after examining the information (collected on a Form WD-22a) for the survey at issue, because they do not pursue the matter further. 
If an examination of the forms does not satisfy them, they are required to provide information to support their claim that a wage determination needs to be revised. The National Office modifies published wage determinations in cases where regional offices, based on evidence provided, recommend that it do so; for example, when it has been shown that a wage determination was the result of an error by the regional office. However, some of those who seek to have wage rates revised are told that a new survey will be necessary to resolve the particular issue that they are concerned about. For example, if the wage rates of one segment of the construction industry are not adequately reflected in survey results due to a low rate of participation in the survey by that segment of the industry, a new survey would be necessary to resolve this issue. An Interested Party May Appeal a Decision of Labor’s WHD Administrator Those who are not satisfied with the decision of the regional office may write to the National Office to request a ruling by Labor’s WHD Administrator. If the revision of a wage rate has been sought and denied by a ruling of Labor’s WHD Administrator, an interested party has 30 days to appeal to the Administrative Review Board for review of the wage determination. The board consists of three members appointed by the Secretary of Labor. The Solicitor of Labor represents WHD in cases involving wage determinations before the Administrative Review Board. A petition to the board for review of a wage determination must be in writing and accompanied by supporting data, views, or arguments. Labor reports that it has had only one appeal with respect to wage determinations in the past 5 years. The result of the appeal was that a contested rate was changed. 
Transmittal Letters and Forms Used in Labor’s Davis-Bacon Prevailing Wage Determination Process Presented below are examples of transmittal letters and forms used in Labor’s Davis-Bacon prevailing wage determination process. Included are examples of (1) the transmittal letter to accompany the Form WD-10 sent to contractors; (2) the transmittal letter to accompany the Form WD-10 sent to interested parties; (3) the front of the Form WD-10 used to collect data on which wage determinations are based and (4) the back of the Form WD-10 with instructions for filling out the front of the form; (5) the Form WD-22a, which provides a summary of data received on a particular project; and (6) the Form WD-22, which is a comprehensive report of the prevailing wage rate for each occupation in a survey. 
Pursuant to a congressional request, GAO reviewed the Department of Labor's efforts to prevent the use of inaccurate wage data for Davis-Bacon Act wage rate determinations, focusing on: (1) the steps Labor follows in collecting and reporting wage data; and (2) weaknesses in Labor's wage determination process. GAO found that: (1) Labor's wage determinations are based on voluntary submissions of wage and benefit data from employers and third parties; (2) such internal control weaknesses as inaccurate wage and fringe benefit data, limited computer capability, and an inaccessible appeals process often lead to increased government construction costs or lower wages and fringe benefits for construction workers; (3) Labor began requiring its regional staff to verify third-party wage survey data in August 1995, but the verification does not address erroneous employer-reported data; (4) Labor does not have sufficient computer resources to automate data collection and verification; and (5) Labor requested $4 million in its fiscal year 1997 budget to develop, evaluate, and implement alternative reliable methodologies that will provide accurate and timely wage determinations at a reasonable cost.
The Long-Term Fiscal Outlook Remains Unsustainable The unified budget deficit declined between fiscal years 2003 and 2007, but this did not change the long-term path: it remains unsustainable. Moreover, while the recent past shows some progress in the annual unified deficit figures, any assessment of the federal government’s long-term fiscal outlook also needs to recognize the fact that the Social Security cash surplus has been used to offset spending in the rest of government for many years. In fiscal year 2007, for example, the “on-budget” deficit—the deficit excluding the Social Security surplus—was $344 billion, more than double the size of the unified deficit of $163 billion. There is a limit to how long the Social Security surplus will offset other spending. The rest of the budget will feel the pressure when the Social Security cash surplus begins to decline starting in 2011—less than 3 years from now. In 2017 the Social Security cash flow turns negative—at that point the choices will be increased borrowing from the public, reduced spending, or increased revenue. These dates call attention to the narrowing window. The real challenge then is not this year’s deficit or even next year’s; it is how to change the current fiscal path so that growing deficits and debt levels do not reach unsustainable levels. By definition something that is unsustainable will stop—the challenge is to take action before being forced to do so by some sort of crisis. Health care costs are growing much faster than the economy, and the nation’s population is aging. These drivers will soon place unprecedented, growing, and long-lasting stress on the federal budget. Absent action, debt held by the public will grow to unsustainable levels. Figure 1 shows GAO’s simulation of the deficit path based on recent trends and policy preferences. 
In this simulation, we start with CBO’s baseline and then assume that (1) all expiring tax provisions are extended through 2018—and then revenues are brought to their historical level as a share of gross domestic product (GDP) plus expected revenue from deferred taxes—(2) discretionary spending grows with the economy, and (3) no changes are made to Social Security, Medicare, or Medicaid. Figure 2 looks behind the deficit path to the composition of federal spending. It shows that the estimated growth in Medicare, Medicaid, and to a lesser extent Social Security leads to an unsustainable fiscal future. In this figure the category “all other spending” includes much of what many think of as “government”—discretionary spending on such activities as national defense, homeland security, veterans health benefits, national parks, highways and mass transit, and foreign aid, plus mandatory spending on the smaller entitlement programs such as Supplemental Security Income, Temporary Assistance for Needy Families, and farm price supports. The growth in Social Security, Medicare, Medicaid, and interest on debt held by the public dwarfs the growth in all other types of spending. Rapidly rising health care costs are not simply a federal budget problem; they are a problem for other levels of government and other sectors. As shown in figure 3, GAO’s fiscal model demonstrates that state and local governments—absent policy changes—will also face large and growing fiscal challenges beginning within the next few years. As is true for the federal budget, growth in health-related spending—Medicaid and health insurance for state and local employees and retirees—is the primary driver of the long-term fiscal challenges facing the state and local governments. These simulations imply that state and local fiscal challenges will add to the nation’s fiscal difficulties and suggest that the nation’s fiscal challenges cannot be remedied simply by shifting the burden from one sector to another. 
If unchanged, the federal government’s increased spending and rising deficits will drive a rising debt burden. At the end of fiscal year 2007, federal debt held by the public exceeded $5 trillion. Figure 4 shows that this growth in the federal government’s debt cannot continue unabated without causing serious harm to the economy. In the last 200 years, only during and after World War II has debt held by the public exceeded 50 percent of GDP. But this is only part of the story. The federal government for years has been borrowing the surpluses in the Social Security trust funds and other similar funds and using them to finance federal government costs. When such borrowings occur, the Department of the Treasury issues federal securities to these government funds that are backed by the full faith and credit of the U.S. government. Although borrowing by one part of the federal government from another does not have the same economic and financial implications as borrowing from the public, it represents a claim on future resources and hence a burden on future taxpayers and the future economy. If federal securities held by those funds are included, the federal government’s total debt is much higher—about $9 trillion as of the end of fiscal year 2007. As shown in figure 5, total federal debt increased over each of the last 4 fiscal years. On September 29, 2007, the statutory debt limit had to be raised for the third time in 4 years in order to avoid being breached; between the end of fiscal year 2003 and the end of fiscal year 2007, the debt limit had to be increased by about one-third. It is anticipated that actions will need to be taken in fiscal year 2009 to avoid breaching the current statutory debt limit of $9,815 billion. While today’s debt numbers are large, they do not represent a measure of all future claims. 
They exclude a number of significant items, such as the gap between currently scheduled Social Security and Medicare benefits and the revenues earmarked for these programs as well as the likely cost of veterans’ health care and a range of other commitments and contingencies that the federal government has pledged to support. For example, the Statement of Social Insurance in the 2007 Financial Report of the United States Government disclosed that as of September 30, 2007, for Social Security and Medicare alone, projected expenditures for scheduled benefits exceed earmarked revenues (i.e., dedicated payroll taxes and premiums) by approximately $41 trillion over the next 75 years in present value terms. Of that amount, $34 trillion is related to Medicare and $7 trillion to Social Security. While Social Security, Medicare, and Medicaid dominate the long-term outlook, policymakers need to look at other policies that limit flexibility—not necessarily to eliminate them but to at least be aware of them and make a conscious decision about them. Several years ago, we developed the term “fiscal exposures” to provide a framework for considering the wide range of responsibilities, programs, and activities that may explicitly or implicitly expose the federal government to future spending. Fiscal exposures vary widely as to source, extent of the government’s legal obligation, likelihood of occurrence, and magnitude. They include not only liabilities, contingencies, and financial commitments that are identified on the balance sheet or accompanying notes, but also responsibilities and expectations for government spending that do not meet the recognition or disclosure requirements for that statement. By extending beyond conventional accounting, the concept of fiscal exposure is meant to provide a broad perspective on long-term costs and uncertainties. 
Fiscal exposures include items such as retirement benefits, environmental cleanup costs, the funding gap in Social Security and Medicare, and the life-cycle cost of fixed assets. Given this variety, it is useful to think of fiscal exposures as lying on a spectrum extending from explicit liabilities to the implicit promises embedded in current policy or public expectations. Many ways exist to assess the long-term fiscal challenge. One quantitative measure is called “the fiscal gap.” This measures the amount of spending cuts or tax increases that would be needed to keep debt as a share of GDP at or below today’s ratio. The fiscal gap is an estimate of the action needed to achieve fiscal balance over a certain time period such as 75 years. Another way to say this is that the fiscal gap is the amount of change needed to prevent the kind of debt explosion shown in figure 4. The fiscal gap can be expressed as a share of the economy or in present value dollars. For example, under our alternative simulation closing the fiscal gap would require spending cuts or tax increases equal to 6.7 percent of the entire economy over the next 75 years, or about $54 trillion in present value terms. To put this in perspective, closing the gap would require an increase in today’s federal tax revenues of more than one-third or an equivalent reduction in today’s federal program spending (i.e., in all spending except for interest on the debt held by the public, which cannot be directly controlled) and maintained over the entire period. Table 1 shows the changes necessary to close the fiscal gap over the next 75 years. Policymakers could phase in the policy changes so that the tax increases or spending cuts would grow over time and allow people to adjust. The size of these annual tax increases and spending cuts would be more than five times the fiscal year 2007 deficit of 1.2 percent of GDP. Delaying action would make future adjustments even larger.
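The "more than one-third" and "more than five times" comparisons above follow from simple ratios. A minimal arithmetic sketch, assuming an illustrative federal revenue baseline of about 18.5 percent of GDP (that baseline is an assumption for illustration, not a figure taken from this testimony, whose exact baseline appears in table 1):

```python
# Illustrative arithmetic for the fiscal gap figures cited above.
# ASSUMPTION: revenues of ~18.5% of GDP, a round stand-in close to
# the historical average; the testimony's precise baseline is in table 1.
fiscal_gap = 6.7        # percent of GDP over 75 years (from the text)
revenue_share = 18.5    # percent of GDP (assumed, for illustration)
deficit_2007 = 1.2      # fiscal year 2007 deficit, percent of GDP (from the text)

required_increase = fiscal_gap / revenue_share   # exceeds one-third
deficit_multiple = fiscal_gap / deficit_2007     # exceeds five

print(f"revenue increase needed: {required_increase:.1%}")
print(f"multiple of FY2007 deficit: {deficit_multiple:.1f}x")
```

Under any plausible revenue baseline near the historical average, the required adjustment lands above one-third of today's revenues, which is the point the testimony is making.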
Under our alternative simulation, waiting even 10 years would require a revenue increase of about 45 percent or noninterest spending cuts of about 40 percent. The gap is too large for the nation to grow its way out of the problem. To be sure, additional economic growth would certainly help the federal government’s financial condition, but it will not eliminate the need for action. The Federal Government’s Long-Term Fiscal Outlook Is Driven Primarily by Health Care The large fiscal gap is primarily the result of spending on Medicare and Medicaid, which continue to consume ever-larger shares of both the federal budget and the economy. Federal expenditures on Medicare and Medicaid represent a much larger, faster-growing, and more immediate problem than Social Security. Medicare and Medicaid are not unique in experiencing rapid spending growth, but instead this growth largely mirrors spending trends in other public health care programs and the overall health care system. A number of factors contribute to the rise in spending, including the use of new medical technology and market dynamics that do not encourage the efficient provision of health care services. Addressing these challenges will not be easy. Health Care Costs Have Outpaced Economic Growth Federal health care spending comprises a myriad of programs, but federal obligations are driven by the two largest programs, Medicare and Medicaid. Spending for these two programs threatens to consume an untenable share of the budget and economy in the coming decades. Figure 6 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. While Social Security will grow from 4.3 percent of GDP today to 5.8 percent in 2080, Medicare and Medicaid’s burden on the economy will more than triple—from 4.7 percent to 15.7 percent of the economy.
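The leverage of "excess cost growth" — spending that grows faster than the economy — compounds over long horizons. A back-of-the-envelope sketch (illustrative only; it isolates the cost-growth channel and ignores the demographic component the full projections include) using the two growth assumptions the testimony cites, about 1 percentage point above GDP per capita in the long-term projections versus roughly 2.5 points in recent experience:

```python
def gdp_share(start_share, excess_growth, years):
    """Spending as a share of GDP when spending grows `excess_growth`
    faster than the economy each year (demographic effects ignored)."""
    return start_share * (1 + excess_growth) ** years

start = 4.7  # Medicare + Medicaid, percent of GDP today (from the text)
projected = gdp_share(start, 0.010, 75)   # ~1 point excess, Trustees-style
recent    = gdp_share(start, 0.025, 75)   # ~2.5 point recent trend
```

At 1 percent excess growth the share roughly doubles over 75 years; at the recent 2.5 percent trend it would more than sextuple, which is why the cost-growth assumption, far more than demographics alone, dominates the long-term outlook.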
Although some of the increased burden is due to the aging of the population, the majority is due to increased costs per beneficiary, some of which is the result of interaction between demographics and health care spending. Consequently, unlike Social Security, which will level off after growing as a share of the economy, Medicare and Medicaid will continue to grow. The projections for Medicaid spending assume a long-term cost growth rate consistent with the long-term growth rate assumption of the Medicare Trustees—GDP per capita plus about 1 percent on average. This growth rate, which would represent a slowing of the current trend, is well below recent historical experience of about 2.5 percent above GDP per capita. The federal government and other public payers are not the only ones facing rapidly rising health care expenses. Private payers face the same challenges. As shown in figure 7, total health care spending from both public and private payers is absorbing an increasing share of our nation’s GDP. From 1976 through 2006, spending on health care grew from about 8 percent of GDP to 16 percent, and it is projected to grow to about 20 percent of GDP by 2016. While growth in public spending strains government budgets, growth in private sector health care costs erodes employers’ ability to provide coverage to their workers and undercuts their ability to compete internationally. When compared with other nations, the United States is an outlier in its high level of health care spending. For example, in 2005, health care accounted for about 15 percent of GDP in the United States, the largest share among the developed nations that are members of the Organization for Economic Co-operation and Development (OECD). The United States also ranks far ahead of other OECD countries in terms of per capita health spending.
In that same year, the United States spent $6,401 per person, a level nearly twice that found in France, Canada, and Germany, and about two and a half times higher than the levels found in Italy, Japan, and the United Kingdom. Despite this higher level of health care spending, the United States still fares poorly on many health measures. Compared to other nations, the United States has above-average infant mortality, below-average life expectancy, and the largest percentage of uninsured individuals. For example, according to the most recent published data from OECD, the United States ranked 27 out of 30 in infant mortality and 24 out of 30 in life expectancy. Systemwide Growth in Health Care Spending Is Driven by Certain Key Factors Public and private health care spending continues to rise because of several key factors, including the following: Medical technology. While new and existing medical technology can lead to medical benefits, in some cases technology can lead to the excessive use of resources. On the one hand, experts agree that technology’s contributions over the past 20 years—new pharmaceuticals, diagnostic imaging, and genetic engineering, among others—have been, on the whole, of significant value to the nation’s health. Such advances in medical science have allowed providers to treat patients in ways that were not previously possible or to treat conditions more effectively. On the other hand, experts note that the nation’s general tendency is to treat patients with available technology even when there is little chance of benefit to the patient and without consideration of costs. Market dynamics. Another cost-containment challenge for all payers relates to the market dynamics of health care compared with other economic sectors. In an ideal market, informed consumers prod competitors to offer the best value.
However, without reliable comparative information on medical outcomes, quality of care, and cost, consumers are less able to determine the best value. Insurance masks the actual costs of goods and services, providing little incentive for consumers to be cost-conscious. Many insured individuals pay relatively little out of pocket for care at the point of delivery because of comprehensive health care coverage. Current federal tax policies encourage such comprehensive coverage, for example, by excluding employers’ contribution for premiums from employees’ taxable income. These tax exclusions represent a significant source of forgone federal revenue and work at cross-purposes to the goal of moderating health care spending. Furthermore, clinicians must often make decisions in the absence of universal medical standards of practice. Under these circumstances, medical practices vary across the nation, as evidenced by wide geographic variation in per capita spending and outcomes, even after controlling for patient differences in health status. Population health. Obesity, smoking, and other population risk factors can lead to expensive chronic conditions, such as diabetes and heart disease. The increased prevalence of such conditions drives spending as the utilization of health care resources rises. For example, one study indicated that the rising prevalence of obesity and higher relative per capita health care spending among obese individuals resulted in 27 percent of the growth in inflation-adjusted per capita health care spending from 1987 through 2001. Addressing these drivers will be a major societal challenge. Solving the problem of the federal government’s escalating health care costs is especially difficult, since changing programs such as Medicare and Medicaid will involve changes, not just within these federal programs, but to our country’s health care system as a whole.
However, many experts have recommended that the federal government could help drive improvement in the health care system. For example, experts note the need for strong financial incentives to overcome a lack of systems—including information systems—to reduce error and reinforce best practices. Medicare—the single largest purchaser of health care services in the United States—could play a more active role in promoting a market that rewards better performance through payment incentives that promote the pursuit of improved quality and efficiency. The Window of Opportunity Is Narrowing Here in the first half of 2008, the long-term fiscal challenge is not in the distant future. The first baby boomers have already retired. (See table 2.) The budget and economic implications of the baby-boom generation’s retirement have already become a factor in CBO’s 10-year baseline projections, and that effect will only intensify as the baby boomers age. As the share of the population over 65 climbs, demographics will interact with rising health care costs. The longer action on reforming health care and Social Security is delayed, the more painful and difficult the choices will become. Simply put, the federal budget is on an unsustainable long-term fiscal path that is getting worse with the passage of time. The window for timely action is shrinking. Albert Einstein reportedly said the most powerful force in the universe is compound interest, and today the miracle of compounding is working against the federal government. After 2011 the Social Security cash surplus—which has cushioned and masked the effect of the federal government’s fiscal policy—will begin to shrink, putting pressure on the rest of the budget. The Medicare Hospital Insurance trust fund is already in a negative cash-flow situation. Demographics narrow the window for other reasons as well. People need time to prepare for and adjust to changes in benefits.
There has been general agreement that there should be no change in Social Security benefits for those currently in or near retirement. If changes are delayed until the entire baby-boom generation has retired, reform becomes much harder and much more expensive. Meeting this long-term fiscal imbalance is the nation’s largest sustainability challenge. Aligning the federal government to meet the challenges and capitalize on the opportunities of the 21st century will require a fundamental review of what the federal government does, how it does it, and how it is financed. Attention should be focused not only on the spending side of the budget but also on the revenue side. Tax expenditures, for example, should be reexamined with the same scrutiny as spending programs. Moving forward, the federal government needs to start making tough choices in setting priorities and linking resources and activities to results. Meeting the nation’s long-term fiscal challenge will require a multipronged approach: bringing people together to tackle health care, Social Security, and the tax system; strengthening oversight of programs and activities, including creating approaches to better facilitate the discussion of integrated solutions to cross-cutting issues; and reengineering and reprioritizing the federal government’s existing programs, policies, and activities to address 21st century challenges and capitalize on related opportunities. There are also some process changes that might help the discussion by increasing the transparency and relevancy of key financial, performance, and budget reporting and estimates that highlight the fiscal challenge. Stronger budget controls for both spending and tax policies to deal with both near-term and longer-term deficits may also be helpful.
As we recently reported, several countries have begun preparing fiscal sustainability reports to help assess the implications of their public pension and health care programs and other challenges in the context of overall sustainability of government finances. European Union members also annually report on longer-term fiscal sustainability. The goal of these reports is to increase public awareness and understanding of the long-term fiscal outlook in light of escalating health care cost growth and population aging, to stimulate public and policy debates, and to help policymakers make more-informed decisions. These countries used a variety of measures, including projections of future revenue and spending and summary measures of fiscal imbalance and fiscal gaps, to assess fiscal sustainability. Last year, we recommended that the United States should periodically prepare and publish a long-range fiscal sustainability report. I am pleased to note that the Federal Accounting Standards Advisory Board (FASAB) is considering possible changes to social insurance reporting and has initiated a project on fiscal sustainability reporting. Mr. Chairman, Senator Grassley, members of the committee—health care may be the principal driver of the long-term fiscal outlook, but that does not mean government should ignore other drivers. Demographics are a smaller component than rapid health care cost growth, but the two interact, and aging is not a trivial contributor to the federal government’s long-term fiscal condition. We have suggested that to right the fiscal path will require discussing health care and Social Security and looking at both the spending and tax sides of the budget. Although these entitlements and revenue drive the overall fiscal trends, it is also important that the federal government look at other programs and activities. 
Reexamining what government does and how it does business can help government meet the challenges of this century, and our work offers some specific and practical steps that Congress can take to help address these long-term challenges. In this effort Congress may find a report we published in December 2007 useful. The report is entitled A Call for Stewardship: Enhancing the Federal Government’s Ability to Address Key Fiscal and Other 21st Century Challenges. Thank you, Mr. Chairman, Senator Grassley, and members of the committee for having me today. We at GAO, of course, stand ready to assist you and your colleagues as you tackle these important challenges. Contacts and Acknowledgments For further information on this testimony, please contact Susan J. Irving, Director, Federal Budget Analysis, Strategic Issues at (202) 512-9142, [email protected], or Marjorie Kanof, Managing Director, Health Care at (202) 512-7114, [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include James Cosgrove, Jay McTigue, Jessica Farb, and Melissa Wolf. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO was asked to provide its views on the long-term fiscal outlook. This statement addresses four key points: (1) the federal government's long-term fiscal outlook is a matter of utmost concern; (2) this challenge is driven primarily by health care cost growth; (3) reform of health care is essential, but other areas also need attention, which requires a multipronged solution; and (4) the federal government faces increasing pressures yet a shrinking window of opportunity for phasing in needed adjustments. GAO's simulations of the federal government's long-term fiscal outlook were updated with the Trustees' 2008 intermediate projections and continue to indicate that the long-term outlook is unsustainable. This update, combined with GAO's analysis of the fiscal outlook of state and local governments, demonstrates that the fiscal challenges facing all levels of government are linked and should be considered in a strategic and integrated manner. Since 1992, GAO has published long-term fiscal simulations of what might happen to federal deficits and debt levels under varying policy assumptions. GAO developed its long-term model in response to a bipartisan request from Members of Congress who were concerned about the long-term effects of fiscal policy. Information about GAO's model and assumptions can be found at http://www.gao.gov/special.pubs/longterm/. Long-term fiscal simulations by GAO, the Congressional Budget Office (CBO), and others all show that despite a decline in the federal government's unified budget deficit between fiscal years 2003 and 2007, it still faces large and growing structural deficits driven primarily by rising health care costs and known demographic trends. Simply put, the federal government is on an unsustainable long-term fiscal path. Although Social Security is important because of its size, over the long term health care spending is the principal driver--Medicare and Medicaid are both large and projected to continue growing rapidly in the future.
Rapidly rising health care costs are not simply a federal budget problem. Growth in health-related spending is the primary driver of the fiscal challenges facing state and local governments as well. Unsustainable growth in health care spending also threatens to erode the ability of employers to provide coverage to their workers and undercuts their ability to compete in a global marketplace. Public and private health care spending continues to rise because of several key factors: (1) increased utilization of new and existing medical technology; (2) lack of reliable comparative information on medical outcomes, quality of care, and cost; and (3) increased prevalence of risk factors such as obesity that can lead to expensive chronic conditions. Addressing health care costs and demographics--and their interaction--will be a major societal challenge. The longer action on reforming health care and Social Security is delayed, the more painful and difficult the choices will become. The federal government faces increasing pressures yet a shrinking window of opportunity for phasing in adjustments. In fact, the oldest members of the baby-boom generation are now eligible for Social Security retirement benefits and will be eligible for Medicare benefits in less than 3 years. Additionally, in addressing this fiscal challenge it will be important to review other programs and activities on both the spending and revenue sides of the budget.
Background The GCPR effort developed out of VA and DOD discussions about ways to share data in their health information systems and from efforts to create electronic records for active duty personnel and veterans. The patients served by VA’s and DOD’s systems tend to be highly mobile. Consequently, their health records may be at multiple federal and nonfederal medical facilities both in and outside the United States. In December 1996, the Presidential Advisory Committee on Gulf War Veterans’ Illnesses reported on many deficiencies in VA’s and DOD’s data capabilities for handling service members’ health information. In November 1997, the President called for the two agencies to start developing a “comprehensive, life-long medical record for each service member.” In August 1998, 8 months after the GCPR project was officially established, the President issued a directive requiring VA and DOD to develop a “computer-based patient record system that will accurately and efficiently exchange information.” The directive further stated that VA and DOD should “define, acquire, and implement a fully integrated computer-based patient record available across the entire spectrum of health care delivery over the lifetime of the patient” and recognized VA and DOD’s effort to “create additional interface mechanisms that will act as bridges between existing systems.” IHS became involved because of its expertise in population-based research and its long-standing relationship with VA in caring for the Indian veteran population, as well as IHS’ desire to improve the exchange of information among its facilities. Each of the three agencies’ health facilities is linked to its agency’s regional database or an IT center: VA has about 750 facilities in 22 regions, DOD has about 600 MTFs in 14 domestic and overseas medical regions, and IHS has 550 facilities in 12 regions.
Currently, these facilities cannot electronically share patient health information across agency lines, and only VA facilities have the capability of sharing certain information across regions. GCPR is not intended to be a separate computerized health information system, nor is it meant to replace VA’s, DOD’s, and IHS’ existing systems. GCPR is intended to allow physicians and other authorized users at the agencies’ health facilities to access data from any of the agencies’ other health facilities by serving as an interface among their health information systems (see fig. 1). As envisioned, the interface would compile requested patient information in a temporary or virtual record while appearing on the computer screen in the format of the user’s system. GCPR would divide health data into 24 categories, or “partitions,” including pharmacy, laboratory results, adverse reactions, vital signs, patient demographics, and doctors’ notes. With this ability to exchange information, GCPR is expected to achieve several benefits, including improving quality of care; providing data for population-based research and public health surveillance; advancing industrywide medical information standards; and generating administrative and clinical efficiencies, such as cost savings. Several management entities share responsibility for GCPR: Military and Veterans Health Coordinating Board: This entity was created to ensure coordination among VA, DOD, and the Department of Health and Human Services (HHS) on military and veteran health matters, particularly as they relate to deployed settings, such as the Persian Gulf. The board also oversees implementation of the President’s August 1998 directive. The board consists of the Secretaries of VA, DOD, and HHS. DOD and VA Executive Council: The council was created to identify and implement interagency initiatives that are national in scope. 
One initiative is to ensure a smooth transfer of information between DOD’s and VA’s health care systems through efforts such as GCPR. The council comprises VA’s Under Secretary for Health, DOD’s Assistant Secretary for Health Affairs, their key deputies, and the Surgeon General of each military branch. GCPR Board of Directors: The board was established to set GCPR programmatic and strategic priorities and secure funding from VA, DOD, and IHS. The board consists of the VA Under Secretary for Health and CIOs for MHS and IHS. GCPR Executive Committee: The Executive Committee sets tactical priorities, oversees project management activities, and ensures that adequate resources are available. The committee membership consists of senior managers from VA, DOD, and IHS. GCPR is managed on a day-to-day basis by a program office staffed by personnel from VA, DOD, IHS, and the project’s prime contractor, Litton/PRC of McLean, Virginia. Litton/PRC is responsible for building, shipping, installing, configuring, and operating the interface and administering site training. Battelle Memorial Institute of Columbus, Ohio, holds contracts for developing medical “reference models,” which allow for the exchange of data among different systems without requiring standardization. Assisting in the project are government-led work groups, which consist of VA, DOD, and IHS employees and Litton/PRC staff. The work groups’ key tasks include acquisition, finance, legal work, marketing, telecommunications, and documenting clinical practices. Time Frames and Cost Estimates Have Expanded, and Expected Benefits Have Been Delayed Throughout the course of the GCPR project, time frames and cost estimates have expanded, and GCPR’s ability to deliver its expected benefits has become less certain. In 1999, initial plans called for GCPR to begin worldwide deployment October 1, 2000, but target dates for intermediate phases, such as testing, were not met, pushing project deployment out to an undefined date. 
For example, completion of testing was originally scheduled for September 2000 but was delayed until August 2002 (see fig. 2). GCPR cost estimates also increased. GCPR was estimated in September 1999 to cost about $270 million over its 10-year life cycle; by August 2000, projections for GCPR stood at $360 million (see table 1). However, GCPR project officials told us that the cost estimates were unreliable and probably understated, in part because some costs—such as computer hardware needed by the project’s contractors—were not included. Other cost estimates, such as those for deployment, could not be verified. In the case of deployment, final decisions affecting costs were not made. By the end of 2000, it became apparent that the benefits described in GCPR project documents and brochures and on its website—including access to comprehensive, life-long patient information—would not be realized in the near future. According to Litton/PRC, preliminary testing of data transfer among selected VA facilities is demonstrating that the GCPR technology works. However, significant issues in sharing comprehensive patient data have not been adequately addressed. For example, while GCPR managers planned to field test 6 of the 24 data partitions, they had no plans for when other partitions would be tested. Moreover, access was to be limited to patient information in VA’s, DOD’s, and IHS’ health information systems; information in other major data sources, such as TRICARE—DOD’s managed care program—and other third-party providers would not be accessible. Access to patient information would be further limited because full deployment of CHCS II—DOD’s new, more comprehensive health information system, currently under development— has been delayed until 2004 as the result of complications such as limited system capacity and slow response time. 
With CHCS II, GCPR would provide access to information on immunizations; allergies; and outpatient encounters, such as diagnostic and treatment codes; as well as to information in CHCS I, DOD’s current system, which primarily includes information on patient hospital admission and discharge, patient medications, laboratory results, and radiology. Providing other anticipated benefits—such as improved quality of patient health records—will also be difficult because GCPR plans do not include steps for correcting long-standing data problems, such as inaccurate data entries. Inadequate Accountability and Planning Compromised GCPR’s Progress The lack of accountability and sound IT project planning—critical to any project, particularly an interagency effort of this magnitude and complexity—put GCPR at risk of failing. The relationships among GCPR’s management entities were not clearly established, and no one entity had the authority to make final project decisions binding on the other entities. As a result, plans for the development of GCPR have not included a clear vision for the project and have not given sufficient attention to technological and privacy and security issues as the effort has moved forward. Lack of Accountability Undermined Agencies’ Commitment to the Project From the outset, decision-making and oversight were blurred across several management entities, compromising GCPR’s progress. The roles and responsibilities of these entities and the relationships among them are not spelled out in the VA-DOD-IHS memorandum of agreement (MOA), and no one entity exercised final authority over the project. The Board of Directors and the Executive Committee did not follow sound IT business practices—such as ensuring agency commitment, securing stable funding, and monitoring the project’s progress—as dictated by federal requirements.
For example, GCPR documents show that VA, DOD, and IHS should provide consistent project funding of 40 percent, 40 percent, and 20 percent, respectively, but DOD has never provided this level of funding and, at times, temporarily withheld funding it had promised. Moreover, the Board of Directors and the Executive Committee did not exercise sufficient oversight, including monitoring, to ensure that the project would be adequately funded. Without agency commitment and sufficient oversight, the project team has been limited in its ability to manage GCPR effectively or efficiently. Unstable funding forced GCPR project managers to develop and issue multiple short-term contracts for work that could have been covered by a single longer-term contract. At one point during our review, project managers told us that the project would end after field-testing because of a lack of adequate funding and a lack of a clear mandate to proceed with full deployment, even though plans called for the project to continue through deployment.

Inadequate Planning Hindered Progress

The three partner agencies never reached consensus on GCPR’s mission and how it would relate to the individual agencies’ missions. In addition, key project documents, such as the MOA establishing GCPR, have not adequately spelled out the project’s goals and objectives. For example, some DOD officials thought GCPR’s mission paralleled the goals and objectives of Presidential Review Directive 5; however, GCPR project managers did not share this understanding and the directive was never adopted as GCPR’s mission. Without an agreed-upon mission with clear goals and objectives, it remained unclear what problem GCPR was trying to solve. This lack of consensus on the project’s mission, goals, and objectives affected the agencies’ dedication of resources. Expecting GCPR to enhance its ability to carry out its mission to provide health care to veterans, VA was providing the most funding to the project.
In contrast, DOD elected to place priority on funding CHCS II, which is estimated to cost several billion dollars because officials believe it will more specifically address the Department’s health mission. GCPR plans have also not sufficiently addressed other critical issues that need to be resolved, such as decisions about key data elements. For example, DOD and IHS use different identifiers to match health records to patients—DOD facilities use Social Security numbers, while IHS facilities use facility-specific health record numbers. Differences such as these complicate the electronic exchange of health information. Further, in the absence of common medical terminology, project personnel, assisted by Battelle, are developing reference models they believe will interpret VA, DOD, and IHS data and present the data in a format understandable to the user—without requiring cross-agency standards. However, GCPR plans have not specified the key tasks for developing these models, their relation to one another, and who should carry them out. As a result, work progressed slowly and rework has been necessary. For example, coordination between the Battelle team and Litton/PRC was, initially, not adequate to ensure that the reference models developed by Battelle would meet Litton/PRC’s technical requirements for developing the interface. Therefore, the models had to be revised. In addition, the MOA and other key project documents did not lay out the specific roles and responsibilities of VA, DOD, and IHS in developing, testing, and deploying the interface. GCPR plans also did not describe how the project would use the agencies’ existing technologies for sharing patient health information and to avoid duplication of effort. 
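The identifier mismatch described above—DOD keying records by Social Security number while IHS uses facility-specific health record numbers—can be illustrated with a minimal sketch. All names, keys, and data below are hypothetical placeholders, not actual GCPR code, and a real crosswalk table would itself have to be painstakingly built and verified.

```python
# Hypothetical sketch (not actual GCPR code) of matching one patient's
# records across systems that key records differently.

# DOD keys records by Social Security number.
dod_records = {
    "123-45-6789": {"source": "DOD", "labs": ["NA 140 mEq/L"]},
}

# IHS keys records by (facility, facility-specific health record number).
ihs_records = {
    ("Facility-07", "HR-0042"): {"source": "IHS", "labs": ["sodium 138 mEq/L"]},
}

# A crosswalk linking IHS record keys to SSNs. Building and validating
# such a mapping is itself a major undertaking the report alludes to.
crosswalk = {
    ("Facility-07", "HR-0042"): "123-45-6789",
}

def gather_patient_records(ssn):
    """Collect every record held for a patient across both systems."""
    results = []
    if ssn in dod_records:
        results.append(dod_records[ssn])
    for ihs_key, mapped_ssn in crosswalk.items():
        if mapped_ssn == ssn and ihs_key in ihs_records:
            results.append(ihs_records[ihs_key])
    return results
```

The sketch shows why the agreement on key data elements matters: without the crosswalk, neither system can even determine that two records belong to the same patient.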
For example, GCPR plans do not discuss VA’s “remote view” capability—which will allow users of VA’s Computerized Patient Record System (CPRS) to simultaneously view health data across multiple facilities—or three of DOD’s health information systems: Theater Medical Information Program (TMIP), Pacific Medical Network (PACMEDNET), and Pharmacy Data Transaction System (PDTS). Finally, a comprehensive strategy to guarantee the privacy and security of electronic information shared through GCPR was not developed. GCPR’s draft privacy and security plan delegates primary responsibility for ensuring privacy and security to more than 1,000 VA, DOD, and IHS local facilities, with few additional resources and little guidance. However, there have been long-standing privacy and security problems within VA’s and DOD’s information systems. For example, weak access controls put sensitive information—including health information—at risk of deliberate or inadvertent misuse, improper disclosure, or destruction. By providing broader access to more users, GCPR may exacerbate these risks. DOD is required by the Floyd D. Spence National Defense Authorization Act for 2001 (P.L. 106-398) to submit to the Congress a comprehensive plan consistent with HHS medical privacy regulations to improve privacy. The act also requires DOD to promulgate interim regulations that allow for use of medical records as necessary for certain purposes, including patient treatment and public health reporting, thus providing DOD the flexibility to share patient health information through a mechanism such as GCPR. The HHS privacy regulations went into effect on April 14, 2001, and contain provisions that require consent to disclose health information before engaging in treatment, payment, or health care operations (45 C.F.R. parts 160-164).
CIOs Change Immediate Focus, but Serious Concerns Remain

Over the past several months, we have provided briefings on our findings to agency and project officials, including the CIOs of VHA and MHS whom we initially briefed in September 2000. Concerned about the lack of progress and the significant weaknesses that we found, the CIOs have begun to exert much-needed oversight. They told us that they are now focusing on “early deliverables” for VA and DOD. To ensure more immediate applicability of GCPR to their missions, VA and DOD’s current priority is to allow VA health care providers to view DOD health data by the end of September 2001. Once this interim effort is completed, the CIOs told us that they plan to resume the broader GCPR project—establishing a link among all three partner agencies’ health information systems. Under the interim effort, as described by the CIOs, certain trigger events, such as a new veteran enrolling for VA medical treatment, will prompt VISTA to contact a central server, which would search the hundreds of CHCS I sites and collect any data on that patient. To help ensure efficient development of the interim effort, VA and DOD now plan to evaluate their existing IT products—such as VA’s remote view capability, which could have the potential to facilitate the retrieval of DOD health data—as well as commercial products to determine if these technologies can be used to electronically transmit data among the agencies’ systems. While we did not conduct an in-depth review of these initiatives, we agree that such an evaluation may allow VA and DOD to reduce or eliminate redundancies because these products have a common aim of sharing patient data. However, it is unclear to what extent the interim effort will be using the GCPR technology—which, according to Litton/PRC, has demonstrated that data can be moved among VA facilities.
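The trigger-event flow the CIOs described—a qualifying event prompting a central server to search every CHCS I site for that patient’s data—can be sketched roughly as follows. The site names, event label, and data are hypothetical placeholders, not the actual interim-effort design.

```python
# Rough, hypothetical sketch of the described interim flow: a trigger
# event causes a central server to poll each CHCS I site and collect
# whatever entries it holds on the patient. Not the actual design.

# Hypothetical CHCS I sites, each holding entries for some patients.
chcs_sites = {
    "site_east": {"P001": ["pharmacy: amoxicillin"]},
    "site_west": {"P001": ["lab: NA 140"], "P002": ["radiology: chest film"]},
}

def on_trigger_event(event_type, patient_id):
    """Central server's handler: on a qualifying event, search all sites."""
    if event_type != "new_va_enrollment":
        return []  # only defined trigger events start a search
    collected = []
    for site_name, patients in chcs_sites.items():
        for entry in patients.get(patient_id, []):
            collected.append((site_name, entry))
    return collected
```

Even this toy version makes the scaling concern visible: every trigger event implies a query fanned out across hundreds of sites, which is one reason response time matters for the effort’s usefulness.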
However, our concerns regarding the usefulness of the information—and the implications for GCPR’s expected benefits—still remain. For example, under the interim effort, the requested information is expected to take as long as 48 hours to be received. In addition, only authorized VHA personnel will have the ability to see CHCS I data from MTFs; health care providers at MTFs will not be able to view health information from VHA—or information from other MTFs. It is also unclear whether all or only selected VA and DOD facilities will have the interim capability now being proposed. IHS will not be included in the interim effort. Moreover, the interim effort will rely on DOD’s aging system, CHCS I, which historically has not been adequate to meet physicians’ needs. CHCS I is primarily limited to administrative information and some patient medical information, such as pharmacy and laboratory results. CHCS I does not include patient information on the health status of personnel when they enter military service, on reservists who receive medical care while not on active duty status, or on military personnel who receive care from TRICARE providers. CHCS I also does not include physician notes made during examinations. In addition, information captured by CHCS I can vary from MTF to MTF. Some facilities, such as Tripler Army Medical Center in Hawaii, have significantly enhanced their CHCS software to respond to the needs of physicians and other system users and to collect patient health information not collected by other facilities. Further, the interim effort will need to address many of the same problems that confronted the broader GCPR effort: Transmitted information will be viewable only as sent; therefore, it will not be computable—that is, it will not be possible to organize or manipulate data for quick review or research. Electronic connectivity among MTFs is limited, and the interim effort does not propose to establish facility-to-facility links.
Currently, only MTFs within the same region and using the same DOD IT hardware can access one another’s data using CHCS I. The requested data will not be meaningful to the VA user unless CHCS’ language is translated into VISTA’s. For example, without interpretation, a VA physician’s VISTA query for a patient’s sodium level would not recognize “NA” (used by DOD) as equivalent to “sodium” (used by VA). Until terms and their context are standardized or the variations are identified, or “mapped,” across all VA and DOD facilities, much of the information could be meaningless to VA physicians. According to VHA’s and MHS’ CIOs, detailed plans and time frames are being prepared for the short-term, interim effort to allow VA to receive available electronic health information in CHCS I. However, as of the end of February 2001, no agreement on the goals, time frames, costs, and oversight for the interim approach has been reached, and no formal plans for the interim project exist. Moreover, revised plans for the broader, long-term GCPR project—including how and when IHS will resume its role in the project—have not been developed. While a draft of this report was being reviewed by the agencies, they developed a new near-term effort which they outlined in their comments. This effort, which revises their interim effort, is intended to address our concerns. However, many of our concerns remain and are addressed in our response to comments from the agencies.

Conclusions

GCPR’s aim to allow health care providers to electronically share comprehensive patient information should provide VA, DOD, and IHS a valuable opportunity to improve the quality of care for their beneficiaries. But without a lead entity, a clear mission, and detailed planning to achieve that mission, it is difficult to monitor progress, identify project risks, and develop appropriate contingency plans to keep the project moving forward and on track.
Critical project decisions were not made, and the agencies were not bound by those that were made. The VA and DOD CIOs’ action to focus on short-term deliverables and to capitalize on existing technologies is warranted and a step in the right direction. However, until problems with the two agencies’ existing systems and issues regarding planning, management, and accountability are resolved, projected costs are likely to continue to increase, and implementation of the larger GCPR effort—along with its expected benefits—will continue to be delayed.

Recommendations for Executive Action

To help strengthen management and oversight of GCPR, we recommend that the Secretaries of VA and DOD and the Director of IHS reassess decisions about the broader, long-term GCPR project, based on the results of the interim effort. If the Secretaries of VA and DOD and the Director of IHS decide to continue with the broader effort, they should direct their health CIOs to apply the principles of sound project management delineated in our following recommendations for the interim effort. For the interim effort, we recommend that the Secretaries of VA and DOD and the Director of IHS direct their health CIOs to take the following actions: Designate a lead entity with final decision-making authority and establish a clear line of authority. Create comprehensive and coordinated plans to ensure that the agencies can share comprehensive, meaningful, accurate, and secure patient health data. These plans should include an agreed-upon mission and clear goals, objectives, and performance measures, and they should capitalize on existing medical IT capabilities.

Agency Comments

VA, DOD, and IHS reviewed and separately commented on a draft of this report. Each concurred with the findings and recommendations. The agencies also provided comments that outline a new near-term effort for GCPR and that aim to clarify GCPR’s purpose.
Additionally, VA, DOD, and IHS provided written technical comments, which we have incorporated where appropriate. The full texts of their comments are reprinted as appendixes II, III, and IV. Regarding our recommendation to establish a clear line of authority, the Secretary of VA committed to meeting with the Secretary of Defense and the Director of IHS to designate a lead entity that will have decision-making authority for the three organizations. He said that once established, that entity will have a clear line of authority over all GCPR development activities. With regard to our recommendation to create comprehensive and coordinated plans for sharing patient health data, the Secretary of VA said he would direct the VHA CIO, in collaboration with VA’s departmentwide CIO, to prepare such plans under the oversight of the lead entity. In response to our recommendation that longer-term GCPR decisions be reassessed based on the results of the interim effort, the Secretary of VA responded that GCPR will be reassessed based on the results of their near-term effort. Additionally, he said that the longer-term strategy will depend to some extent on advances in medical informatics, standards development, and the ability to bring in additional partners. DOD provided similar comments on our recommendation concerning longer-term GCPR decisions and also mentioned that it plans to include the Military Health System Information Management Committee in GCPR oversight. While IHS provided no information on the steps it plans to take to implement our recommendations, it commented, along with VA and DOD, that collaboration is essential to the future of GCPR. Overall, the agencies’ statements, in our view, represent a commitment to oversight and management of GCPR. However, it is much too soon to know whether their commitment will result in a successful project.
VA, DOD, and IHS also provided information that, according to the organizations, is intended to serve as a foundation for assessing GCPR and its progress. The agencies emphasized that GCPR is not intended to carry the whole weight for the service members’ health records and the related health information systems, but instead consists of the agencies’ core health information systems with GCPR handling the transfer and mediation of data. Our report does not suggest that GCPR is a replacement for the agencies’ information systems or that it should carry the weight of the agencies’ patient health information. Rather, our report states that GCPR is intended to create an electronic link that will enable the agencies to share patient data from their separate health information systems. The agencies also provided a clarification of GCPR’s purpose, stating that it will provide a longitudinal record covering service members from the start of their service through their care with VA. VA acknowledges that the challenges the project has presented have led to a scaling back of the initial version of GCPR as described in early project documents, such as budget submissions, contractors’ statements of work, and project plans. These documents indicated that in addition to including IHS, GCPR would permit health care professionals to share clinical information via a comprehensive, lifelong medical record—one that would include information from all sources of care. GCPR was similarly described on GCPR’s home page and during briefings to the Congress and others, such as the National Committee on Vital and Health Statistics. Some documents, such as VA’s Fiscal Year 2001 Performance Plan, have described GCPR as including dependents of service members. To the extent that the agencies agree on the scaled-back description of GCPR, project documents and communications need to reflect this new understanding.
This is, in part, why we recommended that the agencies develop and document a clear, agreed-upon project mission, along with specific goals, objectives, and performance measures. The agencies also provided information on a new near-term effort for GCPR, which they developed while reviewing our draft report. According to the agencies, this revised near-term effort uses the GCPR framework and will provide VA clinicians with DOD data on all active duty members, retirees, and separated personnel. VA and DOD recognize that this one-way flow of information is not perfect but should be a substantial improvement for physicians making medical decisions and enhance the continuity of care for veterans. According to the agencies, the near-term effort is funded through year 2001 and they expect to have initial operating capability by fall 2001. We agree that, if successful, this effort should provide useful information to VA clinicians. In our view, their outline of the new near-term approach indicates that it is only in the concept stage and detailed planning and actual work are just beginning. For example, the agencies note that current data will be sent in “near real-time transmission,” and historical data will be “extracted and transmitted on a predetermined schedule.” But they do not define “near real-time” and “predetermined schedule.” Additionally, the agencies assert that the new near-term effort addresses many of the concerns we raised in the report. However, several of these issues remain and, as we recommended, need to be reassessed at the conclusion of the near-term effort because of their implications for the long-term effort: GCPR—both the near-term and larger efforts—will not provide a longitudinal record because plans call for GCPR to use DOD’s CHCS I for the foreseeable future.
CHCS I, as DOD acknowledges in its comments, was not designed to include patient information on the health status of personnel when they enter military service, on reservists who receive medical care while not on active duty status, or on military personnel who receive care outside MTFs. The meaningfulness of the transmitted data remains in question because the agencies do not plan to standardize or map the differing terminology in their health information systems. As we note in the report, without standardized terminology or mapping, the meaning of certain terms used in medical records may not be apparent to the VA provider requesting the information. For example, unless the context is clear, the meaning of the term “cold” in a medical record may be interpreted as meaning a rhinovirus, a feeling of being cold, or having chronic obstructive lung disease. The agencies also need to more fully address data-specific matters, such as GCPR’s reference modeling, before developing additional hardware and software. Once they reach consensus on these issues, their agreement must be clearly stated in a formalized document—one that is binding on all three partners. Finally, for the project to be successfully deployed, detailed plans on GCPR’s system components and tasks with clear project parameters need to be developed. Until such plans are developed, the agencies’ GCPR efforts cannot be fully assessed. Privacy and security issues are also continuing concerns. DOD states in its comments that it does not intend to delegate responsibility for complying with DOD and federal privacy and security requirements to its local facilities.
However, DOD does not describe how it plans to ensure compliance, raising concerns such as how unintended or unauthorized disclosure or access of information would be prevented when the near-term effort provides selected “data feeds from CHCS I [to] a database to be accessed by VA.” Similarly, VA generally describes how authorized VA staff will access DOD medical records. However, we have concerns about how the two Departments will ensure the privacy and security of patient information given the security weaknesses in their computer systems, which we have repeatedly reported on. In March 2001, we reported that DOD continues to face significant personnel, technical, and operational challenges in implementing a departmentwide information security program, and DOD management has not carried out sufficient program oversight. We included VA’s computer security in our January 2001 High-Risk Series and, in an accompanying report, pointed out persistent computer security weaknesses that placed critical VA operations, including health care delivery, at risk of misuse, fraud, improper disclosure, or destruction. For example, we found that VA has not adequately limited access granted to authorized users, managed user identification and passwords, or monitored access activity—weaknesses that VA’s Inspector General recently testified on. Funding is also a concern. VA states that GCPR’s “success and rate of progression will depend to some extent on the ability to add partners and available funding.” Similarly, DOD states that GCPR program requirements will be funded in accordance with overarching DOD mission priorities. IHS also noted that it faces competing demands for scarce resources. We recognize that each agency has multiple priorities. However, securing adequate and stable funding and determining whether additional partners are needed depends on reliable cost estimates—which can only be determined with well-defined goals and detailed plans for achieving those goals.
As DOD points out in its comments, the 10-year cost estimates for GCPR will continue to be considered unreliable until clear mid- and long-term goals and objectives have been established and agreed to by the three agencies. Each of the three agencies also stated that GCPR may have been judged by the criteria used to assess a standard information system development effort and that doing so understates the complexity of their undertaking. While we believe that the technology exists to support GCPR—particularly the new near-term effort—we agree that GCPR presents unique and difficult administrative challenges. Yet it is this very complexity that calls for thorough planning, interagency coordination, and diligent oversight as well as consistent and regular communication of the project’s status and progress to all stakeholders. Finally, VA noted that it would like to discuss with us certain details in our report with which it did not fully agree but did not disclose in its comments. Throughout the course of the project—and particularly over the past 6 months—we met frequently with the agencies to provide observations on our work and discuss any concerns that were brought to our attention. We are committed to continuing to meet with VA, DOD, and IHS to help in this important endeavor. We are sending this report to the Honorable Anthony Principi, Secretary of Veterans Affairs; the Honorable Donald Rumsfeld, Secretary of Defense; the Honorable Tommy Thompson, Secretary of Health and Human Services; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. Should you have any questions on matters discussed in this report, please contact me at (202) 512-7101. Other contacts and key contributors to this report are listed in appendix V.
Appendix I: Scope and Methodology

To determine the status of the GCPR project, we conducted site visits to VA, DOD, and IHS facilities; interviewed personnel at these locations, representatives of nonfederal health care organizations, and others knowledgeable about computerized linking of disparate health information systems; and reviewed documents relevant to the project. We also consulted with project officials at various times during our audit about the status of our review. We went to a total of nine VA, DOD, and IHS health care facilities in California, Hawaii, Indiana, and Washington, D.C. These sites were judgmentally selected based on a variety of factors, including diversity of system capabilities and size and type of facility, such as major medical centers and small community-based clinics. Therefore, they are not necessarily representative of the agencies’ facilities. During these site visits, we spoke with a variety of facility staff—ranging from a DOD regional medical commander and IHS facility managers to VA administrative personnel—about their experiences using the agencies’ existing health information systems. We also asked them about what additional information and system features they consider to be important in treating patients and conducting population-based research. Further, we talked with facility IT technicians and administrators about their systems’ capabilities and the technical requirements for developing the GCPR interface, and we discussed the potential effect the interface might have on current operations and systems. We interviewed VA, DOD, and IHS officials, primarily from the agencies’ headquarters, involved directly in the GCPR project to obtain specific information about the project’s day-to-day operations and management, including timelines, costs, and technical matters.
We also interviewed personnel from the two primary GCPR contractors—Litton/PRC in McLean, Virginia, and Battelle Memorial Institute of Columbus, Ohio—on the status of the interface development, particularly regarding the reference modeling. We also talked with agency representatives on the GCPR Board of Directors and Executive Committee about the oversight of the project. To obtain additional perspectives about the development of computerized patient record systems, we talked with recognized leaders in the field and visited selected private sector facilities, including Kaiser Permanente, Aurora HealthCare of Wisconsin, and the Regenstrief Institute of Indiana University in Indianapolis. We also talked with officials from the National Committee on Vital and Health Statistics regarding privacy and security issues and the status of the development of HIPAA regulations. Finally, we reviewed many GCPR project documents. These included technical plans, such as the project’s draft privacy and security plan, deployment plans, and other planning documents; cost analyses; Board of Directors and Executive Committee meeting minutes; and other relevant project documents. We conducted our review between March 2000 and April 2001 in accordance with generally accepted government auditing standards.

Appendix II: Comments From the Department of Veterans Affairs

Appendix III: Comments From the Department of Defense

Appendix IV: Comments From the Indian Health Service

Appendix V: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to those named above, the following staff made key contributions to this report: Tonia Johnson, Helen Lew, William Lew, Valerie Melvin, Karen Sloan, and Thomas Yatsco.
In November 1997, the President called for the Department of Veterans Affairs (VA) and the Department of Defense (DOD) to create an interface that would allow the two agencies to share patient health information. By allowing health care providers to electronically share comprehensive patient information, the government computer-based patient record (GCPR) interface should help VA, DOD, and the Indian Health Service (IHS) improve the quality of care for their beneficiaries. But without a lead entity, a clear mission, and detailed planning to achieve that mission, it is difficult to monitor progress, identify project risks, and develop appropriate contingency plans to keep the project moving forward and on track. Critical project decisions were not made, and the agencies were not bound by those that were made. The VA and DOD Chief Information Officers’ (CIO) action to focus on short-term deliverables and to capitalize on existing technologies is warranted and a step in the right direction. However, until problems with the two agencies’ existing systems and issues regarding planning, management, and accountability are resolved, project costs will likely continue to increase and implementation of the larger GCPR effort—and its expected benefits—will continue to be delayed.
Background

The Joint Strike Fighter is DOD’s most expensive aircraft acquisition program. The number of aircraft, engines, and spare parts expected to be purchased, along with the lifetime support needed to sustain the aircraft, mean the future financial investment will be significant. DOD is expected to develop, procure, and maintain 2,443 operational aircraft at a cost of more than $950 billion over the program’s life cycle. The JSF is being developed in three variants for the U.S. military: a conventional takeoff and landing aircraft for the Air Force, a carrier-capable version for the Navy, and a short takeoff and vertical landing variant for the Marine Corps. Beyond its size and cost, the JSF program’s impact is even greater when the number of aircraft expected for international sales (a minimum of 646 aircraft and potentially as many as 3,500) is considered. Finally, because a number of current U.S. aircraft will either be replaced by or used in conjunction with the JSF, the program is critical for meeting future force requirements. The JSF program began in November 1996 with a 5-year competition between Lockheed Martin and Boeing to determine the most capable and affordable preliminary aircraft design. Lockheed Martin won the competition. The program entered system development and demonstration in October 2001. At that time, officials planned on a 10½-year development period costing about $34 billion (this amount includes about $4 billion incurred before the start of system development). By 2003, system integration efforts and a preliminary design review revealed significant airframe weight problems that affected the aircraft’s ability to meet key performance requirements. Weight reduction efforts were ultimately successful but added substantially to program cost and schedule estimates. In March 2004, DOD rebaselined the program, extending development by 18 months and adding about $7.5 billion to development costs.
In total, estimated development costs for the JSF are now about $10 billion more than at the start of system development. In August 2005, DOD awarded a $2.1 billion contract for alternate engine system development and demonstration, of which more than $1 billion has been appropriated to date. Since awarding that contract, DOD’s last three budget submissions have included no funding for the alternate engine program and DOD has proposed canceling it, stating that (1) no net acquisition cost benefits or savings are to be expected from competition and (2) low operational risk exists for the warfighter under a sole-source engine supplier strategy. We have previously reported that DOD’s analysis to support this decision focused only on the potential up-front savings in engine procurement costs. That analysis, along with statements made before this committee last year, inappropriately included costs already sunk in the program and excluded long-term savings that might accrue from competition for providing support for maintenance and operations over the life cycle of the engine. In fiscal year 2007, the program office awarded the first of three annual production contracts to Pratt & Whitney for its F135 engine. Under that acquisition strategy, the program then planned to award noncompetitive contracts to both Pratt & Whitney and to the Fighter Engine Team in fiscal years 2010 and 2011. Beginning in fiscal year 2012, the program planned to award contracts on an annual basis under a competitive approach for quantities beyond each contractor’s minimum sustaining rate. Full-rate production for the program begins in fiscal year 2014 and is expected to continue through fiscal year 2034. The JSF program intends to use a combination of competition, performance-based logistics, and contract incentives to achieve goals related to affordability, supportability, and safety.
Through this approach, the JSF program office hopes to achieve substantial reductions in engine operating and support costs, which traditionally have accounted for 72 percent of a program’s life cycle costs. Recent Decisions by DOD Add to Overall JSF Program Risk Today, we are issuing our latest report on the JSF acquisition program, the fourth as mandated in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005. In our report we acknowledge the challenges in managing such a complex and ambitious acquisition and cite recent progress in refining system requirements, forging production agreements with international partners, and beginning flight testing of the prototype aircraft and a flying test bed. DOD also extended the procurement period for 7 years, reducing annual quantities and the rate of ramp up to full production. These actions somewhat lessened, but did not eliminate, the undue concurrency of development and production we have previously reported. We also report continuing cost increases and development risks resulting from recent decisions by DOD to eliminate test resources to replenish depleted management reserve funds. We expect that DOD will eventually need more money and time to complete development and operational testing, potentially delaying the full-rate production decision now planned for October 2013. We further report that the official program cost estimate before the Congress is not reliable for decision-making, based on our assessment of its estimating methodologies against best practice standards. With almost 90 percent of the acquisition program’s spending still ahead, it is important to address these challenges, effectively manage future risks, and move forward with a successful program that meets our needs and those of our allies.
Program Cost Estimate Increased Since Last Year DOD reported that the total acquisition cost estimate increased by more than $23 billion since our last report in March 2007, and by $55 billion since the program underwent a major restructure in 2004. Recent increases in the procurement cost estimate were principally due to (1) extension of the procurement period by 7 years at lower annual rates; (2) increases to future price estimates based on contractor proposals for the first production lot; and (3) airframe material cost increases. The official development cost estimate remained about the same. However, this was largely achieved by reducing requirements, not fully funding the alternate engine program despite congressional interest in the program, and reducing test resources in order to replenish management reserve funds, which were spent much faster than budgeted. Table 1 shows the evolution in costs, unit costs, quantities, and deliveries since the start of the JSF’s system development and demonstration program. JSF Development Program Faces Increased Risks of Further Cost Increases and Schedule Delays Midway through its planned 12-year development period, the JSF program is over cost and behind schedule. The program has spent two-thirds of its budgeted funding on the prime development contract, but estimates that only about one-half of the development work has been completed. The contractor has extended manufacturing schedules several times, and test aircraft delivery dates have continually slipped. Repercussions from late release of engineering drawings to the manufacturing floor, design changes, and parts shortages continue to cause delays in maturing manufacturing processes and force inefficient production line workarounds. These design and manufacturing problems depleted management reserve funds to an untenable level in 2007.
Facing a probable contract cost overrun, DOD officials decided not to request additional funding and time for development, opting instead to reduce test resources in order to replenish management reserves from $400 million to $1 billion. The decision to replenish management reserves by reducing test resources, known as the Mid-Course Risk Reduction Plan, was ratified by OSD in September 2007. It eliminated two development test aircraft (reducing the total from 15 to 13), reduced flight tests, revised test verification plans, and accelerated the reduction in the prime contractor’s development workforce. Officials from several prominent defense offices objected to specific elements of the plan because of risks to the test program and because it did not treat the root causes of production and schedule problems. We agree with these concerns and believe the mid-course plan should be reevaluated to address them, examine alternatives, and correct the causes of management reserve depletion. The plan significantly increases the risks of not completing development testing on time and not finding and fixing design and performance problems until late into operational testing and production, when it is more expensive and disruptive to do so. It also does not directly address and correct the continuing problems that caused the depletion in management reserves. This increases the risk that development costs will increase substantially and schedules will be further delayed. The flight test program has barely begun, but faces substantial risks with reduced assets as design and manufacturing problems continue to cause delays that further compress the time available to complete development. We expect that DOD will soon have to restructure the JSF program to add resources and extend the development period, likely delaying operational testing, the full-rate production decision, and achievement of initial operational capabilities.
JSF Program Cost Estimate Is Not Reliable We do not think the official JSF program cost estimate is reliable when judged against cost estimating standards used throughout the federal government and industry. Specifically, the program cost estimate: (1) is not comprehensive because it does not include all applicable costs, including $6.8 billion for the alternate engine program; (2) is not accurate because some of its assumptions are optimistic and not supportable—such as applying a weight growth factor only half as large as historical experience on similar aircraft—and because the data system relied upon to report and manage JSF costs and schedule is deficient; (3) is not well documented in that it does not sufficiently identify the primary methods, calculations, results, rationales and assumptions, and data sources used to generate cost estimates; and (4) is not credible according to individual estimates from OSD’s Cost Analysis Improvement Group, the Defense Contract Management Agency, and the Naval Air Systems Command. All three of these defense offices concluded that the official program cost estimate is understated in a range up to $38 billion and that the development schedule is likely to slip from 12 to 27 months. Despite this and all the significant events and changes that have occurred in the 6 years since the start of system development, DOD does not intend to accomplish another fully documented, independent total program life-cycle cost estimate for another 6 years. Twelve years between high-fidelity estimates is not acceptable in our view, especially given the size of the JSF program, its importance to our and our allies’ future force structures, the changes in cost and quantity in the intervening years, and the unreliability of the current estimate. Based on the evidence we collected, we believe a new estimate will likely be much higher than now reported. 
In addition to the higher estimates made by the three independent defense organizations, we determined that: DOD has identified billions of dollars in unfunded requirements that are not in the program office estimate, including additional tooling and procurement price increases. A new manufacturing schedule now being developed indicates continued schedule degradation and further delays to first flights. Both the aircraft and engine development contracts have persistent, substantial cost variances that cost analysts believe are too large and too late in the program to resolve without adding to the budget. The prime contractor and program office are preparing a new estimate of the cost to complete the program, which is expected to be much larger than what is now budgeted. JSF Faces Challenges as Program Moves Forward The first and foremost challenge for the JSF program is affordability. From its outset, the JSF goal was to develop and field an affordable, highly common family of strike aircraft. Rising unit procurement prices and somewhat lower commonality than expected raise concerns that the United States and its allies may not be able to buy as many aircraft as currently planned. The program also makes unprecedented demands for funding from the defense budget—averaging about $11 billion each year for the next two decades—and must compete with other priorities for the shrinking federal discretionary dollar. Figure 1 compares the current funding profile with two prior projections and shows the impact of extending procurement 7 more years to 2034. This reduced mid-term annual budget requirements, but added $11.2 billion to the total procurement cost estimate. Further, informed by more knowledge as the program progresses, DOD doubled its projection of JSF life-cycle operating and support costs compared to last year’s estimate, and its expected cost per flight hour now exceeds that of the F-16 legacy fighter it is intended to replace.
With almost 90 percent (in terms of dollars) of the acquisition program still ahead, it is important to address these challenges, effectively manage future risks, and move forward with a successful program that meets our military needs, as well as those of our allies. Engine Competition Benefits Could Outweigh Costs As we noted in testimony before this committee last year, the acquisition strategy for the JSF engine must weigh expected costs against potential rewards. Without competition, the JSF program office estimates that it will spend $54.9 billion over the remainder of the F135 engine program. This includes cost estimates for completing system development, procurement of 2,443 engines, production support, and sustainment. Primarily because of the money spent on the engine program over the past year, which increased the sunk costs in our calculations, we believe competition could provide an even better return on investment than our previous assessment indicated. An additional investment of between $3.5 billion and $4.5 billion may be required should the Department decide to continue competition. While Pratt & Whitney design responsibilities and associated costs may actually be reduced under a sole-source contract, we remain confident that competitive pressures could yield enough savings to offset the costs of competition over the program’s life. This ultimately will depend on the final approach for the competition, the number of aircraft actually purchased, and the ratio of engines awarded to each contractor. Given certain assumptions with regard to these factors, the additional costs of having the alternate engine could be recouped if competition were to generate approximately 9 to 11 percent savings—about 2 percent less than we estimated previously. According to actual Air Force data from past engine programs, including the F-16 aircraft, we still believe it is reasonable to expect savings of at least that much.
Sole-Source Approach Results in Reduced Upfront Costs The cost of the Pratt & Whitney F135 engine is estimated to be $54.9 billion over the remainder of the program. This includes cost estimates for the completion of system development, procurement of engines, production support, and sustainment. Table 2 shows the costs remaining to develop, procure, and support the Pratt & Whitney F135 engine on a sole-source basis. In addition to development of the F135 engine design, Pratt & Whitney also has responsibility for the common components that will be designed and developed to go on all JSF aircraft, regardless of which contractor provides the engine core. This responsibility supports the JSF program-level requirement that the engine be interchangeable—either engine can be used in any aircraft variant, either during initial installation or when replacement is required. In the event that Pratt & Whitney is made the sole-source engine provider, future configuration changes to the aircraft and common components could be optimized for the F135 engine, avoiding the potentially compromised design solutions or additional costs needed to support both the F135 and the F136, the alternate engine. JSF Engine Competition Could Result in Future Savings The government’s ability to recoup the additional investments required to support competition depends largely on (1) the number of aircraft produced, (2) the ratio that each contractor wins out of that total, and (3) the savings rate that competitive pressures drive. Our analysis last year, and again for this statement, estimated costs under two competitive scenarios: one in which the contractors are each awarded 50 percent of the total engine purchases (a 50/50 split) and one in which there is an annual 70/30 percent award split of total engine purchases to either contractor, beginning in fiscal year 2012.
Without consideration of potential savings, the additional costs of competition total about $4.5 billion under the first scenario and about $3.5 billion under the second scenario. Table 3 shows the additional cost associated with competition under these two scenarios. The disparity in costs between the two competitive scenarios reflects the loss of learning resulting from lower production volume that is accounted for in the projected unit recurring flyaway costs used to construct each estimate. The other costs include approximately $1.1 billion for remaining F136 development and $116 million in additional standup costs, which would be the same under either competitive scenario. Competition may incentivize the contractors to achieve more aggressive production learning curves, produce more reliable engines that are less costly to maintain, and invest additional corporate money in technological improvements to remain competitive. To reflect these and other factors, we applied a 10 to 20 percent range of potential cost savings to our estimates, where pertinent to a competitive environment. Further, when comparing life cycle costs, it is important to consider that many of the additional investments associated with competition are made early in the program’s life cycle, while much of the expected savings do not accrue for decades. As such, we include a net present value calculation (accounting for the time value of money) in the analysis that, once applied, provides a better estimate of the program’s rate of return. When we apply the overall savings expected from competition, our analysis indicates that recoupment of those initial investment costs would occur at a savings rate of somewhere between 9 and 11 percent, depending on the number of engines awarded to each contractor.
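The breakeven logic described above can be sketched in a few lines of code. This is an illustrative model only: the cash-flow profiles and the 3 percent discount rate below are hypothetical placeholders, not the program office data or the GAO discount-rate methodology used in the actual analysis.

```python
# Illustrative breakeven sketch: the savings rate at which the discounted
# extra costs of competition are offset by discounted savings applied to
# baseline engine costs. All figures are hypothetical, not program data.

def npv(cash_flows, rate):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def breakeven_savings_rate(extra_costs, baseline_costs, rate=0.03):
    """Savings rate on baseline costs whose discounted value equals the
    discounted extra cost of standing up and running a competition."""
    return npv(extra_costs, rate) / npv(baseline_costs, rate)

# Hypothetical profiles in billions of dollars: competition's extra costs
# come early, while savings accrue against a long tail of procurement and
# sustainment spending.
extra = [1.5, 1.5, 1.0, 0.5] + [0.0] * 26  # roughly $4.5B up front
base = [0.0] * 4 + [1.8] * 26              # baseline costs savings apply to

print(f"breakeven savings rate: {breakeven_savings_rate(extra, base):.1%}")
```

Because the extra costs fall early and the savings late, discounting raises the breakeven rate relative to a simple undiscounted comparison, which is why the net present value treatment matters to the analysis.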
A competitive scenario where one of the contractors receives 70 percent of the annual production aircraft, while the other receives only 30 percent, reaches the breakeven point at 9 percent savings—1.3 percent less than we estimated before. A competitive scenario where both contractors receive 50 percent of the production aircraft reaches this point at 11 percent savings—again about 1.3 percent less than last year. We believe it is reasonable to assume at least this much savings in the long run based on analysis of actual data from the F-16 engine competition. Past Engine Programs Show Potential Financial Benefits from Competition Results from past competitions provide evidence of the financial and nonfinancial benefits that can be derived from engine programs. One relevant case study to consider is the “Great Engine War” of the 1980s—the competition between Pratt & Whitney and General Electric to supply military engines for the F-16 and other fighter aircraft programs. At that time, all engines for the F-14 and F-15 aircraft were being produced on a sole-source basis by Pratt & Whitney, which was criticized for increased procurement and maintenance costs, along with a general lack of responsiveness with regard to government concerns about those programs. Beginning in 1983, the Air Force initiated a competition that resulted in significant cost savings in the program. For example, in the first 4 years of the competition, when comparing actual costs to the program’s baseline estimate, results included nearly 30 percent cumulative savings for acquisition costs, roughly 16 percent cumulative savings for operations and support costs, and total savings of about 21 percent in overall life cycle costs. The Great Engine War was able to generate significant benefits because competition incentivized contractors to improve designs and reduce costs during production and sustainment.
Multiple Studies and Analyses Show Additional Benefits from Competition Competition for the JSF engines may also provide benefits that do not result in immediate financial savings, but could result in reduced costs or other positive outcomes over time. Our prior work, along with studies by DOD and others, indicates there are a number of nonfinancial benefits that may result from competition, including better performance, increased reliability, and improved contractor responsiveness. In addition, the long-term impacts of the JSF engine program on the global industrial base go far beyond the two competing contractors. Studies performed by DOD and others show widespread concurrence on these benefits. In fact, in 1998 and 2002, DOD program management advisory groups assessed the JSF alternate engine program and found the potential for significant benefits in these and other areas. Table 4 summarizes the benefits identified by those groups. While the benefits highlighted may be more difficult to quantify, they are no less important, and ultimately were strongly considered in an earlier recommendation to continue the alternate engine program. These studies concluded that the program would maintain the industrial base for fighter engine technology, enhance readiness, instill contractor incentives for better performance, ensure an operational alternative if the current engine developed problems, and enhance international participation. Another potential benefit of having an alternate engine program, and one also supported by the program advisory group studies, is to reduce the risk that a single-point, systemic failure in the engine design could substantially affect the fighter aircraft fleet. This point is underscored by recent failures in the Pratt & Whitney test program.
In August 2007, an engine running at a test facility experienced failures in the low pressure turbine blade and bearing, which resulted in a suspension of all engine test activity. In February 2008, during follow-on testing to confirm the root cause of these failures, a blade failure occurred in another engine, resulting in delays to both the Air Force and Marine Corps variant flight test programs. The JSF program continues to work toward identifying and correcting these problems. Though current performance data indicate it is unlikely that these or other engine problems would lead to fleetwide groundings in modern aircraft, having two engine sources for the single-engine JSF further reduces this risk because it is less likely that such a problem would affect both engine types at the same time. Concluding Observations DOD is challenged once again with weighing short-term needs against potential long-term payoffs within the JSF program, especially in terms of the test program and the approach for developing, procuring, and sustaining the engine. We and others believe that the JSF risk reduction plan is too risky—cutting test resources and flight tests will constrain the pace and fidelity of development testing—and additional costs and time will likely be needed to complete JSF development. Finding and fixing deficiencies during operational testing and after production has ramped up is costly and disruptive, and delays getting new capabilities to the warfighter. Further, without directly addressing the root causes of manufacturing delays and cost increases, the problems will persist and continue to drain development resources and impact the low-rate production that is just beginning. These actions may postpone events, but a major restructuring appears likely—we expect DOD will need more money and time to complete development and operational testing, which will delay the full-rate production decision.
Because the JSF is entering its most challenging phase—finalizing three designs, maturing manufacturing processes, conducting flight tests, and ramping up production in an affordable manner—decision making and oversight by Congress, top military leaders, and our allies are critical for successful outcomes. The size of the JSF acquisition, its impact on our tactical air forces and those of our allies, and the unreliability of the current estimate argue for an immediate new and independent cost estimate and uncertainty analysis, so that these leaders can have good information for effective decision making. Likewise, the way forward for the JSF engine acquisition strategy entails one of many critical choices facing DOD today, and underscores the importance of decisions facing the program. Such choices made today on the JSF program will have long-term impacts. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other members of the subcommittee may have. Contacts and Acknowledgments For further questions regarding this testimony, please contact Michael J. Sullivan, (202) 512-4841. Individuals making key contributions to this testimony include Marvin Bonner, Jerry Clark, Bruce Fairbairn, J. Kristopher Keener, Matt Lea, Brian Mullins, Daniel Novillo, and Charles Perdue. Appendix I: Scope and Methodology To conduct our mandated work on the JSF acquisition program, we tracked and compared current cost and schedule estimates with those of prior years, identified major changes, and determined their causes. We visited the prime contractor’s plant to view manufacturing processes and plans for low-rate production. We obtained earned value data, contractor workload statistics, performance indicators, and manufacturing results. We reviewed the Mid-Course Risk Reduction Plan and supporting documents, discussed its pros and cons with DOD officials, and evaluated potential impacts on flight plans and test verification criteria.
We reviewed the cost estimating methodologies, data, and assumptions used by the JSF joint program office to project development, procurement, and sustainment costs. We assessed the program office’s procedures and methodologies against GAO’s Cost Assessment Guide and best practices employed by federal and private organizations. We obtained cost estimates prepared by the Cost Analysis Improvement Group, Naval Air Systems Command, and Defense Contract Management Agency and discussed with those organizations’ cost analysts the methodologies and assumptions they used. We discussed plans, future challenges, and results to date with DOD and contractor officials. For our work on the alternate engine, we used the methodology detailed below, the same methodology used in support of our March 2007 statement. For this statement, we collected similar current information so the cost information could be updated. In conducting our analysis of costs for the Joint Strike Fighter (JSF) engine program, we relied primarily on program office data. We did not develop our own source data for development, production, or sustainment costs. In assessing the reliability of data from the program office, we compared that data to contractor data, spoke with agency and other officials, and determined that the data were sufficiently reliable for our review. Other base assumptions for the review are as follows: Unit recurring flyaway cost includes the costs associated with procuring one engine and certain nonrecurring production costs; it does not include sunk costs, such as development and test, and other costs to the whole system, including logistical support and construction. Engine procurement costs reflect only U.S. costs, but assume the quantity benefits of the 730 aircraft currently anticipated for foreign partner procurement. Competition, and the associated savings anticipated, begins in fiscal year 2012.
Engine maturity, defined as 200,000 flight hours with at least 50,000 hours in each variant, is reached in fiscal year 2012. Two years are needed for delivery of aircraft. Aircraft life equals 30 years at 300 flight hours per year. For the sole-source Pratt & Whitney F135 engine scenario, we calculated costs as follows: We relied on JSF program office data on the remaining cost of the Pratt & Whitney development contract. We considered all costs for development through fiscal year 2008 to be sunk costs and did not factor them into the analysis. For the cost of installed engine quantities, we multiplied planned JSF engine quantities for U.S. aircraft by unit recurring flyaway costs specific to each year as derived from cost targets and a learning curve developed by the JSF program office. For the cost of production support, we relied on JSF program office cost estimates for initial spares, training, support equipment, depot stand-up, and manpower related to propulsion. Because the JSF program office calculates those numbers to reflect two contractors, we applied a cost reduction factor in the areas of training and manpower to reflect the lower cost to support only one engine type. For sustainment costs, we multiplied the planned number of U.S. fielded aircraft by the estimated number of flight hours for each year to arrive at an annual fleet total. We then multiplied this total by the JSF program office’s estimated cost per engine flight hour specific to each aircraft variant. Sustainment costs do not include a calculation of the cost of engine reliability or technology improvement programs. For a competitive scenario between the Pratt & Whitney F135 engine and the Fighter Engine Team (General Electric and Rolls-Royce), we calculated costs as follows: We used current JSF program office estimates of remaining development costs for both contractors and considered all costs for development through fiscal year 2008 to be sunk costs.
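The sustainment arithmetic described above (fielded aircraft multiplied by annual flight hours, then by cost per engine flight hour, summed across variants) can be sketched as follows. The fleet counts and cost-per-hour figures here are hypothetical placeholders, not JSF program office estimates.

```python
# Sketch of the sustainment-cost calculation described in the methodology.
# All fleet sizes and cost-per-flight-hour values are hypothetical.

# Fielded U.S. aircraft by year for each variant (hypothetical counts).
fielded = {
    "CTOL": [20, 60, 120],    # Air Force conventional takeoff and landing
    "CV": [5, 15, 40],        # Navy carrier variant
    "STOVL": [10, 25, 60],    # Marine Corps short takeoff / vertical landing
}
flight_hours_per_aircraft = 300   # per year, per the stated assumption
cost_per_engine_flight_hour = {   # hypothetical dollars per hour by variant
    "CTOL": 4000, "CV": 4200, "STOVL": 5000,
}

def annual_sustainment(year_index):
    """Fleet flight hours times cost per engine flight hour, by variant."""
    total = 0.0
    for variant, counts in fielded.items():
        fleet_hours = counts[year_index] * flight_hours_per_aircraft
        total += fleet_hours * cost_per_engine_flight_hour[variant]
    return total

print(annual_sustainment(0))  # → 45300000.0
```

Summing this annual total over the fleet's life yields the sustainment line of the estimate; as the methodology notes, reliability and technology improvement programs are excluded.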
We used JSF program office data for engine buy profiles, learning curves, and unit recurring flyaway costs to arrive at a cost for installed engine quantities on U.S. aircraft. We performed calculations for competitive production quantities under 70/30 and 50/50 production quantity award scenarios. We used JSF program office cost estimates for production support under two contractors. We assumed no change in support costs based on specific numbers of aircraft awarded under competition, as each contractor would still need to support some number of installed engines and provide some number of initial spares. We used the same methodology and assumptions to perform the calculation for sustainment costs in a competition as in the sole-source scenario. We analyzed actual cost information from past aircraft propulsion programs, especially that of the F-16 aircraft engine, in order to derive the expected benefits of competition and determine a reasonable range of potential savings. We applied this range of savings to the engine life cycle, including recurring flyaway costs, production support, and sustainment. We assumed costs to the government could decrease in any or all of these areas as a result of competitive pressures. We did not apply any savings to the system development and demonstration phase or the first five production lots because they are not fully competitive. However, we recognize that some savings may accrue as contractors prepare for competition. 
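The learning-curve and award-split effects referenced above can be illustrated with the standard Wright learning-curve model, in which each doubling of cumulative quantity multiplies unit cost by a fixed slope. The first-unit cost and the 90 percent slope below are hypothetical assumptions, not the program office's actual curve.

```python
import math

# Illustrative Wright learning curve. Splitting a buy between two
# contractors leaves each lower on its own curve than a single
# sole-source producer would be, which raises total unit costs.
# First-unit cost and slope are hypothetical, not program office values.

def unit_cost(first_unit_cost, unit_number, slope=0.90):
    """Cost of the nth unit on a learning curve with the given slope."""
    b = math.log(slope, 2)  # negative learning exponent
    return first_unit_cost * unit_number ** b

def total_cost(first_unit_cost, quantity, slope=0.90):
    """Cumulative cost of producing the given quantity."""
    return sum(unit_cost(first_unit_cost, n, slope)
               for n in range(1, quantity + 1))

sole = total_cost(20.0, 200)                          # one contractor builds all
split = total_cost(20.0, 140) + total_cost(20.0, 60)  # 70/30 split of same buy
print(f"loss of learning penalty: {split / sole - 1:.1%}")
```

This loss-of-learning penalty is the effect cited above as the driver of the cost disparity between the 50/50 and 70/30 scenarios: the more even the split, the lower each contractor sits on its curve.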
In response to the request to present our cost analyses in constant dollars, then-year dollars, and using net present value, we: calculated all costs using constant fiscal year 2002 dollars; used separate JSF program office and Office of the Secretary of Defense inflation indices for development, production, production support, and sustainment to derive then-year dollars (when necessary for the out years, we extrapolated the growth of escalation factors linearly); and utilized accepted GAO methodologies for calculating discount rates in the net present value analysis. Our analysis of the industrial base does not independently verify the relative health of either contractor’s suppliers or workload. Related GAO Products Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks, GAO-08-388. Washington, D.C.: Mar. 11, 2008. Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy, GAO-07-415. Washington, D.C.: Apr. 2, 2007. Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program, GAO-07-656T. Washington, D.C.: Mar. 22, 2007. Joint Strike Fighter: Progress Made and Challenges Remain, GAO-07-360. Washington, D.C.: Mar. 15, 2007. Tactical Aircraft: DOD’s Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis, GAO-06-717R. Washington, D.C.: May 22, 2006. Recapitalization Goals Are Not Supported By Knowledge-Based F-22A and JSF Business Cases, GAO-06-487T. Washington, D.C.: Mar. 16, 2006. Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance, GAO-06-356. Washington, D.C.: Mar. 15, 2006. Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization, GAO-05-519T. Washington, D.C.: Apr. 6, 2005. Defense Acquisitions: Assessments of Selected Major Weapon Programs, GAO-05-301. Washington, D.C.: Mar. 31, 2005.
Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy, GAO-05-271. Washington, D.C.: Mar. 15, 2005. Tactical Aircraft: Status of F/A-22 and JSF Acquisition Programs and Implications for Tactical Aircraft Modernization, GAO-05-390T. Washington, D.C.: Mar. 3, 2005. Joint Strike Fighter Acquisition: Observations on the Supplier Base, GAO-04-554. Washington, D.C.: May 3, 2004. Joint Strike Fighter Acquisition: Managing Competing Pressures Is Critical to Achieving Program Goals, GAO-03-1012T. Washington, D.C.: July 21, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Joint Strike Fighter (JSF) is the Department of Defense's (DOD) most expensive aircraft acquisition program. DOD is expected to develop, procure, and maintain 2,443 aircraft at a cost of more than $950 billion. DOD plans for the JSF to replace or complement several types of aircraft in the Air Force, Navy, and Marine Corps. Given the program's cost and importance, it is critical that decisions are made within this program to maximize its benefit to the nation. This testimony highlights a number of those decisions and impacts. It (1) discusses emerging risks to the overall program, and (2) updates information for GAO's cost analysis of last year regarding sole-source and competitive scenarios for acquisition and sustainment of the JSF engine. Information on the overall program is from our mandated annual report, also issued today. GAO tracked annual cost and schedule changes, reasons for changes, decisions affecting development, and compared DOD cost estimating methodologies to best practices. For the two engines, GAO updated cost data from last year's testimony and made new projections. GAO believes recent DOD decisions, while potentially reducing near-term funding needs, could have long-term cost implications. DOD's recent plan to reduce test resources in order to pay for development cost overruns adds more risk to the overall JSF program. Midway through development, the program is over cost and behind schedule. Difficulties in stabilizing aircraft designs and the inefficient manufacturing of test aircraft have forced the program to spend management reserves much faster than anticipated. To replenish this reserve, DOD officials decided not to request additional funding and time for development at this time, but opted instead to reduce test resources. GAO believes this plan will hamper development testing while still not addressing the root causes of related cost increases. 
While DOD reports that total acquisition costs have increased by $55 billion since a major restructuring in 2004, GAO and others in DOD believe that the cost estimates are not reliable and that total costs will be much higher than currently advertised. Another restructuring appears likely--GAO expects DOD will need more money and time to complete development and operational testing, which will delay the full-rate production decision and the fielding of capabilities to the warfighter. This year, DOD is again proposing cancellation of the JSF alternate engine program. The current estimated remaining life cycle cost for the JSF engine program under a sole-source scenario is $54.9 billion. To ensure competition by continuing the JSF alternate engine program, an additional investment of about $3.5 billion to $4.5 billion may be required. However, potential advantages from a competitive strategy could result in savings equal to or exceeding that amount across the life cycle of the engine. GAO's updated cost analysis suggests that a savings of 9 to 11 percent--about 2 percent less than GAO estimated last year--would recoup that investment. Also, as we noted last year, prior experience indicates that it is reasonable to assume that competition on the JSF engine program could yield savings of at least that much. Further, nonfinancial benefits in terms of better engine performance and reliability, more responsive contractors, and improved industrial base stability are more likely outcomes under a competitive environment than under a sole-source strategy. While cancellation of the program provides needed funding in the near term, recent test failures for the primary JSF engine underscore the importance and long-term implications of DOD decision making with regard to the ultimate engine acquisition approach.
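The break-even relationship behind these figures is simple division of the added investment by the cost base on which competition could yield savings. The sketch below is an illustration only, not GAO's methodology: dividing by the full $54.9 billion remaining cost gives a lower bound of roughly 6 to 8 percent, and GAO's 9 to 11 percent figure implies that only part of that remaining cost is actually subject to competitive savings.

```python
# Illustrative break-even arithmetic for engine competition (not GAO's model).
# GAO figures: remaining sole-source life cycle cost of $54.9 billion and an
# added competition investment of roughly $3.5 billion to $4.5 billion.

SOLE_SOURCE_COST_B = 54.9  # remaining life cycle cost, $ billions

def breakeven_savings_pct(investment_b: float, base_b: float) -> float:
    """Percentage savings on a cost base needed to recoup an up-front investment."""
    return 100.0 * investment_b / base_b

low = breakeven_savings_pct(3.5, SOLE_SOURCE_COST_B)   # ~6.4%
high = breakeven_savings_pct(4.5, SOLE_SOURCE_COST_B)  # ~8.2%
print(f"Break-even savings: {low:.1f}% to {high:.1f}% of the full remaining cost")
```

Running the same division against GAO's 9 to 11 percent break-even rate suggests a competed base of roughly $39 billion to $41 billion, consistent with competition applying to only a portion of the engine program's remaining costs.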
DHS Has Reported Progress in Addressing Illegal Cross-Border Activity, but Could Improve Assessment of Its Efforts

Border Patrol Has Reported Some Success in Addressing Illegal Migration, but Challenges Remain in Assessing Efforts and Identifying Resource Needs

Since fiscal year 2011, DHS has used changes in the number of apprehensions on the southwest border between POEs as an interim measure for border security, as reported in its annual performance reports. As we reported in December 2012, our data analysis showed that apprehensions across the southwest border decreased 69 percent from fiscal years 2006 through 2011. These data generally mirrored a decrease in estimated known illegal entries in each southwest border sector. As we testified in February 2013, data reported by Border Patrol following the issuance of our December 2012 report showed that total apprehensions across the southwest border increased from over 327,000 in fiscal year 2011 to about 357,000 in fiscal year 2012. It is too early to assess whether this increase indicates a change in the trend for Border Patrol apprehensions across the southwest border. Through fiscal year 2011, Border Patrol attributed decreases in apprehensions across sectors in part to changes in the U.S. economy, achievement of strategic objectives, and increased resources for border security. In addition to collecting data on apprehensions, Border Patrol collects other types of data that are used by sector management to help inform assessment of its efforts to secure the border against the threats of illegal migration and smuggling of drugs and other contraband.
These data show changes, for example, in the (1) percentage of estimated known illegal entrants who are apprehended, (2) percentage of estimated known illegal entrants who are apprehended more than once (repeat offenders), (3) number of seizures of drugs and other contraband, and (4) number of apprehensions of persons from countries at an increased risk of sponsoring terrorism. Our analysis of these data shows that the percentage of estimated known illegal entrants apprehended from fiscal years 2006 through 2011 varied across southwest border sectors. The percentage of individuals apprehended who repeatedly crossed the border illegally declined by 6 percent from fiscal years 2008 through 2011. Further, the number of seizures of drugs and other contraband across the border increased from 10,321 in fiscal year 2006 to 18,898 in fiscal year 2011. Our analysis of the data also shows that apprehensions of persons from countries at an increased risk of sponsoring terrorism—referred to as Aliens from Special Interest Countries—increased each fiscal year from 239 in fiscal year 2006 to 399 in fiscal year 2010, but dropped to 253 in fiscal year 2011. As we reported in December 2012, Border Patrol sectors and stations track changes in their overall effectiveness as a tool to determine if the appropriate mix and placement of personnel and assets are being deployed and used effectively and efficiently, according to officials from Border Patrol headquarters. Border Patrol data showed that the effectiveness rate for eight of the nine sectors on the southwest border improved from fiscal year 2006 through 2011. Border Patrol headquarters officials said that differences in how sectors define, collect, and report turn back data (entrants who illegally crossed the border but were not apprehended because they crossed back into Mexico) and got away data (entrants who illegally crossed the border and continued traveling into the U.S.
interior) used to calculate the overall effectiveness rate preclude comparing performance results across sectors. Border Patrol headquarters officials stated that until recently, each Border Patrol sector decided how it would collect and report turn back and got away data, and as a result, practices for collecting and reporting the data varied across sectors and stations based on differences in agent experience and judgment, resources, and terrain. Border Patrol headquarters officials issued guidance in September 2012 to provide a more consistent, standardized approach for the collection and reporting of turn back and got away data by Border Patrol sectors. Each sector is to be individually responsible for monitoring adherence to the guidance. According to Border Patrol officials, it is expected that this guidance will help improve data reliability. Implementation of this new guidance may allow for comparison of sector performance and inform decisions regarding resource deployment for securing the southwest border. Border Patrol is in the process of developing performance goals and measures for assessing the progress of its efforts to secure the border between POEs and for informing the identification and allocation of resources needed to secure the border, but has not yet identified milestones and time frames for developing and implementing them. Since fiscal year 2011, DHS has used the number of apprehensions on the southwest border between POEs as an interim performance goal and measure for border security as reported in its annual performance report. Prior to this, DHS used operational control as its goal and outcome measure for border security and to assess resource needs to accomplish this goal. Operational control—also referred to as effective control—was defined as the number of border miles where Border Patrol had the capability to detect, respond to, and interdict cross-border illegal activity. 
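The overall effectiveness rate discussed above is derived from apprehension, turn back, and got away counts. The sketch below follows the commonly described form of the calculation, in which apprehensions and turn backs count as effective outcomes relative to all estimated known illegal entries; it is a hedged illustration, since (as noted) the rules sectors used for collecting these counts varied before the September 2012 guidance.

```python
def effectiveness_rate(apprehensions: int, turn_backs: int, got_aways: int) -> float:
    """Share of estimated known illegal entries that did not evade into the interior.

    Apprehensions and turn backs (entrants who crossed back into Mexico) count
    as effective outcomes; got aways continued into the U.S. interior.
    """
    total_known_entries = apprehensions + turn_backs + got_aways
    if total_known_entries == 0:
        return 0.0
    return 100.0 * (apprehensions + turn_backs) / total_known_entries

# Hypothetical sector-level counts, for illustration only.
print(f"{effectiveness_rate(9_000, 500, 1_500):.1f}%")  # prints 86.4%
```

Because the denominator depends on turn back and got away counts, any sector-to-sector differences in how those counts are collected flow directly into the rate, which is why the standardized guidance matters for cross-sector comparison.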
DHS last reported its progress and status in achieving operational control of the borders in fiscal year 2010. At that time, DHS reported achieving operational control for 1,107 (13 percent) of 8,607 miles across U.S. northern, southwest, and coastal borders. Along the southwest border, DHS reported achieving operational control for 873 (44 percent) of the approximately 2,000 border miles. At the beginning of fiscal year 2011, DHS transitioned from using operational control as its goal and outcome measure for border security. We testified in February 2013 that the interim goal and measure of the number of apprehensions on the southwest border between POEs provides information on activity levels but does not inform program results or resource identification and allocation decisions, and therefore until new goals and measures are developed, DHS and Congress could experience reduced oversight and DHS accountability. Further, studies commissioned by CBP have found that the number of apprehensions bears little relationship to effectiveness because agency officials do not compare these numbers with the amount of cross-border illegal activity. Border Patrol officials stated that the agency is in the process of developing performance goals and measures, but has not identified milestones and time frames for developing and implementing them. According to Border Patrol officials, establishing milestones and time frames for the development of performance goals and measures is contingent on the development of key elements of its new strategic plan, such as a risk assessment tool, and the agency’s time frames for implementing these key elements—targeted for fiscal years 2013 and 2014—are subject to change.
We recommended that CBP establish milestones and time frames for developing a performance goal, or goals, for border security between POEs that defines how border security is to be measured, and a performance measure, or measures, for assessing progress made in securing the border between POEs and informing resource identification and allocation efforts. DHS concurred with our recommendations and stated that it plans to set a date for when it will establish such milestones and time frames by November 2013.

CBP Has Strengthened POE Inspection Programs and Officer Training, and Has Additional Actions Planned or Under Way

As part of its homeland security and legacy customs missions, CBP inspects travelers arriving at POEs to counter threats posed by terrorists and others attempting to enter the country with fraudulent or altered travel documents and to prevent inadmissible aliens, criminals, and goods from entering the country. In fiscal year 2012, CBP inspected about 352 million travelers and over 107 million cars, trucks, buses, trains, vessels, and aircraft at over 329 air, sea, and land POEs. We have previously identified vulnerabilities in the traveler inspection program and made recommendations to DHS for addressing these vulnerabilities, and DHS implemented these recommendations. For example, we reported in January 2008 on weaknesses in CBP’s inbound traveler inspection program, including challenges in attaining budgeted staffing levels because of attrition and lack of officer compliance with screening procedures, such as those used to determine citizenship and admissibility of travelers entering the country as required by law and CBP policy. Factors that contributed to these challenges included lack of focus, complacency, lack of supervisory presence, and lack of training.
We recommended that CBP enhance internal controls in the inspection process, implement performance measures for apprehending inadmissible aliens and other violators, and establish measures for training provided to CBP officers and new officer proficiency. DHS concurred with these recommendations and has implemented them. Specifically, in January 2008, CBP reported, among other things, that all land port directors are required to monitor and assess compliance with eight different inspection activities using a self-inspection worksheet that is provided to senior CBP management. At that time, CBP also established performance measures related to the effectiveness of CBP interdiction efforts. Additionally, in June 2011, CBP began conducting additional classroom and on-the-job training, which incorporated ongoing testing and evaluation of officer proficiency. In December 2011, we reported that CBP had revised its training program for newly hired CBP officers in accordance with its own training development standards. Consistent with these standards, CBP convened a team of subject-matter experts to identify and rank the tasks that new CBP officers are expected to perform. As a result, the new curriculum was designed to produce professional law enforcement officers capable of protecting the homeland from terrorist, criminal, biological, and agricultural threats. We also reported that CBP took some steps to identify and address the training needs of its incumbent CBP officers but could do more to ensure that these officers were fully trained. For example, we examined CBP’s results of covert tests of document fraud detection at POEs conducted over more than 2 years and found weaknesses in the CBP inspection process at the POEs that were tested. In response to these tests, CBP developed a “Back to Basics” course in March 2010 for incumbent officers, but had no plans to evaluate the effectiveness of the training. 
We also reported that CBP had not conducted an analysis of all the possible causes or systemic issues that may have contributed to the covert test results. We recommended in December 2011 that CBP analyze covert tests and evaluate the “Back to Basics” training course, and DHS concurred with these recommendations. In April 2012, CBP officials reported that they had completed an evaluation of the “Back to Basics” training course and implemented an updated, subsequent training course. Further, in November 2012, CBP officials stated that they had analyzed the results of covert tests prior to and since the implementation of the subsequent course. According to these officials, they obtained the results of covert tests conducted before and after the course was implemented to determine to what extent significant performance gains were achieved and to identify any additional requirements for training. In April 2013, CBP provided a copy of its analysis of the covert test results. GAO is reviewing CBP’s analysis of the covert test results and other documentation as part of a congressional mandate to review actions the agency has taken to address GAO recommendations regarding CBP officer training. We expect to report on the status of CBP’s efforts in the late summer of 2013. Further, in July 2012, CBP completed a comprehensive analysis of the results of its document fraud covert tests from fiscal years 2009 through 2011. In addition, we reported that CBP had not conducted a needs assessment that would identify any gaps between identified critical skills and incumbent officers’ current skills and competencies. We recommended in December 2011 that CBP conduct a training needs assessment. DHS concurred with this recommendation. 
In April 2013, CBP reported to us that it is working to complete a training needs assessment, but has faced challenges in completing such an assessment because of personnel and budget issues, including retirements, attrition, loss of contract support, sequestration, and continuing resolutions. CBP plans to develop a final report on a training needs assessment by August 2013 outlining findings, conclusions, and recommendations from its analysis.

DHS Law Enforcement Partners Reported Improved Results for Interagency Coordination, but Challenges Remain

DOI and USDA Reported Improved DHS Coordination to Secure Federal Borderlands, but Gaps Remained in Sharing Information for Daily Operations

Illegal cross-border activity remains a significant threat to federal lands protected by DOI and USDA law enforcement personnel on the southwest and northern borders and can cause damage to natural, historic, and cultural resources, as well as put agency personnel and the visiting public at risk. We reported in November 2010 that information sharing and communication among DHS, DOI, and USDA law enforcement officials had increased in recent years. For example, interagency forums were used to exchange information about border issues, and interagency liaisons facilitated exchange of operational statistics. Federal agencies also established interagency agreements to strengthen coordination of border security efforts. However, we reported in November 2010 that gaps remained in implementing interagency agreements to ensure law enforcement officials had access to daily threat information to better ensure officer safety and an efficient law enforcement response to illegal activity.
For example, Border Patrol officials in the Tucson sector did not consult with federal land management agencies before discontinuing dissemination of daily situation reports that federal land law enforcement officials relied on for a common awareness of the types and locations of illegal activities observed on federal borderlands. Further, in Border Patrol’s Spokane sector, on the northern border, coordination of intelligence information was particularly important because of sparse law enforcement presence and technical challenges that reduced Border Patrol’s ability to fully assess cross-border threats, such as air smuggling of high-potency marijuana. We recommended that DHS, DOI, and USDA provide oversight and accountability as needed to further implement interagency agreements for coordinating information and integrating operations. These agencies agreed with our recommendations, and in January 2011, CBP issued a memorandum to all Border Patrol division chiefs and chief patrol agents emphasizing the importance of USDA and DOI partnerships to address border security threats on federal lands. While this is a positive step, to fully satisfy the intent of our recommendation, DHS would need to take further action to monitor and uphold implementation of the existing interagency agreements to enhance border security on federal lands.

Northern Border Partners Reported Interagency Forums Improved Coordination, but DHS Did Not Provide Oversight to Resolve Interagency Conflict in Roles and Responsibilities

DHS has stated that partnerships with other federal, state, local, tribal, and Canadian law enforcement agencies are critical to the success of northern border security efforts. We reported in December 2010 that DHS efforts to coordinate with these partners through interagency forums and joint operations were considered successful, according to a majority of these partners we interviewed.
In addition, DHS component officials reported that federal agency coordination to secure the northern border had improved. However, DHS did not provide oversight for the number and location of forums established by its components, and numerous federal, state, local, and Canadian partners cited challenges related to the inability to provide resources for the increasing number of forums, raising concerns that some efforts may be overlapping. In addition, federal law enforcement partners in all four locations we visited as part of our work cited ongoing challenges between Border Patrol and ICE, Border Patrol and the Forest Service, and ICE and DOJ’s Drug Enforcement Administration in sharing information and resources, which compromised daily border security-related operations and investigations. DHS had established and updated interagency agreements to address ongoing coordination challenges; however, oversight by management at the component and local levels has not ensured consistent compliance with provisions of these agreements. We also reported in December 2010 that while Border Patrol’s border security measures reflected a high reliance on law enforcement support from outside the border zones, the extent of partner law enforcement resources that could be leveraged to fill Border Patrol resource gaps, target coordination efforts, and make more efficient resource decisions was not reflected in Border Patrol’s processes for assessing border security and resource requirements. We recommended that DHS provide guidance and oversight for interagency forums and for component compliance with interagency agreements, and develop policy and guidance necessary to integrate partner resources in border security assessments and resource planning documents. DHS agreed with our recommendations and has reported taking action to address one of them.
For example, in June 2012, DHS released a northern border strategy, and in August 2012, DHS notified us of other cross-border law enforcement and security efforts taking place with Canada. However, to fully satisfy the intent of our recommendation, CBP would need to develop policy and guidance specifying how partner resources will be identified, assessed, and integrated in DHS plans for implementing the northern border strategy. To address the remaining recommendations, DHS would need to establish an oversight process for interagency forums to ensure that missions and locations of interagency forums are not duplicative and consider the downstream burden on northern border partners, as well as an oversight process that evaluates the challenges and corrective actions needed to ensure Border Patrol and ICE compliance with interagency memorandums.

Opportunities Exist to Improve DHS’s Management of Border Security Assets

DHS Has Deployed Assets to Secure the Borders, but Has Not Provided Complete Information on Plans, Metrics, and Costs

In November 2005, DHS launched the Secure Border Initiative (SBI), a multiyear, multibillion-dollar program aimed at securing U.S. borders and reducing illegal immigration. Through this initiative, DHS planned to develop a comprehensive border protection system using technology, known as the Secure Border Initiative Network (SBInet), and tactical infrastructure—fencing, roads, and lighting. Under this program, CBP increased the number of southwest border miles with pedestrian and vehicle fencing from 120 miles in fiscal year 2005 to about 650 miles as of March 2013. We reported in May 2010 that CBP had not accounted for the impact of its investment in border fencing and infrastructure on border security. Specifically, CBP had reported an increase in control of southwest border miles, but could not account separately for the impact of the border fencing and other infrastructure.
In September 2009, we recommended that CBP determine the contribution of border fencing and other infrastructure to border security. DHS concurred with our recommendation and, in response, CBP contracted with the Homeland Security Studies and Analysis Institute to conduct an analysis of the impact of tactical infrastructure on border security. CBP reported in February 2012 that preliminary results from this analysis indicate that an additional 3 to 5 years are needed to ensure a credible assessment. Since the launch of SBI in 2005, we have identified a range of challenges related to schedule delays and performance problems with SBInet. SBInet was conceived as a surveillance technology to create a “virtual fence” along the border, and after spending nearly $1 billion, DHS deployed SBInet systems along 53 miles of Arizona’s border that represent the highest risk for illegal entry. In January 2011, in response to concerns regarding SBInet’s performance, cost, and schedule, DHS canceled future procurements. CBP developed the Arizona Border Surveillance Technology Plan (the Plan) for the remainder of the Arizona border. In November 2011, we reported that CBP does not have the information needed to fully support and implement its Plan in accordance with DHS and Office of Management and Budget (OMB) guidance. In developing the Plan, CBP conducted an analysis of alternatives and outreach to potential vendors. However, CBP did not document the analysis justifying the specific types, quantities, and deployment locations of border surveillance technologies proposed in the Plan. Specifically, according to CBP officials, CBP used a two-step process to develop the Plan. First, CBP engaged the Homeland Security Studies and Analysis Institute to conduct an analysis of alternatives beginning with ones for Arizona. 
Second, following the completion of the analysis of alternatives, the Border Patrol conducted its operational assessment, which included a comparison of alternative border surveillance technologies and an analysis of operational judgments to consider both effectiveness and cost. While the first step in CBP’s process to develop the Plan—the analysis of alternatives—was well documented, the second step—Border Patrol’s operational assessment—was not transparent because of the lack of documentation. As we reported in November 2011, without documentation of the analysis justifying the specific types, quantities, and deployment locations of border surveillance technologies proposed in the Plan, an independent party cannot verify the process followed, identify how the analysis of alternatives was used, assess the validity of the decisions made, or justify the funding requested. We also reported that CBP officials have not yet defined the mission benefits expected from implementing the new Plan, which could help improve CBP’s ability to assess the effectiveness of the Plan as it is implemented. In addition, we reported that CBP’s 10-year life cycle cost estimate for the Plan of $1.5 billion was based on an approximate order-of-magnitude analysis, and agency officials were unable to determine a level of confidence in their estimate, as best practices suggest. Specifically, we found that the estimate reflected substantial features of best practices, being both comprehensive and accurate, but it did not sufficiently meet other characteristics of a high-quality cost estimate, such as credibility, because it did not identify a level of confidence or quantify the impact of risks. GAO and OMB guidance emphasize that reliable cost estimates are important for program approval and continued receipt of annual funding. 
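Quantifying the "level of confidence" that these best practices call for is commonly done with a Monte Carlo risk analysis of the estimate's cost elements. The Python sketch below is purely illustrative: the three cost elements and their low/most-likely/high ranges are hypothetical (not CBP's figures), and it only shows how such a simulation yields total-cost values at chosen confidence levels.

```python
import random

def simulate_cost(n_trials: int = 100_000, seed: int = 1) -> list:
    """Monte Carlo sketch: sum triangular cost elements (low, mode, high in $B)."""
    rng = random.Random(seed)
    elements = [            # hypothetical cost elements, $ billions
        (0.5, 0.7, 1.2),    # fixed towers
        (0.2, 0.3, 0.6),    # remote video surveillance
        (0.2, 0.4, 0.9),    # integration and support
    ]
    # random.triangular takes (low, high, mode)
    return sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in elements)
        for _ in range(n_trials)
    )

totals = simulate_cost()
p50 = totals[len(totals) // 2]        # median outcome
p80 = totals[int(len(totals) * 0.8)]  # 80 percent of outcomes fall below this
print(f"50% confidence: ${p50:.2f}B, 80% confidence: ${p80:.2f}B")
```

Budgeting at the 80 percent confidence level funds the amount that 80 percent of simulated outcomes fall under; the gap between that value and the point estimate is one way to size the contingency funding discussed above.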
In addition, because CBP was unable to determine a level of confidence in its estimate, we reported that it would be difficult for CBP to determine what levels of contingency funding may be needed to cover risks associated with implementing new technologies along the remaining Arizona border. We recommended in November 2011 that, among other things, CBP document the analysis justifying the technologies proposed in the Plan, determine its mission benefits, and determine a more robust life cycle cost estimate for the Plan. DHS concurred with these recommendations, and has reported taking action to address some of the recommendations. For example, in October 2012, CBP officials reported that, through the operation of two surveillance systems under SBInet’s initial deployment in high-priority regions of the Arizona border, CBP has identified examples of mission benefits that could result from implementing technologies under the Plan. Additionally, CBP initiated action to update its cost estimate for the Plan by, among other things, providing revised cost estimates in February and March 2012 for the Integrated Fixed Towers and Remote Video Surveillance System, the Plan’s two largest projects. We currently have ongoing work in this area for congressional requesters and, among other things, are examining DHS’s efforts to address prior recommendations, and expect to issue a report with our final results in the fall of 2013. In March 2012, we reported that the CBP Office of Air and Marine (OAM)—which provides aircraft, vessels, and crew at the request of its customers, primarily Border Patrol—had not documented significant events, such as its analyses to support its asset mix and placement across locations, and as a result, lacked a record to help demonstrate that its decisions to allocate resources were the most effective ones in fulfilling customer needs and addressing threats. 
OAM issued various plans that included strategic goals, mission responsibilities, and threat information. However, we could not identify the underlying analyses used to link these factors to the mix and placement of resources across locations. OAM did not have documentation that clearly linked the deployment decisions in the plan to mission needs or threats. For example, while the southwest border was Border Patrol’s highest priority for resources in fiscal year 2010, it did not receive a higher rate of air support than the northern border. Similarly, OAM did not document analyses supporting the current mix and placement of marine assets across locations. OAM officials said at the time that while they generally documented final decisions affecting the mix and placement of resources, they did not have the resources to document assessments and analyses to support these decisions. However, we reported that such documentation of significant events could help the office improve the transparency of its resource allocation decisions to help demonstrate the effectiveness of these resource decisions in fulfilling its mission needs and addressing threats. We recommended in March 2012 that CBP document analyses, including mission requirements and threats, that support decisions on the mix and placement of OAM’s air and marine resources. DHS concurred with our recommendation and stated that it plans to provide additional documentation of its analyses supporting decisions on the mix and placement of air and marine resources by 2014. Chairman Chaffetz, Ranking Member Tierney, and members of the subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have.

GAO Contact and Staff Acknowledgments

For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or [email protected].
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Lacinda Ayers, Kathryn Bernet, and Jeanette Espinola (Assistant Directors), as well as Jennifer Bryant, Frances Cook, Joseph Dewechter, Alana Finley, Barbara Guffy, and Ashley D. Vaughan.

Related GAO Products

Border Security: DHS’s Progress and Challenges in Securing U.S. Borders. GAO-13-414T. Washington, D.C.: March 14, 2013.
Border Patrol: Goals and Measures Not Yet in Place to Inform Border Security Status and Resource Needs. GAO-13-330T. Washington, D.C.: February 26, 2013.
Border Patrol: Key Elements of New Strategic Plan Not Yet in Place to Inform Border Security Status and Resource Needs. GAO-13-25. Washington, D.C.: December 10, 2012.
Border Patrol Strategy: Progress and Challenges in Implementation and Assessment Efforts. GAO-12-688T. Washington, D.C.: May 8, 2012.
Border Security: Opportunities Exist to Ensure More Effective Use of DHS’s Air and Marine Assets. GAO-12-518. Washington, D.C.: March 30, 2012.
Border Security: Additional Steps Needed to Ensure Officers Are Fully Trained. GAO-12-269. Washington, D.C.: December 22, 2011.
Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011.
Border Security: Preliminary Observations on Border Control Measures for the Southwest Border. GAO-11-374T. Washington, D.C.: February 15, 2011.
Border Security: Enhanced DHS Oversight and Assessment of Interagency Coordination Is Needed for the Northern Border. GAO-11-97. Washington, D.C.: December 17, 2010.
Border Security: Additional Actions Needed to Better Ensure a Coordinated Federal Response to Illegal Activity on Federal Lands. GAO-11-177. Washington, D.C.: November 18, 2010.
Secure Border Initiative: DHS Has Faced Challenges Deploying Technology and Fencing Along the Southwest Border. GAO-10-651T. Washington, D.C.: May 4, 2010.
Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-896. Washington, D.C.: September 9, 2009.
Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-329T. Washington, D.C.: January 3, 2008.
At the end of fiscal year 2004, DHS had about 28,100 personnel assigned to patrol U.S. land borders and inspect travelers at air, land, and sea POEs, with a total security cost of about $5.9 billion. At the end of fiscal year 2011, DHS had about 41,400 personnel assigned to air, land, and sea POEs and along the borders, with a total security cost of about $11.8 billion. DHS has reported that these resources have contributed to stronger enforcement efforts on the border. However, challenges remain in securing the border. In recent years, GAO has reported on a variety of DHS border security programs and operations. As requested, this statement addresses some of the key issues and recommendations GAO has made in the following areas: (1) DHS's efforts to secure the border at and between POEs; (2) DHS interagency coordination and oversight of border security information sharing and enforcement efforts; and (3) DHS management of infrastructure, technology, and other assets used to secure the border. This statement is based on prior products GAO issued from January 2008 through March 2013, along with selected updates conducted in April 2013. For the selected updates, GAO reviewed DHS information on actions it has taken to address prior GAO recommendations. U.S. Customs and Border Protection (CBP), part of DHS, has reported progress in stemming illegal cross-border activity, but it could strengthen the assessment of its efforts. For example, since fiscal year 2011, DHS has used the number of apprehensions on the southwest border between ports of entry (POE) as an interim measure for border security. GAO reported in December 2012 that apprehensions decreased across the southwest border from fiscal years 2006 through 2011, generally mirroring a decrease in estimated known illegal entries in each southwest border sector. CBP attributed this decrease in part to changes in the U.S. economy and increased resources for border security.
Data reported by CBP's Office of Border Patrol (Border Patrol) show that total apprehensions across the southwest border increased from over 327,000 in fiscal year 2011 to about 357,000 in fiscal year 2012. It is too early to assess whether this increase indicates a change in the trend. GAO testified in February 2013 that the number of apprehensions provides information on activity levels but does not inform program results or resource allocation decisions. Border Patrol is in the process of developing performance goals and measures for assessing the progress of its efforts to secure the border between POEs, but it has not identified milestones and time frames for developing and implementing them, as GAO recommended. DHS concurred with GAO's recommendations and said that it plans to set a date for establishing such milestones and time frames by November 2013. According to DHS law enforcement partners, interagency coordination and information sharing improved, but challenges remain. GAO reported in November 2010 that information sharing and communication among federal law enforcement officials responsible for federal borderlands had increased; however, gaps remained in ensuring law enforcement officials had access to daily threat information. GAO recommended that relevant federal agencies ensure interagency agreements for coordinating information and integrating border security operations are further implemented. These agencies agreed, and in January 2011, CBP issued a memorandum affirming the importance of federal partnerships to address border security threats on federal lands. While this is a positive step, to fully satisfy the intent of GAO's recommendation, DHS needs to take further action to monitor and uphold implementation of the existing interagency agreements. Opportunities exist to improve DHS's management of border security assets. 
For example, DHS conceived of the Secure Border Initiative Network as a surveillance technology system and deployed it along 53 miles of Arizona's border. In January 2011, in response to performance, cost, and schedule concerns, DHS canceled future procurements and developed the Arizona Border Surveillance Technology Plan (the Plan) for the remainder of the Arizona border. GAO reported in November 2011 that in developing the Plan, CBP conducted an analysis of alternatives but had not documented the analysis justifying the specific types, quantities, and deployment locations of the technologies proposed in the Plan, which GAO recommended that it do. DHS concurred with this recommendation. GAO has ongoing work in this area and, among other things, is examining DHS's efforts to address prior recommendations; GAO expects to issue a report in fall 2013.
Background

The National Defense Authorization Act for Fiscal Year 2004 provided DOD with authority to establish (1) a pay and performance management system, (2) an appeals process, and (3) a labor relations system—which together comprise NSPS. The legislation permits significant flexibility for designing NSPS, allowing for a new framework of rules, regulations, and processes to govern how defense civilian employees are hired, compensated, promoted, and disciplined. The law granted DOD certain exemptions from laws governing federal civilian personnel management found in Title 5 of the U.S. Code. The Congress provided these flexibilities in response to DOD’s position that the inflexibility of federal personnel systems was one of the most important constraints on the department’s ability to attract, retain, reward, and develop a civilian workforce to meet the national security mission of the 21st century.

Initial NSPS Design Process

The initial proposals for NSPS were developed by DOD and were based on a 2002 compilation of best practices generated by demonstration projects that experimented with different personnel management concepts. After these proposals were sent to OPM for review, OPM identified a broad range of legal, policy, and technical concerns, and it also noted that the labor-management relations proposal was developed without any prior OPM involvement or union input. OPM also indicated that the initial proposals had been crafted with only token employee involvement, and it noted a high level of concern expressed by congressional oversight committees, stakeholders, and constituent groups. In addition to OPM, the assistant secretaries for the military services’ manpower organizations expressed concerns that NSPS as designed would not work. Subsequently, the Secretary of Defense established a 3-week reassessment of system requirements, process issues, personnel and communication strategies, and program schedules and milestones.
The Overarching Integrated Product Team (OIPT), an advisory group co-chaired by the Principal Deputy Under Secretary of Defense for Personnel and Readiness and OPM and including the military services’ assistant secretaries for manpower and reserve affairs, oversaw this reassessment.

Employees Covered by NSPS

NSPS labor relations provisions will be implemented across the entire department once final NSPS regulations are issued and effective, and they will apply to all DOD employees currently covered by the labor relations provisions of Title 5, U.S. Code, Chapter 71. In contrast, NSPS regulations governing the new pay and performance management system and appeals process will be phased in and will not apply to some employees, as stipulated by law (e.g., intelligence personnel and employees in DOD’s laboratory demonstration organizations). The authorizing legislation stipulates that these latter regulations may not apply to more than 300,000 employees until the Secretary of Defense determines and certifies that the department has a performance management system in place that meets the statutory criteria established for NSPS. The first phase of implementation—Spiral One—will provide the basis for this certification prior to the deployment of Spiral Two. Spiral One includes approximately 300,000 General Schedule defense civilian employees, who will be converted to the new system over a period of 18 months. DOD currently plans to initiate Spiral One in early fiscal year 2006. Spiral Two will include the remainder of DOD’s eligible workforce, including wage-grade employees. Spiral Three will apply to demonstration laboratory employees no earlier than October 1, 2008, and then only to the extent the Secretary of Defense determines that NSPS provides greater personnel management flexibilities to the laboratories than those currently implemented.
DOD’s Employee Unions

According to DOD, almost two-thirds of its more than 700,000 civilian employees are represented by 43 labor unions, including over 1,500 separate bargaining units. Table 1 in appendix II lists current DOD labor unions, the estimated number of employees represented by each union, and which unions belong to the United Defense Workers Coalition. According to a DOD official, since 2000, defense civilian employee membership in DOD’s labor unions has remained about the same; however, the number of unions has dropped from about 60 unions to the current 43 unions, primarily the result of mergers and consolidation among the unions.

Practices and Implementation Steps for Mergers and Transformations

In our prior work, we identified key practices and lessons learned from major public and private sector organizational mergers, acquisitions, and transformations. This work was undertaken to help federal agencies implement successful cultural transformations in response to governance challenges. While no two mergers or transformation efforts are exactly alike and the "best" approach depends on a variety of factors specific to each context, there was general agreement on a number of key practices, which are as follows:

1. Ensure top leadership drives the transformation. Leadership must set the direction, pace, and tone and provide a clear, consistent rationale that brings everyone together behind a single mission.

2. Focus on a key set of principles and priorities at the outset of the transformation. A clear set of principles and priorities serves as a framework to help the organization create a new culture and drive employee behaviors.

3. Set implementation goals and a timeline to build momentum and show progress from day one. Goals and a timeline are essential because the transformation could take years to complete.

4. Dedicate an implementation team to manage the transformation process. A strong and stable team is important to ensure that the transformation receives the needed attention to be sustained and successful.

5. Establish a communication strategy to create shared expectations and report related progress. The strategy must reach out to employees, customers, and stakeholders and engage them in a two-way exchange.

6. Involve employees to obtain their ideas and gain their ownership for the transformation. Employee involvement strengthens the process and allows them to share their experiences and shape policies.

NSPS Design Process Evolved Into a Phased Approach

DOD’s current process to design NSPS is divided into four stages: (1) development of options for the personnel system, (2) assessment of the options and translation into recommended proposals, (3) issuance of proposed regulations, and (4) a statutory public comment period, a meet and confer period with employee representatives, and a congressional notification period. As discussed earlier, DOD’s initial process to design NSPS was unrealistic and inappropriate. However, after a 3-week reassessment, DOD adjusted its approach and attempted to create a more cautious and deliberate process that would involve all of the key stakeholders, including OPM. At this time, DOD adopted a management framework to guide the design of NSPS based on DOD’s acquisition management model and adopted an analytical framework to identify system requirements as well as a phased approach to implementing the new system, also based on the acquisition management model. Figure 1 presents the four stages in DOD’s current process in terms of the key organizational elements, inputs, and outputs.
In the first stage, the NSPS PEO convened six multidisciplinary design teams—called working groups—that were functionally aligned to cover the following personnel program areas: (1) compensation (classification and pay banding); (2) performance management; (3) hiring, assignment, pay setting, and workforce shaping; (4) employee engagement; (5) adverse action and appeals; and (6) labor relations. The working groups were co-chaired by DOD and OPM, and they were largely staffed from the defense components. The working groups reviewed and analyzed data from alternative federal personnel systems and laboratory and acquisition demonstration projects, research materials from the Department of Homeland Security’s personnel system design process, and private industry practices. According to DOD, the working groups also received input and participation from DOD human resources practitioners, attorneys, financial management experts, and equal employment opportunity specialists. The working groups also reviewed input gathered from DOD employees and employee representatives. The PEO was responsible for conducting outreach to employees and employee representatives, in conjunction with NSPS program managers in the DOD components; their efforts included 106 focus groups, more than 50 town hall meetings worldwide, and 10 meetings with DOD employee representatives. The working groups provided a broad range of options to the OIPT in September and October 2004; they did not prioritize the design options. In the second stage of the design process, the OIPT assessed the design options and then submitted them to the NSPS Senior Executive in November 2004. The Senior Executive—appointed by the Secretary of Defense to design and implement NSPS on his behalf—reviewed and approved the design options and presented them as proposed enabling regulations for submission to the Secretary of Defense and the Director of OPM for a decision.
Throughout this period, the OIPT, PEO, and working group members continued to participate, both in drafting and in reviewing the proposed regulations. In the third stage, the Secretary of Defense and the Director of OPM reviewed the proposals submitted by the NSPS Senior Executive. After finalizing the proposed regulations, the Secretary and Director jointly released them for public comment in the Federal Register on February 14, 2005. In the fourth stage, the NSPS proposed regulations were subjected to a statutory 30-day public comment period, after which DOD held a 30-day meet and confer period (which began on April 18, 2005) with employee representatives to discuss their views; the meetings were facilitated by the Federal Mediation and Conciliation Service. As allowed by statute, DOD extended the meet and confer process. Lastly, DOD is to engage in a 30-day congressional notification period. As called for in the authorizing legislation, the proposed regulations are subject to change based on consideration of formal comments received during the 30-day public comment period and the results of the 30-day meet and confer process with employee representatives. As provided for in the authorizing legislation, DOD can immediately implement those parts of the regulations on which it has reached agreement with employee representatives. DOD can implement those parts of the proposed regulations not agreed to only after 30 calendar days have elapsed following (1) notification to the Congress of the decision to proceed with implementation and (2) an explanation of why implementation is appropriate.

DOD’s NSPS Design Process Generally Reflects Practices of Successful Transformations, but Some Key Practices Are Lacking

DOD’s NSPS design process generally reflects four of the six key practices we identified that have consistently been found at the center of successful transformations. The design process generally reflects the following four practices.
First, DOD and OPM have developed a process to design the new personnel system that is supported by top leadership in both organizations. Second, from the outset, a set of guiding principles has steered the NSPS design process. Third, DOD has a dedicated team in place to design and implement NSPS and manage the transformation process, including program managers from the DOD components. Fourth, DOD has established a timeline, albeit an ambitious one, and implementation goals for implementing its new personnel system. The design process, however, does not fully reflect two other key practices. First, DOD developed and implemented a written communication strategy document, but it is not comprehensive. Second, while the NSPS design process has involved employees through town hall meetings and other mechanisms, it has not included employee representatives on the working groups that drafted the design options for the new system.

Top DOD and OPM Leadership Drives Human Capital Transformation

DOD and OPM have developed a process to design DOD’s new human capital resources management system that is supported by top leadership in both organizations. As previously discussed, DOD’s initial process to design NSPS was unrealistic and inappropriate; however, after a strategic reassessment, DOD adjusted its approach to reflect a more cautious, deliberative process that involved top DOD and OPM leadership. In our report on key practices for successful transformations, we noted that top leadership that is clearly and personally involved in transformations provides stability and an identifiable source for employees to rally around during tumultuous times. In addition, we noted that leadership should set the direction, pace, and tone for the transformation. In our prior reports and testimonies, we observed that top leadership must play a critical role in creating and sustaining high-performing organizations. Senior leaders from DOD and OPM are directly involved in the NSPS design process.
For example, the Secretary of Defense tasked the Secretary of the Navy to be the NSPS Senior Executive overseeing the implementation of NSPS. Also, the Under Secretary of Defense for Personnel and Readiness and the NSPS Senior Executive provided an open letter to all DOD civilian employees stating that DOD is tasked to design a transformation system for the department’s civilian employees that supports its national security mission while treating workers fairly and protecting their rights. In addition, the Principal Deputy Under Secretary of Defense for Personnel and Readiness, the Assistant Secretaries for Manpower and Reserve Affairs from each military service, and the OPM Senior Advisor to the Director for the Department of Defense are members of an integrated executive management team—the OIPT—that, among other things, provides overall policy and strategic advice on the implementation of NSPS. Similarly, senior-level executives from DOD and OPM are members of a group, known as the Senior Advisory Group, that provides advice on general NSPS conceptual, strategic, and implementation issues. Finally, senior leaders from DOD and the military components participated in town hall meetings at DOD installations worldwide to discuss the concept and design elements of NSPS. Experience shows that successful major change management initiatives in large private and public sector organizations can often take at least 5 to 7 years. This length of time and the frequent turnover of political leadership in the federal government have often made it difficult to obtain the sustained and inspired attention needed to make such changes. The development of the position of Deputy Secretary of Defense for Management, whose occupant would act as DOD’s Chief Management Officer, is essential to elevate, integrate, and institutionalize responsibility for the success of DOD’s overall business transformation efforts, including its new personnel management system.
As DOD embarks on a large-scale change initiative, such as its new personnel management system, ensuring sustained and committed leadership is crucial to developing a vision, initiating organizational change, maintaining open communications, and creating an environment that is receptive to innovation. Without the clear and demonstrated commitment of agency top leadership, organizational cultures will not be transformed, and new visions and ways of doing business will not take root.

Guiding Principles and Key Performance Parameters Steer Design Process

During the strategic reassessment of the NSPS design process, DOD and OPM senior leadership developed a set of guiding principles to direct efforts throughout all phases of NSPS development. We have reported that in bringing together the originating components, the new organization must have a clear set of principles and priorities that serve as a framework to help the organization create a new culture and drive employee behaviors. Principles are the core values of the new organization and can serve as an anchor that remains valid and enduring while organizations, personnel, programs, and processes may change. Focusing on these principles and priorities helps the organization maintain its drive toward achieving the goals of the transformation. According to DOD, its guiding principles translate and communicate the broad requirements and priorities outlined in the legislation into concise, understandable requirements that underscore the department’s purpose and intent in creating NSPS. The NSPS guiding principles are to: put mission first—support national security goals and strategic objectives; respect the individual—protect rights guaranteed by law; value talent, performance, leadership, and commitment to public service; be flexible, understandable, credible, responsive, and executable; ensure accountability at all levels; balance personnel interoperability with unique mission requirements; and be competitive and cost effective.
Senior DOD and OPM leadership also approved a set of key performance parameters, which define the minimum requirements or attributes of NSPS. The key performance parameters are: high-performing workforce and management—employees and supervisors are compensated and retained based on performance and contribution to mission; agile and responsive workforce management—the workforce can be easily sized, shaped, and deployed to meet changing mission requirements; credible and trusted—the system assures openness, clarity, and accountability; fiscally sound—aggregate increases in civilian payroll, at the appropriations level, will conform to Office of Management and Budget fiscal guidance, and managers will have flexibility to manage to budget; supporting infrastructure—information technology support and training and change management plans are available and funded; and schedule—NSPS will be operational and demonstrate success prior to November 2009. These principles and key performance parameters can serve as core values for human capital management at DOD—values that define the attributes that are intrinsically important to what the organization does and how it will do it. Furthermore, they represent the institutional beliefs and boundaries that are essential to building a new culture for the organization. Finally, they appropriately identify the need to support the mission and employees of the department, protect basic civil service principles, and hold employees accountable for performance.

Team Established to Manage the NSPS Design and Implementation Process

As previously discussed, DOD established a team to design and implement NSPS and manage the transformation process. Dedicating a strong and stable design and implementation team that will be responsible for the transformation’s day-to-day management is important to ensuring that it receives the focused, full-time attention needed to be sustained and successful.
Specifically, the design and implementation team is important to ensuring that various change initiatives are sequenced and implemented in a coherent and integrated way. Because a transformation process is a massive undertaking, the implementation team must have a “cadre of champions” to ensure that changes are thoroughly implemented and sustained over time. Establishing networks can help the design and implementation team conduct the day-to-day activities of the merger or transformation and help ensure that efforts are coordinated and integrated. To be most effective, clearly defined roles and responsibilities within this network assign accountability for parts of the implementation process, help reach agreement on work priorities, and build a code of conduct that will help all teams to work effectively. The Secretary of Defense appointed an NSPS Senior Executive to, among other things, design, develop, and establish NSPS. Under the Senior Executive’s authority, the PEO was established as the central policy and program office to conduct the design, planning and development, deployment, assessment, and full implementation of NSPS. Specifically, its responsibilities include designing the labor relations, appeals, and human resource/pay for performance systems; developing a communication strategy and training strategy; modifying personnel information technology; and drafting joint enabling regulations and internal DOD implementing regulations. As the central DOD-wide program office, the PEO provides direction and oversight of the components’ NSPS program managers, who are dual-hatted under their parent components and the NSPS PEO. These program managers also serve as their components’ action officers, participate in the development of NSPS, and plan and implement the deployment of NSPS. Figure 2 shows the organization of the NSPS design and implementation team.
Ambitious Timeline and Implementation Goals Established

DOD established an ambitious 18-month timeline and implementation goals for completing the design process and beginning the phased implementation of NSPS. Our work on successful mergers and transformations has noted that establishing a timeline with specific milestones allows stakeholders to track the organization’s progress toward its goals. Figure 3 shows the current timeline and implementation goals for designing and implementing NSPS. Although DOD established a clear timeline with specific implementation goals, it has allotted only about 6 months for completing the design process and beginning implementation of NSPS (as shown in the shaded area of figure 3). Specifically, the authorizing legislation provides for a meet and confer process of not less than 30 calendar days with employee representatives in order to attempt to reach agreement. However, as allowed by statute, DOD extended the 30-day meet and confer period with employee representatives. After the meet and confer process is concluded, the Secretary of Defense must notify the Congress of DOD’s intent to implement any portions of the proposal on which agreement has not been reached, and DOD may implement those provisions only after 30 calendar days have elapsed following that notification. In addition, DOD and OPM must jointly develop and issue the final NSPS regulations, which must go through an interagency coordination process before they are published in the Federal Register. Also, DOD must develop and conduct in-depth and varied training for its civilian employees, military and civilian supervisors, and managers. Moreover, DOD must modify its existing automated human resource information systems, including personnel and payroll transaction process systems departmentwide, before NSPS can become operational.
Finally, DOD plans to roll out the NSPS labor relations system and establish the National Security Labor Relations Board before the initial rollout of the NSPS performance management system in early fiscal year 2006. The board must be staffed with board members as well as about 100 professional staff members who will support the board. A large-scale organizational change initiative, such as DOD’s new personnel management system, is a substantial commitment that will take years to complete and therefore must be carefully and closely managed. As a result, it is essential to establish and track implementation goals and establish a timeline to pinpoint performance shortfalls and gaps and suggest midcourse corrections. While it is appropriate to develop and integrate personnel management systems within the department in a quick and seamless manner, moving too quickly or prematurely can significantly raise the risk of doing it wrong. An ambitious timeline is reasonable only insofar as it does not affect the quality of the human capital management system that is created. In recent hearings on the NSPS proposed regulations, we testified that DOD’s new personnel management system will have far-reaching implications for the management of the department and for civil service reform across the federal government. We further testified that NSPS could, if designed and implemented properly, serve as a model for governmentwide transformation. However, if not properly designed and implemented, NSPS could impede progress toward a more performance- and results-based system for the federal government as a whole.

Communication Strategy Not Comprehensive

DOD developed and implemented a written communication strategy document that provides a structured and planned approach to communicating timely and consistent information about NSPS, but this strategy is not comprehensive.
It does not contain some elements that we have identified as important to successful communication during transformations. As a result, the written communication strategy document may not facilitate two-way communication among employees, employee representatives, and management, which is central to forming the effective partnerships that are vital to the success of any organization. Specifically, the strategy does not identify all key internal stakeholders and their concerns. For example, the strategy acknowledges that employee representatives play an important role in the design and implementation of NSPS, but it does not identify them as a key stakeholder. Instead, DOD’s written communication strategy document characterizes union leadership as a “detractor,” in part because of its criticism of NSPS. Consistent with this view, DOD identified the following four objectives as its most urgent communications priorities: (1) demonstrate the rationale for and the benefits of NSPS, (2) express DOD’s commitment to ensuring that NSPS is applied fairly and equitably throughout the organization, (3) demonstrate openness and transparency in the design and process of converting to NSPS, and (4) mitigate and counter any potential criticism of NSPS from such detractors as unions and their support groups. Experience shows that failure to adequately consider a wide variety of people and cultural issues can lead to unsuccessful transformations. Furthermore, although the written communication strategy document identifies key messages for those internal and external stakeholders that are identified, it does not tailor these messages to specific stakeholder groups. For example, the strategy does not tailor key messages to such groups of employees as human resource personnel, DOD executives and flag officers, supervisors, and managers, even though these employees may have divergent interests and information needs.
Tailoring information helps employees feel that their concerns are specifically addressed. We have reported that organizations undergoing a transformation should develop a comprehensive communication strategy that reaches out to employees, customers, and stakeholders; seeks to genuinely engage them in the transformation process; and facilitates an honest, two-way exchange with, and feedback from, those groups.

NSPS Design Process Has Involved Employees

While the design process has involved employees through many mechanisms, including focus groups, town hall meetings, an NSPS Web site for employee comments, and meetings with employee representatives, it has not included employee representatives on the working groups that drafted the design options. The composition of the team is important because it helps employees see that they are being represented and that their views are being considered in the decision-making process. A successful transformation must provide for meaningful involvement by employees and their representatives to, among other things, gain their input into and understanding of the changes that are occurring in the organization. Employee involvement strengthens the transformation process by including frontline perspectives and experiences. Further, employee involvement helps increase employees’ understanding and acceptance of organizational goals and objectives and helps them gain ownership of new policies and procedures. Involving employees in planning helps to develop agency goals and objectives that incorporate insights about operations from a frontline perspective. It can also serve to increase employees’ understanding and acceptance of organizational goals and improve motivation and morale. The PEO sponsored a number of focus group sessions and town hall meetings at various sites across DOD and around the world to provide employees and managers an opportunity to participate in the development of NSPS.
During a 3-week period beginning in July 2004, over 100 focus groups were held throughout DOD, including overseas locations. The purpose of the focus groups was to elicit perceptions and concerns about current personnel policies and practices as well as new ideas from the DOD workforce to inform the NSPS design process. Separate focus groups were held for employees, civilian and military supervisors, and managers and practitioners from the personnel, legal, and equal employment opportunity communities. According to DOD officials, bargaining unit employees and employee representatives were invited to participate. DOD officials stated that over 10,000 comments, ideas, and suggestions were received during the focus group sessions and were summarized and provided to NSPS working groups for use in developing options for the labor relations, appeals, adverse actions, and personnel design elements of NSPS. In addition, town hall meetings were held and, according to DOD, are still being conducted at DOD facilities around the world. According to DOD officials, these town hall meetings have provided an opportunity to communicate with the workforce, provide the status of the design and development of NSPS, and solicit thoughts and ideas. The format for town hall meetings included an introductory presentation by a senior leader followed by a question and answer session in which any employee in the audience was free to ask a question or make a comment. To facilitate the widest possible dissemination, some of the town hall meetings were broadcast live, as well as videotaped and rebroadcast on military television channels and Web sites. DOD’s NSPS Web site allowed DOD employees and other interested parties to view and comment on the proposed regulations and provided the most recent information and announcements regarding NSPS. 
After the proposed NSPS regulations were published in the Federal Register, there was a 30-day public comment period, providing all interested parties the opportunity to submit comments and recommendations on the content of the proposal. The proposed regulations were published on February 14, 2005, and the 30-day comment period ended on March 16, 2005. During this period, according to DOD, it received more than 58,000 comments. Prior to the publication of the proposed NSPS regulations, DOD and OPM conducted 10 joint meetings with officials of DOD’s 43 labor unions to discuss NSPS design elements. According to DOD officials, these meetings involved as many as 80 union leaders at any one time and addressed a variety of topics, including (1) the reasons change is needed and the department’s interests; (2) the results of departmentwide focus group sessions held with a broad cross-section of DOD employees; (3) the proposed NSPS implementation schedule; (4) employee communications; and (5) proposed design options in the areas of labor relations and collective bargaining, adverse actions and appeals, and pay and performance management. According to DOD officials, these meetings provided the opportunity to discuss the design elements and proposals under consideration for NSPS and to solicit feedback from employee representatives. According to DOD, the focus group sessions and town hall meetings, as well as the working groups and union meetings, ensured that DOD employees, managers, supervisors, employee representatives, and other stakeholders were involved in and given ample opportunity to provide input into the design and implementation of NSPS. However, opportunities for employee involvement were limited between the conclusion of the town hall meetings and focus groups in July 2004 and the publication of the proposed NSPS regulations in February 2005; the primary means for employees to provide feedback during this time was through the NSPS Web site. 
DOD Faces Multiple Challenges in Implementing NSPS As DOD implements its new personnel management system, it will face multiple implementation challenges in both the early and later stages of implementation. At recent hearings on the proposed NSPS regulations, we highlighted multiple challenges: (1) establishing an overall communications strategy, (2) providing adequate resources for the new system, (3) involving employees and other stakeholders in implementing the system, (4) ensuring sustained and committed leadership, and (5) evaluating the new personnel management system after it has been implemented. Early Implementation Challenges Establishing an overall communications strategy. A significant challenge for DOD is to ensure an effective and ongoing two-way communications strategy, given its size, geographically and culturally diverse audiences, and different command structures across DOD organizations. We have reported that a communications strategy that creates shared expectations about, and reports related progress on, the implementation of the new system is a key practice of a change management initiative. The communications strategy must include the active and visible involvement of a number of key players, including the Secretary of Defense, and a variety of communication means and mediums for successful implementation of the system. DOD acknowledges that a comprehensive outreach and communications strategy is essential for designing and implementing its new personnel management system, but the proposed regulations do not identify a process for continuing involvement of employees in the planning, development, and implementation of NSPS. Providing adequate resources for implementing the new system. Experience has shown that additional resources are necessary to ensure sufficient planning, implementation, training, and evaluation for human capital reform. 
According to DOD, the implementation of NSPS will result in costs for, among other things, developing and delivering training, modifying automated personnel information systems, and starting up and sustaining the National Security Labor Relations Board. Major cost drivers in implementing pay-for-performance systems are the direct costs associated with salaries and training. DOD estimates that the overall cost associated with implementing NSPS will be approximately $158 million through fiscal year 2008. However, it has not completed an implementation plan for NSPS, including an information technology plan and a training plan; thus, the full extent of the resources needed to implement NSPS may not be well understood at this time. Involving employees and other stakeholders in implementing the system. DOD faces a significant challenge in involving, and continuing to involve, its employees, employee representatives, and other stakeholders in implementing NSPS. DOD’s proposed NSPS regulations, while providing for continuing collaboration with employee representatives, do not identify a process for the continuing involvement of employees and other stakeholders in the planning, development, and implementation of NSPS. The active involvement of all stakeholders will be critical to the success of NSPS. The involvement of employees and their representatives, both directly and indirectly, is crucial to the success of new initiatives, including implementing a pay-for-performance system. High-performing organizations have found that actively involving employees and stakeholders, such as unions or other employee associations, when developing results-oriented performance management systems helps improve employees’ confidence and belief in the fairness of the system and increases their understanding and ownership of organizational goals and objectives. 
This involvement must be early, active, and continuing if employees are to gain a sense of understanding and ownership of the changes that are being made. Later Implementation Challenges Ensuring sustained and committed leadership. As DOD implements this massive human capital reform, its challenge will be to elevate, integrate, and institutionalize leadership responsibility for NSPS to ensure its success. DOD may face a future leadership challenge when the NSPS Senior Executive and the PEO transition out of existence once NSPS is fully implemented. According to a PEO official, at that time, ongoing implementation responsibility for NSPS would come under the Civilian Personnel Management Service, which is part of the Office of the Under Secretary of Defense for Personnel and Readiness. In recent testimony on the transformation of DOD business operations, we stated that as DOD embarks on large-scale business transformation efforts, such as NSPS, the complexity and long-term nature of these efforts require the development of an executive position capable of providing strong and sustained change management leadership across the department, over a number of years and various administrations. One way to ensure such leadership would be to create by legislation a full-time executive-level II position for a chief management official, who would serve as the Deputy Secretary of Defense for Management. This position would elevate, integrate, and institutionalize the high-level attention essential for ensuring that a strategic business transformation plan, as well as the business policies, procedures, systems, and processes necessary for successfully implementing and sustaining overall business transformation efforts like NSPS within DOD, is implemented and sustained. 
In previous testimony on DOD’s business transformation efforts, we identified the lack of clear and sustained leadership for overall business transformations as one of the underlying causes that has impeded prior DOD reform efforts. Evaluating the new personnel management system. Evaluating the impact of NSPS will be an ongoing challenge for DOD. This is especially important because NSPS would give managers more authority and responsibility for managing the new personnel system. High-performing organizations continually review and revise their human capital management systems based on data-driven lessons learned and changing needs in the work environment. Collecting and analyzing data will be the fundamental building block for measuring the effectiveness of these approaches in support of the mission and goals of the department. According to DOD, the department is planning to establish procedures to evaluate the implementation of its new personnel management system. During testimony on the proposed NSPS regulations, we stated that DOD should consider conducting evaluations that are broadly modeled on demonstration projects. Under the demonstration project authority, agencies must evaluate and periodically report on results, implementation of the demonstration project, costs and benefits, impacts on veterans and other equal employment opportunity groups, adherence to merit system principles, and the extent to which the lessons learned from the project can be applied governmentwide. We further testified that a set of balanced measures addressing a range of results, and customer, employee, and external partner issues may also prove beneficial. An evaluation such as this would facilitate congressional oversight; allow for any midcourse corrections; assist DOD in benchmarking its progress with other efforts; and provide for documenting best practices and lessons learned with employees, stakeholders, other federal agencies, and the public. 
Conclusions DOD’s efforts to design and implement a new personnel management system represent a huge undertaking. However, if not properly designed and implemented, the new system could severely impede DOD’s progress toward a more performance- and results-based system that it is striving to achieve. Although DOD’s process to design its new personnel management system represents a phased, deliberative process, it does not fully reflect some key practices of successful transformations. Because DOD has not fully addressed all of these practices, it does not have a comprehensive written communication strategy document that effectively addresses employee concerns and their information needs, and facilitates two-way communication between employees, employee representatives, and management. Without a comprehensive written communication strategy document, DOD may be hampered in achieving employee buy-in, which could lead to an unsuccessful implementation of the system. In addition, evaluating the impact of NSPS will be an ongoing challenge for DOD. Although DOD has plans to establish procedures to evaluate NSPS, it is critical that these procedures be adequate to fully measure the effectiveness of the program. Specifically, adequately designed evaluation procedures include results-oriented performance measures and reporting requirements that facilitate DOD’s ability to effectively evaluate and report on NSPS’s results. Without procedures that include outcome measures and reporting requirements, DOD will lack the visibility and oversight needed to benchmark progress, make system improvements, and provide the Congress with the assessments needed to determine whether NSPS is truly the model for governmentwide transformation in human capital management. 
Recommendations for Executive Action To improve the comprehensiveness of the NSPS communication strategy, we recommend that the Secretary of Defense direct the NSPS Senior Executive and NSPS Program Executive Office to take the following two actions: Identify all key internal stakeholders and their concerns. Tailor and customize key messages to be delivered to groups of employees to meet their divergent interests and information needs. To evaluate the impact of DOD’s new personnel management system, we recommend that the Secretary of Defense direct the NSPS Senior Executive and NSPS Program Executive Office to take the following action: Develop procedures for evaluating NSPS that contain results-oriented performance measures and reporting requirements. These evaluation procedures could be broadly modeled on the evaluation requirements of the OPM demonstration projects. Agency Comments and Our Evaluation DOD provided written comments on a draft of this report. The department did not concur with our recommendation to identify all key internal stakeholders and their concerns. The department partially concurred with our recommendation to tailor and customize key messages to be delivered to groups of employees to meet their divergent interests and information needs. Also, the department partially concurred with our recommendation to develop procedures for evaluating NSPS that contain results-oriented performance measures and reporting requirements. DOD did not concur with our recommendation that the department identify all key internal stakeholders and their concerns. The department stated that, among other things, it adopted a broad-based, event-driven approach to the design and implementation of NSPS that included a multifaceted communications outreach strategy to inform and involve key stakeholders, and that it took great care to ensure that materials and messages addressed stakeholders’ concerns, both known and anticipated. 
However, our review of DOD’s written communication strategy document showed that not all key internal stakeholders and their concerns were identified. For example, the written communication strategy document does not identify employee representatives as a key stakeholder but, instead, characterizes union leadership as “NSPS’ biggest detractor.” DOD notes that, since the development and implementation of the written communication strategy document, specific plans were developed to identify key internal and external stakeholders and to provide key messages and communications products to inform those groups. DOD provided us with these plans after we provided the department with our draft report for comment. Our review of these plans shows that they are not comprehensive. For example, the plans for the most part do not identify employee representatives as a key stakeholder or identify their concerns. Consequently, we continue to believe that our recommendation has merit and should be implemented. DOD partially concurred with our recommendation that the department tailor and customize key messages to be delivered to groups of employees to meet their divergent interests and information needs. The department stated that it believes that it has been successful so far in developing, customizing, and delivering key messages to employees and provided us with several examples to illustrate its efforts. Although DOD’s written communication strategy document contained key messages for some employee groups, the messages were general in content and not tailored to specific employee groups. DOD acknowledges that each stakeholder group has a unique focus and recently released NSPS brochures tailored to such groups of employees as human resource personnel, senior leaders, supervisors and managers, and employees. DOD provided us with these brochures after we provided the department with our draft report for comment. 
Our review of these brochures shows that they do in fact tailor and customize key messages for some, but not all, employee groups. Furthermore, we believe that DOD’s written communication strategy document should serve as the single, comprehensive source of DOD’s key messages, which are tailored to and customized for groups of employees. Consequently, we continue to believe that this recommendation has merit and should be implemented. DOD partially concurred with our recommendation to develop procedures for evaluating NSPS that contain results-oriented performance measures and reporting requirements that could be broadly modeled on the evaluation requirements of the OPM demonstration projects. The department stated that it has begun developing an evaluation plan and will ensure that the plan contains results-oriented performance measures and reporting mechanisms. If the department follows through with this effort, we believe that it will be responsive to our recommendation. DOD’s comments are reprinted in appendix III. DOD also provided technical comments, which we have incorporated in the final report where appropriate. We are sending copies of this report to the Chairman and Ranking Member, Senate Committee on Armed Services; the Chairman and Ranking Member, Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Member, Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia, Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Member, House Committee on Armed Services; the Chairman and Ranking Member, Subcommittee on the Federal Workforce and Agency Organization, House Committee on Government Reform; and other interested congressional parties. We also are sending copies to the Secretary of Defense and Director of the Office of Personnel Management. We will make copies available to other interested parties upon request. 
This report also will be made available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5559 or by e-mail at [email protected]. For further information on governmentwide human capital issues, please contact Eileen R. Larence, Director, Strategic Issues, at (202) 512-6512 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. GAO staff who made major contributions to this report are listed in appendix IV. Scope and Methodology In conducting our review of the Department of Defense’s (DOD) National Security Personnel System (NSPS), we met with officials in key offices within DOD and the Office of Personnel Management (OPM) that have responsibility for designing and implementing DOD’s new performance management system. We also met with DOD employee representatives, whose members are affected by the transformation. We conducted our work in Washington, D.C., at DOD, including the NSPS Program Executive Office (PEO) and NSPS Program Management Offices in the Army, the Navy, the Marine Corps, the Air Force, and Washington Headquarters Service. We also met with members of the NSPS Overarching Integrated Product Team (OIPT) and Senior Advisory Group. At OPM, we met with the Senior Advisor to the Director for the Department of Defense and Senior Policy Advisor and Chief Human Capital Officer in the Office of the Director. We also met with key officials in OPM’s Office of Congressional Relations, Division for Strategic Human Resources Policy, Homeland Security and Intelligence Group in the Division for Human Capital Leadership and Merit System Accountability, and the Office of the Chief Financial Officer. In addition, we met with the OPM co-chairs of each of the DOD working groups that designed NSPS. 
We met with representatives from the United Defense Workers Coalition, which represents 36 DOD employee unions, as well as employee representatives for the Fraternal Order of Police and National Association of Independent Labor, which are not members of the Coalition. We contacted the other non-Coalition unions, but their representatives told us that they had not been actively involved in the NSPS design process and, therefore, declined our offer to meet with them. Finally, we met in Washington, D.C., with key officials in other federal agencies that are statutorily involved in the NSPS design process: Federal Labor Relations Authority, Federal Mediation and Conciliation Service, and U.S. Merit Systems Protection Board. To describe DOD’s design process, we examined the authorizing legislation and other applicable laws and regulations and collected and analyzed documentary and testimonial information from key sources. We met with the Director and Deputy Director of the NSPS PEO and the DOD and OPM co-chairs of all six working groups; members of the OIPT, including the OPM co-chair, and Senior Advisory Group; DOD employee representatives; and experts in federal labor relations and federal adverse actions and personnel appeals systems. We also examined NSPS policy guidance, directives, draft regulations, instructions, manuals, and memorandums related to the design process and NSPS charters outlining the roles and responsibilities of the OIPT and PEO. To evaluate the extent to which DOD’s process reflects elements of successful transformations, we reviewed prior GAO reports, testimonies, and forums on mergers and organizational transformations to identify assessment criteria, and we applied those criteria to the descriptive information collected for the first objective. 
Although there are a total of nine key practices of successful transformations, our evaluation focused on six key practices: (1) ensure top leadership drives the transformation, (2) focus on a key set of principles and priorities at the outset of the transformation, (3) set implementation goals and a timeline to build momentum and show progress from day one, (4) dedicate an implementation team to manage the transformation process, (5) establish a communication strategy to create shared expectations and report related progress, and (6) involve employees to obtain their ideas and gain their ownership for the transformation. We did not evaluate the key practice “establishes a coherent mission and integrated strategic goals to guide the transformation” because we have previously reported on the department’s strategic planning efforts for civilian personnel and assessed whether DOD and selected defense components’ goals and objectives contained in strategic plans for civilian personnel were aligned with the overarching missions of the organizations. We did not apply two other key practices, “uses a performance management system to define responsibility and assure accountability for change” and “builds a world-class organization,” because it would be premature to apply them to the NSPS design process, given that DOD has considerable work ahead to design and implement NSPS and assess the overall system. To identify the most significant challenges DOD faced in developing NSPS, we interviewed officials from DOD, OPM, and other federal agencies as well as representatives from DOD unions. We also examined related documentation, previously identified, and reviewed prior GAO reports, testimonies, and observations related to these challenges. 
Data on DOD labor unions and the number of employees associated with each union were compiled by DOD from three sources: (1) the OPM book, entitled Union Recognition in the Federal Government, (2) data from the Defense Civilian Personnel Data System, and (3) a DOD survey of the military departments and defense agencies. The data are current as of June 2005. To assess the reliability of these data, we interviewed the DOD official responsible for compiling the data and performed some basic reasonableness checks of the data against other sources of information (e.g., previous DOD reports that identified DOD labor unions in past years and information directly from unions). However, we were unable to determine the reliability of the precise numbers of employees represented by each union. Because of this, and since some of the data are not current, these data are only sufficiently reliable for use as estimates rather than precise numbers of union employees. We use these data in appendix II to identify current DOD labor unions, an estimate of the number of employees represented by each union, and which unions belong to the United Defense Workers Coalition. We conducted our review from October 2004 through June 2005 in accordance with generally accepted government auditing standards. We include a comprehensive list of related GAO products on DOD’s civilian personnel management at the end of this report. DOD Labor Unions, Estimated Number of Employees Represented, and Membership in the United Defense Workers Coalition Table 1 lists current DOD labor unions, the estimated number of employees represented by each union, and which unions belong to the United Defense Workers Coalition. Comments from the Department of Defense GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the contact named above, Sandra F. Bell, Renee S. Brown, Rebecca L. Galek, Barbara L. Joyce, Julia C. Matta, Mark A. Pross, William J. Rigazio, John S. Townes, and Susan K. 
Woodward made key contributions to this report. Related GAO Products Questions for the Record Related to the Department of Defense’s National Security Personnel System. GAO-05-771R. Washington, D.C.: June 14, 2005. Questions for the Record Regarding the Department of Defense’s National Security Personnel System. GAO-05-770R. Washington, D.C.: May 31, 2005. Post-hearing Questions Related to the Department of Defense’s National Security Personnel System. GAO-05-641R. Washington, D.C.: April 29, 2005. Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005. Human Capital: Preliminary Observations on Proposed Regulations for DOD’s National Security Personnel System. GAO-05-559T. Washington, D.C.: April 14, 2005. Human Capital: Preliminary Observations on Proposed Department of Defense National Security Personnel System Regulations. GAO-05-517T. Washington, D.C.: April 12, 2005. Human Capital: Preliminary Observations on Proposed DOD National Security Personnel System Regulations. GAO-05-432T. Washington, D.C.: March 15, 2005. Department of Defense: Further Actions Are Needed to Effectively Address Business Management Problems and Overcome Key Business Transformation Challenges. GAO-05-140T. Washington, D.C.: November 18, 2004. DOD Civilian Personnel: Comprehensive Strategic Workforce Plans Needed. GAO-04-753. Washington, D.C.: June 30, 2004. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003. Human Capital: Building on DOD’s Reform Efforts to Foster Governmentwide Improvements. GAO-03-851T. Washington, D.C.: June 4, 2003. Human Capital: DOD’s Civilian Personnel Strategic Management and the Proposed National Security Personnel System. GAO-03-493T. Washington, D.C.: May 12, 2003. Defense Transformation: DOD’s Proposed Civilian Personnel System and Governmentwide Human Capital Reform. GAO-03-741T. 
Washington, D.C.: May 1, 2003. Defense Transformation: Preliminary Observations on DOD’s Proposed Civilian Personnel Reforms. GAO-03-717T. Washington, D.C.: April 29, 2003. DOD Personnel: DOD Actions Needed to Strengthen Civilian Human Capital Strategic Planning and Integration with Military Personnel and Sourcing Decisions. GAO-03-475. Washington, D.C.: March 28, 2003.
The Department of Defense's (DOD) new personnel system--the National Security Personnel System (NSPS)--will have far-reaching implications not just for DOD, but for civil service reform across the federal government. The National Defense Authorization Act for Fiscal Year 2004 gave DOD significant authorities to redesign the rules, regulations, and processes that govern the way that more than 700,000 defense civilian employees are hired, compensated, promoted, and disciplined. In addition, NSPS could serve as a model for governmentwide transformation in human capital management. However, if not properly designed and effectively implemented, it could severely impede progress toward a more performance- and results-based system for the federal government as a whole. This report (1) describes DOD's process to design its new personnel management system, (2) analyzes the extent to which DOD's process reflects key practices for successful transformations, and (3) identifies the most significant challenges DOD faces in implementing NSPS. DOD's current process to design its new personnel management system consists of four stages: (1) development of design options, (2) assessment of design options, (3) issuance of proposed regulations, and (4) a statutory public comment period, a meet and confer period with employee representatives, and a congressional notification period. DOD's initial design process was unrealistic and inappropriate. However, after a strategic reassessment, DOD adjusted its approach to reflect a more cautious and deliberative process that involved more stakeholders. DOD's NSPS design process generally reflects four of six selected key practices for successful organizational transformations. First, DOD and OPM have developed a process to design the new personnel system that is supported by top leadership in both organizations. Second, from the outset, a set of guiding principles and key performance parameters have guided the NSPS design process. 
Third, DOD has a dedicated team in place to design and implement NSPS and manage the transformation process. Fourth, DOD has established a timeline, albeit ambitious, and implementation goals. The design process, however, is lacking in two other practices. First, DOD developed and implemented a written communication strategy document, but the strategy is not comprehensive. It does not identify all key internal stakeholders and their concerns, and does not tailor key messages to specific stakeholder groups. Failure to adequately consider a wide variety of people and cultural issues can lead to unsuccessful transformations. Second, while the process has involved employees through town hall meetings and other mechanisms, it has not included employee representatives on the working groups that drafted the design options. It should be noted that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirements to include employee representatives in the development of DOD's new labor relations system authorized as part of NSPS. A successful transformation must provide for meaningful involvement by employees and their representatives to gain their input into and understanding of the changes that will occur. DOD will face multiple implementation challenges. For example, in addition to the challenges of continuing to involve employees and other stakeholders and providing adequate resources to implement the system, DOD faces the challenges of ensuring an effective, ongoing two-way communication strategy and evaluating the new system. In recent testimony, GAO stated that DOD's communication strategy must include the active and visible involvement of a number of key players, including the Secretary of Defense, for successful implementation of the system. 
Moreover, DOD must ensure sustained and committed leadership after the system is fully implemented and the NSPS Senior Executive and the Program Executive Office transition out of existence. To provide sustained leadership attention to a range of business transformation initiatives, like NSPS, GAO recently recommended the creation of a chief management official at DOD.
Background

MRI and CT services are two types of medical imaging that aid in the diagnosis and treatment of myriad diseases and disorders. Medicare reimburses providers for performing the services and, subsequently, interpreting the results. Payment for the performance of the service can be made through different payment systems, depending on where the service is performed. In 2010, 6.8 million MRI and CT services were performed in a physician office or independent diagnostic testing facility (IDTF), representing about 23 percent of all MRI and CT services received by Medicare fee-for-service (FFS) beneficiaries. The Centers for Medicare & Medicaid Services (CMS) has implemented several policies to limit self-referral, and the Medicare Payment Advisory Commission (MedPAC) and other researchers have proposed further reforms.

MRI and CT Services

Medical imaging is a noninvasive process used to obtain pictures of the internal anatomy or function of the anatomy using one of many different types of imaging equipment and media for creating the image. MRI and CT services are two of the six medical imaging modalities. MRI services use magnets, radio waves, and computers to create images of internal body tissues. CT services use ionizing radiation and computers to produce cross-sectional images of internal organs and body structures. For certain advanced imaging services, contrast agents, such as barium or iodine solutions, are administered to patients orally or intravenously. By using contrast, sometimes referred to as “dye,” as part of the imaging examination, providers can view soft tissue and organ function more clearly. MRI and CT services help diagnose and treat many diseases and disorders such as different types of cancer, cardiovascular diseases, and musculoskeletal disorders. They can also reduce the need for more-invasive medical procedures and improve patient outcomes.

Medicare Billing and Payment Policies for Advanced Imaging Services

Medicare payments for advanced imaging services are separated into two components—the technical component (TC) and the professional component (PC).
The TC is intended to cover the cost of performing a test, including the costs for equipment, supplies, and nonphysician staff. The PC is intended to cover the provider’s time in interpreting the image and writing a report on the findings. The PC and TC can be billed together, on what is called a global claim. The components can also be billed separately. For instance, a global claim could be billed if the same provider performs and interprets the examination, whereas the TC and PC could be billed separately if the performing and interpreting providers are different. Typically, the Medicare payment for the TC is substantially higher than the payment for the PC. For instance, for a CT of the pelvis with dye billed under the 2010 Medicare physician fee schedule, the TC accounted for 79 percent of the total payment, and the PC accounted for 21 percent. Medicare reimburses providers through different payment systems depending on where the advanced imaging service is performed. When an advanced imaging service is performed in a provider’s office or an IDTF, both the PC and TC are reimbursed under the Medicare physician fee schedule. Alternatively, when the service is performed in an institutional setting, such as a hospital outpatient or inpatient department, the provider is reimbursed under the Medicare physician fee schedule for the PC, while the TC is reimbursed under a different Medicare payment system, according to the setting in which the service was provided. For instance, the TC of an advanced imaging service performed in a hospital outpatient department is reimbursed under the Medicare hospital outpatient payment system, while a service performed in a hospital inpatient setting is reimbursed through a facility payment paid under Medicare Part A. 
2010 Advanced Imaging Utilization by Setting and Medicare Physician Fee Schedule Expenditures

In 2010, Medicare FFS beneficiaries received 30.0 million advanced imaging services, approximately 6.8 million (23 percent) of which were performed in an IDTF or physician’s office. Of the 6.8 million advanced imaging services performed in an IDTF or physician’s office, 2.9 million were MRI services and 3.9 million were CT services. The remaining 23.2 million advanced imaging services were performed in other settings, such as hospital inpatient or outpatient departments, and their associated TCs were billed through different payment systems (see fig. 1). The total expenditures for all advanced imaging services billed under the Medicare physician fee schedule, including TCs and PCs, reached $4.2 billion in 2010. Numerous policies have been implemented or proposed by CMS, MedPAC, or other researchers that are designed to limit self-referral or reduce inappropriate utilization of advanced imaging services. These policies can affect self-referral or advanced imaging utilization through various means such as prohibiting different types of physician self-referral, informing beneficiaries of physician self-referral, mandating accreditation of staff performing MRI and CT services, improving payment accuracy, reducing payments for self-referred services, and ensuring services are clinically appropriate. One type of physician self-referral arrangement that CMS has prohibited is “per-click” self-referral arrangements where, for instance, a physician leases an imaging machine to a hospital, refers patients for imaging services, and then is paid on a per-service basis by the hospital.
CMS has also solicited comments on prohibiting self-referral of diagnostic tests provided as an ancillary service in a physician’s office that are not usually provided during an office visit, because the key rationale for permitting self-referral of such services, namely that receiving a diagnostic service during the same office visit in which a physician orders a test is convenient for beneficiaries, does not apply to them. MedPAC, in its June 2010 report to Congress, noted that MRI and CT services were performed on the same day as an office visit less than a quarter of the time, with only 8.4 percent of MRI services of the brain being performed on the same day as an office visit. Appendix II lists a select number of such policies, in addition to these two, that have been implemented or put forth by CMS, MedPAC, and other researchers.

Self-Referred MRI and CT Services and Expenditures Grew Overall, While Non-Self-Referred Services and Expenditures Grew Slower or Decreased

From 2004 through 2010, the number of self-referred MRI and CT services performed in a provider’s office and non-self-referred MRI and CT services performed in a provider’s office or IDTF increased, with the larger increase for self-referred services. Similarly, expenditures for self-referred advanced imaging services also increased over this period, and this increase was larger than the changes in expenditures for advanced imaging services that were not self-referred. Over the period we reviewed, the share of advanced imaging services that were self-referred also increased overall and across all provider specialties we examined.
Number of Self-Referred and Non-Self-Referred MRI and CT Services Increased Overall from 2004 to 2010, with the Larger Increase among Self-Referred Services

While the number of self-referred MRI services performed in a provider’s office and non-self-referred MRI services performed in a provider’s office or IDTF both increased from 2004 through 2010, a significantly larger increase occurred among the self-referred services. Specifically, the number of self-referred MRI services increased from about 380,000 services in 2004 to about 700,000 services in 2010—an increase of more than 80 percent (see fig. 2). In contrast, the number of non-self-referred MRI services grew about 12 percent over the same time period, from about 1.97 million services in 2004 to about 2.21 million services in 2010. Despite an overall increase during this time, both self-referred and non-self-referred services declined at some point during the years of our study. However, the number of self-referred services grew faster in the earlier years and declined less in the later years than the number of non-self-referred services. Similar to MRI services, the number of self-referred and non-self-referred CT services both increased from 2004 through 2010, with a considerably larger increase occurring in self-referred services. Specifically, the number of self-referred CT services more than doubled from 2004 through 2010, growing from about 700,000 services to about 1.45 million services (see fig. 3). In contrast, the number of non-self-referred CT services increased about 30 percent during these years, from about 1.90 million services to about 2.48 million services. Although the number of both self-referred and non-self-referred CT services increased over the period of our study, the number of non-self-referred CT services decreased from 2009 through 2010.
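The growth rates cited above follow directly from the service counts. A minimal sketch that reproduces them, using the rounded counts stated in this report:

```python
def pct_growth(start, end):
    """Percent change from a starting count to an ending count."""
    return (end - start) / start * 100

# Approximate service counts from the report (2004 vs. 2010)
self_mri = pct_growth(380_000, 700_000)          # more than 80 percent
non_self_mri = pct_growth(1_970_000, 2_210_000)  # about 12 percent
self_ct = pct_growth(700_000, 1_450_000)         # more than doubled
non_self_ct = pct_growth(1_900_000, 2_480_000)   # about 30 percent

print(f"Self-referred MRI: +{self_mri:.0f}%, non-self-referred MRI: +{non_self_mri:.0f}%")
print(f"Self-referred CT: +{self_ct:.0f}%, non-self-referred CT: +{non_self_ct:.0f}%")
```

Because the counts are rounded, the computed percentages are approximate; they nonetheless match the figures reported in the text.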
The number of self-referred advanced imaging services increased from 2004 through 2010, even after accounting for changes in the number of Medicare FFS beneficiaries. Specifically, the number of self-referred MRI services per 1,000 Medicare FFS beneficiaries grew from 10.8 in 2004 to 20.0 in 2010—an increase of about 85 percent. Similarly, the number of self-referred CT services per 1,000 Medicare FFS beneficiaries more than doubled, growing from about 19.6 in 2004 to 41.2 in 2010.

Self-Referred MRI and CT Expenditures Grew More Than Non-Self-Referred Expenditures Overall, with Non-Self-Referred MRI Expenditures Declining

Expenditures for self-referred MRI services grew overall from 2004 through 2010, while expenditures for non-self-referred MRI services declined. Specifically, self-referred MRI expenditures grew about 55 percent during the time of our review, from approximately $239 million in 2004 to about $370 million in 2010 (see fig. 4). In contrast, expenditures for non-self-referred MRI services decreased about 8.5 percent during the same period. Expenditures for both self-referred and non-self-referred MRI services increased rapidly from 2004 through 2006, then decreased sharply in 2007. These declines in 2007 corresponded with the first year of implementation of a DRA provision that reduced fees for certain advanced imaging services. Since the declines in 2007, expenditures for non-self-referred MRI services have declined further while self-referred expenditures have increased. Relative to 2004, expenditures for both self-referred and non-self-referred CT services have grown through 2010, but the increase was larger for self-referred CT services (see fig. 5). Specifically, expenditures for self-referred CT services increased from $204 million in 2004 to about $340 million in 2010, an increase of about 67 percent.
In contrast, expenditures for non-self-referred CT services increased from about $609 million in 2004 to about $642 million in 2010, an increase of about 5 percent. Because the self-referred advanced imaging services grew at a greater rate than non-self-referred services from 2004 through 2010, the proportion of MRI and CT services that were self-referred increased during that time period. Specifically, the proportion of MRI services that were self-referred increased from 16.3 percent in 2004 to 24.2 percent in 2010. Similarly, the proportion of CT services that were self-referred grew from 26.8 percent in 2004 to 37.0 percent in 2010. Consistent with the overall trend, the proportion of MRI and CT services that were self-referred increased from 2004 through 2010 for all provider specialties that we studied. (For results by provider specialties, see appendix III.)

Self-Referring Providers Referred Substantially More Advanced Imaging Services on Average Than Did Other Providers

We found that, in 2010, providers that self-referred beneficiaries for MRI and CT services referred substantially more of those services than did providers who did not self-refer these services, even after we accounted for differences in practice size, specialty, geography, and patient characteristics. We also found that the year after providers purchased MRI or CT equipment, leased MRI or CT equipment, or joined a group practice that self-referred, they increased the number of services they referred when compared with providers that did not begin to self-refer advanced imaging services.

Self-Referring Providers Referred Substantially More MRI and CT Services Than Other Providers, Regardless of Practice or Patient Characteristics

In 2010, self-referring providers referred substantially more advanced imaging services than providers who did not self-refer such services that year. Specifically, providers that self-referred at least one beneficiary for an MRI service in 2010 averaged 36.4 MRI referrals, compared with an average of 14.4 MRI referrals for non-self-referrers. Similarly, providers that self-referred at least one beneficiary for a CT service in 2010 averaged 73.2 CT referrals, or 2.3 times as many as the 32.3 CT referrals averaged by non-self-referring providers. About 10 percent of all MRI and CT services referred by self-referring providers in 2010 were ordered, performed, and interpreted by the same provider. Certain efficiencies may be gained when the same provider orders, performs, and interprets an advanced imaging service, such as reviewing a patient’s clinical history only once. CMS has taken steps to ensure that fees for services paid under the physician fee schedule take into account efficiencies that resulted from how the services are provided, and we recently recommended that CMS expand these efforts. Differences in advanced imaging referrals between self-referring and non-self-referring providers persisted after accounting for differences in practice size, specialty, geography, or patient characteristics.

Practice Size

Self-referring providers referred more MRI and CT services than did non-self-referring providers, regardless of differences in practice size. In general, self-referring providers tend to work in practices with a larger number of Medicare beneficiaries. However, in 2010, self-referring providers referred more MRI and CT services than non-self-referring providers regardless of practice size, and the difference in number of services referred generally increased as provider size increased (see table 1). For example, self-referring providers that had 50 or fewer patients referred 1.8 times as many MRI services as did non-self-referring providers. In comparison, self-referring providers with 500 or more patients referred 2.4 times as many MRI services as non-self-referring providers did.
Specialty

Self-referring providers generally referred more MRI and CT services than did non-self-referring providers, regardless of differences in specialties. Self-referring providers were more likely than non-self-referring providers to belong to specialties that had a greater-than-average number of referrals per physician for advanced imaging services in 2010. However, for the 7 specialties that had at least 1,000 providers that self-referred beneficiaries for MRI services, self-referring providers generally averaged more referrals for MRI services than did non-self-referring providers, regardless of practice size. Similarly, self-referring providers in 9 of the 13 specialties that had at least 1,000 self-referring CT providers generally referred more beneficiaries for CT services than non-self-referring providers, regardless of practice size.

Geography

Self-referring providers referred more MRI and CT services than non-self-referring providers, regardless of differences in geography. Providers that self-referred MRI services averaged 36.3 MRI referrals and 37.3 MRI referrals in urban and rural locations, respectively. In comparison, non-self-referring providers averaged 14.3 MRI referrals in urban locations and 15.2 MRI referrals in rural locations. Providers that self-referred beneficiaries for CT services averaged 72.7 referrals in urban locations and 77.2 referrals in rural locations, while non-self-referring providers averaged 31.1 CT referrals in urban locations and 40.7 referrals in rural locations. We found that differences in the number of MRI and CT referrals made by self-referring and non-self-referring providers persisted when accounting for provider size along with geography (see table 2).

Patient Characteristics

Self-referring providers referred more MRI and CT services than non-self-referring providers, in spite of similarities in patient characteristics.
Specifically, the patient populations of self-referring and non-self-referring MRI and CT providers were similar in terms of most patient characteristics, with self-referring providers having slightly healthier patients than non-self-referring providers, as indicated by their lower average risk score (see table 3). If self-referring providers had patients that were older or sicker, it could have explained why self-referring providers referred their patients for more services than non-self-referring providers.

Our analysis indicated that providers’ referrals for MRI and CT services substantially increased the year after they began to self-refer. In our analysis, we compared the number of MRI and CT referrals for switchers—those providers that did not self-refer in 2007 or 2008 but did self-refer in 2009 and 2010—to providers that did not change their self-referral status during the same time period. Providers could self-refer by purchasing imaging equipment, leasing equipment, or joining a group practice that already self-referred. Overall, the switcher group of providers who began self-referring in 2009 increased the average number of MRI and CT referrals they made by about 67 percent in 2010 compared to the average in 2008. In the case of MRIs, the average number of referrals switchers made for MRI services increased from 25.1 in 2008 to 42.0 in 2010. In contrast, the average number of MRI and CT referrals declined for providers that did not self-refer and providers who self-referred from 2008 through 2010. This comparison suggests that the increase in the average number of referrals for switchers from 2008 to 2010 was not due to a general increase in the use of imaging services among all providers. (See table 4.)
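The switcher comparison described above amounts to a before-and-after grouping of providers. The provider records below are hypothetical illustrations (only the group definitions follow the report); a real analysis would run over claims from the Medicare Part B Carrier File:

```python
from statistics import mean

# Hypothetical provider records: (group, referrals_2008, referrals_2010).
# "switcher" = did not self-refer in 2007-2008 but did in 2009-2010;
# "never" and "always" kept the same self-referral status throughout.
providers = [
    ("switcher", 24, 41), ("switcher", 26, 43),
    ("never",    15, 14), ("never",    13, 12),
    ("always",   37, 35), ("always",   35, 33),
]

for group in ("switcher", "never", "always"):
    before = mean(r08 for g, r08, _ in providers if g == group)
    after = mean(r10 for g, _, r10 in providers if g == group)
    print(f"{group}: {before:.1f} -> {after:.1f} ({(after - before) / before:+.0%})")
```

With the report's actual switcher averages (25.1 MRI referrals in 2008 versus 42.0 in 2010), the same percent-change calculation yields the roughly 67 percent increase cited above.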
The increase in MRI and CT referrals for providers that began self-referring in 2009 cannot be explained exclusively by factors such as providers joining practices with higher patient volumes, different patient populations, or different practice cultures. Specifically, providers that remained in the same practice from 2007 through 2010, but began self-referring in 2009, also had a bigger increase in the number of MRI and CT referrals than did providers that did not change their self-referral status. Providers that remained in the same practice from 2008 through 2010, but began self-referring in 2009, had a 21.0 percent increase in MRI referrals and a 14.4 percent increase in CT referrals.

Higher Use of Advanced Imaging Services by Self-Referring Providers Results in Substantial Costs to Medicare

On the basis of our estimates, Medicare spent about $109 million more in 2010 than the program would have if self-referring providers referred advanced imaging services at the same rate as non-self-referring providers of the same specialty and provider size (see fig. 6). This additional spending can be attributed to the fact that self-referring providers referred over 400,000 more MRI and CT services in 2010 than if they had referred at the same rate as non-self-referring providers of the same size and specialty. Specifically, we estimate there were 143,303 additional referrals for MRI services and 283,725 additional referrals for CT services. The additional Medicare imaging expenditures attributed to self-referring providers are likely higher than $109 million in 2010. This is because a significant portion of self-referring providers are not included in this estimate. Specifically, we limited our analysis to those specialties that had at least 1,000 self-referring providers. Approximately 34 percent of the providers who self-referred beneficiaries for MRI services and 19 percent of the providers who self-referred beneficiaries for CT services belonged to a specialty other than those that met the 1,000 self-referring providers criteria.

Conclusions

Advanced imaging services can help in the early detection and aid in the treatment of certain diseases, resulting in less-invasive treatments and improved patient outcomes. The ability of providers to self-refer beneficiaries for these services can, for example, improve coordination of care and help ensure convenient access to these services among beneficiaries. However, our review indicates that some factor or factors other than the health status of patients, provider practice size or specialty, or geographic location (i.e., rural or urban) helped drive the higher advanced imaging referral rates among self-referring providers compared to non-self-referring providers. We found that providers who began to self-refer advanced imaging services—after purchasing or leasing imaging equipment or joining practices that self-referred—substantially increased their referrals for MRI and CT services relative to other providers. This suggests that financial incentives for self-referring providers may be a major factor driving the increase in referrals. These financial incentives likely help explain why, in 2010, providers who self-referred made 400,000 more referrals for advanced imaging services than they would have if they were not self-referring. These additional referrals cost CMS more than $100 million in 2010 alone. To the extent that these additional referrals are unnecessary, they pose an unacceptable risk for beneficiaries, particularly in the case of CT services, which involve the use of ionizing radiation.
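The excess-utilization estimate described above rests on a counterfactual: within each specialty-and-practice-size stratum, self-referring providers are assumed to refer at the non-self-referring rate, and the difference is priced at an average payment per service. The sketch below shows only the method; the strata and dollar figures are hypothetical placeholders, not the report's actual inputs:

```python
# Each stratum: (self-referring provider count, their average referrals,
# non-self-referrers' average referrals in the same specialty/size stratum,
# average Medicare payment per service). All values are hypothetical.
strata = [
    (1_000, 36.4, 14.4, 250.00),  # an illustrative MRI stratum
    (1_500, 73.2, 32.3, 300.00),  # an illustrative CT stratum
]

excess_services = 0.0
excess_spending = 0.0
for n_providers, self_rate, non_self_rate, payment in strata:
    extra = n_providers * max(self_rate - non_self_rate, 0.0)
    excess_services += extra
    excess_spending += extra * payment

print(f"{excess_services:,.0f} excess services, ${excess_spending:,.0f} in excess spending")
```

Summing this calculation over the report's actual strata (specialties with at least 1,000 self-referring providers) is what produces the roughly 400,000 excess services and $109 million figures.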
Given the challenges to the long-range fiscal sustainability of Medicare, it is imperative that CMS develop policies to address the effect of self-referral on the utilization of and expenditures for advanced imaging services. CMS first needs to improve its ability to identify services that are self-referred. Claims do not include an indicator or “flag” that identifies whether services are self-referred or non-self-referred, and CMS does not currently have a method for easily identifying such services. A systematic method for identifying self-referred advanced imaging services would give CMS the ongoing ability to determine the extent to which these services are self-referred and help the agency identify those services that may be inappropriate, unnecessary, or potentially harmful to beneficiaries. Including a self-referral flag on Medicare Part B claims submitted by providers who bill for advanced imaging services is likely the easiest and most cost-effective approach. Second, we found that about 10 percent of advanced imaging services referred by self-referring physicians in 2010 were also performed and interpreted by the same physician. Certain efficiencies may be gained when the same provider orders, performs, and interprets an advanced imaging service, such as reviewing a patient’s clinical history only once. MedPAC recommended in 2011 that CMS should reduce its payments for advanced imaging services in which the same provider refers and performs the service, to account for efficiencies that are realized in these circumstances. This is consistent with previous efforts by CMS to reduce fees for services paid under the physician fee schedule when efficiencies are realized and with our previous recommendation that CMS expand these efforts.
Third, if CMS were able to easily identify self-referred services, the agency may be better positioned to implement an approach that ensures the appropriateness of advanced imaging services that Medicare beneficiaries receive—beyond examining the feasibility of such methods, as we recommended in our 2008 report. Approaches for managing advanced imaging utilization could be “front-end,” or used before CMS issues payment, such as prior authorization. CMS could also explore back-end approaches used after CMS issues payment, such as targeted audits of self-referring providers that refer a high volume of services.

Recommendations for Executive Action

In order to improve CMS’s ability to identify self-referred advanced imaging services and help CMS address the increases in these services, we recommend that the Administrator of CMS take the following three actions:

1. Insert a self-referral flag on its Medicare Part B claims form and require providers to indicate whether the advanced imaging services for which a provider bills Medicare are self-referred or not.

2. Determine and implement a payment reduction for self-referred advanced imaging services to recognize efficiencies when the same provider refers and performs a service.

3. Determine and implement an approach to ensure the appropriateness of advanced imaging services referred by self-referring providers.

Agency Comments and Our Evaluation

HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix IV. In its comments, HHS stated that it would consider one of our recommendations but did not concur with our other two recommendations. HHS did not comment on our findings that self-referring providers referred substantially more advanced imaging services than non-self-referring providers or our conclusion that financial incentives for self-referring providers may be a major factor driving the increase in referrals for advanced imaging services.
HHS noted that it would consider our recommendation that CMS determine and implement an approach to ensure the appropriateness of advanced imaging services referred by self-referring providers. According to HHS, CMS would consider this recommendation when refining its medical review strategy for advanced imaging services. HHS also indicated that CMS does not have statutory authority to implement some of the approaches discussed in the report. We are pleased that CMS plans to consider this recommendation and note that we did not identify a specific approach, having identified several examples in our report of both front-end and back-end approaches to managing utilization of advanced imaging services. As we reported, CMS could explore back-end approaches used after CMS issues payment, such as targeted audits of self-referring providers. CMS could also explore other approaches the agency determines are within its statutory authority. Further, if deemed necessary, CMS could seek legislative authority to implement promising approaches to managing advanced imaging utilization. HHS did not concur with our recommendation that CMS insert a self-referral flag on its Medicare Part B claims and require providers to indicate whether the advanced imaging services for which a provider bills Medicare are self-referred or not. According to HHS, CMS believes that a new checkbox on the claim form identifying self-referral would be complex to administer and providers may not characterize referrals accurately. CMS believes that other payment reforms, such as paying networks of providers, hospitals, or other entities that share responsibility for providing care to patients, would better address overutilization. We continue to believe that including an indicator or flag on the claims would likely be the easiest and most cost-effective approach to improve CMS’s ability to identify self-referred advanced imaging services.
We do not suggest, nor did we intend, that CMS use the self-referral flag or indicator we recommended to determine compliance with the physician self-referral law. Without a self-referral flag or indicator, CMS will not be able to monitor trends in utilization and expenditures associated with physician self-referral without considerable time and effort. Further, a self-referral flag does not have to be a “checkbox” on the claim and could be a modifier, similar to other modifiers that CMS uses to characterize claims. In addition, HHS did not provide reasons to support CMS’s contention that such a flag would be complex to administer. HHS also did not concur with our recommendation that CMS determine and implement a payment reduction for self-referred advanced imaging services to recognize efficiencies when the same provider refers and performs a service. According to HHS, CMS’s multiple procedure payment reduction already captures efficiencies inherent in providing multiple advanced imaging services by the same physician or group practice during the same session. CMS also noted that a further payment reduction may reduce, but not eliminate, the underlying financial incentive to self-refer advanced imaging services and may cause providers to refer more services, in an effort to maintain their income. CMS also noted that providers in a group practice could easily avoid this reduction by having one physician order the service while another furnishes the service. According to HHS, CMS also questions its statutory authority to impose the payment reduction for the subset of physicians who self-refer, citing a prohibition on paying a differential by physician specialty for the same service. Our report shows that self-referring providers generally referred more MRI and CT services, regardless of differences in specialties, and CMS did not indicate how this recommendation would implicate the prohibition on paying a differential by specialty. 
Additionally, while HHS cites the multiple procedure payment reduction as a means to address certain efficiencies in the delivery of advanced imaging services, these are not the efficiencies targeted by our recommendation. Instead, as noted in our report, our recommended payment reduction would capture those efficiencies gained when the same provider orders and performs an advanced imaging service. Such efficiencies could be captured in a single—rather than multiple—advanced imaging service. This recommendation is also consistent with a 2011 MedPAC recommendation. As noted in our report, this payment reduction would affect about 10 percent of advanced imaging services referred by self-referring providers. As for CMS’s concern about overutilization of advanced imaging services resulting from a payment reduction, CMS could help address this issue by implementing our recommendation to use a flag indicating self-referral to monitor utilization of these services. On the basis of HHS’s written response to our report, we are concerned that neither HHS nor CMS appears to recognize the need to monitor the self-referral of advanced imaging services on an ongoing basis and determine those services that may be inappropriate, unnecessary, or potentially harmful to beneficiaries. HHS did not comment on our key finding that self-referring physicians referred about two times as many advanced imaging services, on average, as providers who did not self-refer. Nor did HHS comment on our estimate that these additional referrals for advanced imaging services cost CMS more than $100 million in 2010 alone. Given these findings, we continue to believe that CMS should take steps to monitor the utilization of advanced imaging services and ensure that the services for which Medicare pays are appropriate. HHS also provided technical comments that we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS, interested congressional committees, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Appendix I: Scope and Methods

This section describes the scope and methodology used to analyze our three objectives: (1) trends in the number of and expenditures for self-referred and non-self-referred advanced imaging services from 2004 through 2010, (2) the extent to which the provision of advanced imaging services differs for providers who self-refer when compared with other providers, and (3) the implications of self-referral for Medicare spending on advanced imaging services. For all three objectives, we used the Medicare Part B Carrier File, which contains final action Medicare Part B claims for noninstitutional providers, such as physicians. Claims can be for one or more services or for individual service components. Each service or service component is identified on a claim by its Healthcare Common Procedure Coding System (HCPCS) code, which the Centers for Medicare & Medicaid Services (CMS) assigns to products, supplies, and services for billing purposes. HCPCS codes are also categorized by CMS using the Berenson-Eggers Type of Service (BETOS) categorization system, which assigns HCPCS codes to broad service categories. We limited our universe of services and service components for our study to those for magnetic resonance imaging (MRI) and computed tomography (CT) services.
We classified MRI and CT services and service components as those with HCPCS codes included in a BETOS category where the first two digits were equal to “I2”, defined as advanced imaging services. We further limited our universe to only those MRI and CT services that were considered designated health services—services for which, in the absence of an exception, a physician may not make a referral to furnish to an entity with which he has a financial relationship without implicating the Stark law. (Annually, CMS publishes a list of designated health services as part of the physician fee schedule.) We also restricted our universe to those HCPCS codes that involved the performance of an advanced imaging service, which can be billed with or separately from the interpretation of a MRI or CT imaging service. We identified 125 HCPCS codes that met these criteria. Because there is no indicator or “flag” on the claim that identifies whether services were self-referred or non-self-referred, we developed a claims-based methodology for identifying self-referred services. Specifically, we classified services as self-referred if the provider that referred the beneficiary for a MRI or CT service and the provider that performed the MRI or CT service was identical or had a financial relationship with the same entity. We used taxpayer identification number (TIN), an identification number used by the Internal Revenue Service, to determine providers’ financial relationships. The TIN could be that of the provider, the provider’s employer, or another entity to which the provider reassigns payment. To identify the TINs of referring and performing providers, we created a crosswalk of the performing provider’s unique physician identification number or national provider identifier (NPI) to the TIN that appeared on the claim and used that crosswalk to assign TINs to the referring and performing providers. (Some providers may be associated with TINs with which they do not have a direct or indirect financial relationship and thus would not have the same incentives as other self-referring providers. We anticipate that relatively few providers in our self-referring group meet this description, but to the extent that they do, it may have limited the differences we found in utilization and expenditure rates between self-referring and non-self-referring providers.) 
We considered global services and separately-billed technical components (TCs) to be self-referred if one or more of the TINs of the referring and performing provider matched. However, we did not consider separately-billed professional components (PCs) to be self-referred, even if they met the same criterion. Compared to the payment for the TC of an advanced imaging service, the payment for the PC is relatively small, and thus there is little incentive for providers to only self-refer the PC of a service. As part of developing this claims-based methodology to identify self-referred services, we interviewed officials from CMS, provider groups, and other researchers. To describe the trends in the number of and expenditures for self-referred and non-self-referred advanced imaging services from 2004 through 2010, we used the Medicare Part B Carrier File to calculate utilization and expenditures for self-referred and non-self-referred MRI and CT services, both in aggregate and per beneficiary. We limited this portion of our analysis to global claims or claims for a separately-billed TC, which indicates that the performance of the imaging service was billed under the physician fee schedule. As a result, the universe for this portion of our analysis is those advanced imaging services performed in a provider’s office or in an independent diagnostic testing facility (IDTF), which both bill for the performance of an advanced imaging service under the physician fee schedule. 
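The TIN-matching classification described above can be sketched in code. This is a minimal illustration, not the actual analysis: the record layout and field names (e.g., referring_npi, performing_npi) are our assumptions, and real Carrier File claims are far more complex.

```python
# Sketch of the claims-based self-referral classification (illustrative
# field names; not CMS's actual claim schema).

def is_self_referred(claim, provider_tins):
    """Classify one advanced imaging claim as self-referred or not.

    claim: dict with 'component' ('global', 'TC', or 'PC'),
           'referring_npi', and 'performing_npi'.
    provider_tins: dict mapping an NPI to the set of TINs associated
           with that provider (built from the NPI-to-TIN crosswalk).
    """
    # Separately billed professional components (PCs) are never counted
    # as self-referred: the PC payment is small relative to the TC, so
    # there is little incentive to self-refer only the interpretation.
    if claim["component"] == "PC":
        return False
    referring = provider_tins.get(claim["referring_npi"], set())
    performing = provider_tins.get(claim["performing_npi"], set())
    # Self-referred if the referring and performing providers share any
    # TIN, i.e., they are the same provider or have a financial
    # relationship with the same entity.
    return bool(referring & performing)


provider_tins = {
    "NPI-A": {"TIN-1"},           # physician billing through TIN-1
    "NPI-B": {"TIN-1", "TIN-2"},  # colleague in the same group practice
    "NPI-C": {"TIN-3"},           # unrelated imaging provider
}

print(is_self_referred(
    {"component": "global", "referring_npi": "NPI-A", "performing_npi": "NPI-B"},
    provider_tins))  # True: shared TIN-1
print(is_self_referred(
    {"component": "TC", "referring_npi": "NPI-A", "performing_npi": "NPI-C"},
    provider_tins))  # False: no shared TIN
```

Under this rule a separately billed PC is never flagged, while a global service or TC is flagged whenever the referring and performing providers share any TIN.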
We focused on these settings because our previous work showed rapid growth among such services and because the financial incentive for providers to self-refer is most direct when the service is performed in a physician office. Approximately one-fifth of all advanced imaging services provided to Medicare FFS beneficiaries were performed in a physician office or IDTF. To calculate the number of Medicare beneficiaries from 2004 through 2010 needed for per beneficiary calculations, we used the Denominator File, a database that contains enrollment information for all Medicare beneficiaries enrolled in a given year. Because radiologists and IDTFs are limited in their ability to generate referrals for advanced imaging services, we removed services referred by an IDTF or radiologist. To determine the extent to which the provision of advanced imaging services differs for providers who self-refer when compared with other providers, we first classified providers on the basis of the type of referrals they made. Specifically, we classified providers as self-referring if they self-referred at least one beneficiary for an advanced imaging service. We classified providers as non-self-referring if they referred a beneficiary for an advanced imaging service, but did not self-refer any of the services. Because radiologists and providers in IDTFs predominantly perform advanced imaging services and have limited ability to refer beneficiaries for advanced imaging services, we removed those providers from our analysis. Additionally, because emergency medicine providers generally did not practice in provider offices, they were removed from our analysis. We assigned to each provider the MRI and CT services and service components that he or she referred, including those for the performance of an imaging service and those for the interpretation of the imaging service result. 
If the TC and PC were billed separately for the same beneficiary, we counted these two components as one referred service. As a result, we counted all services that a provider referred, regardless of whether it was performed in a provider office, IDTF, or other setting. We then performed two separate analyses. First, we compared the provision—that is, the number of referrals made— of MRI and CT services by self-referring providers and non-self-referring providers in 2010, after accounting for factors such as practice size (i.e., the number of Medicare beneficiaries), provider specialty, geography (i.e., urban or rural), and patient characteristics. We used the number of unique Medicare fee-for-service (FFS) beneficiaries for which providers provided services in 2010 as a proxy for practice size, which we identified using 100 percent of providers’ claims from the Medicare Part B Carrier File. We defined urban settings as metropolitan statistical areas, a geographic entity defined by the Office of Management and Budget as a core urban area of 50,000 or more population. We used rural-urban commuting area codes—a Census tract-based classification scheme that utilizes the standard Bureau of Census Urbanized Area and Urban Cluster definitions in combination with work-commuting information to characterize all of the nation’s Census tracts regarding their rural and urban status—to identify providers as practicing in metropolitan statistical areas. We considered all other settings to be rural. We identified providers’ specialties on the basis of the specialties listed on the claims. These specialty codes include physician specialties, such as cardiology and hematology/oncology, and nonphysician provider types, such as nurse practitioners and physician assistants. We also examined the extent to which the characteristics of the patient populations served by self-referring and non-self-referring providers differed. 
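The provider categorization and referral-counting rules described above might be sketched as follows. The data layout is hypothetical; in particular, keying split TC/PC claims on beneficiary and HCPCS code is our simplification of the matching described in the text.

```python
# Sketch of provider categorization and referral counting (hypothetical
# record layout; the actual analysis used Medicare Part B Carrier File claims).

def count_referred_services(claims):
    """Count referred MRI/CT services for one provider, treating a
    technical component (TC) and professional component (PC) billed
    separately for the same beneficiary and procedure as one service."""
    # Keying on beneficiary and procedure code collapses a split
    # TC/PC pair into a single referred service.
    services = {(c["beneficiary_id"], c["hcpcs"]) for c in claims}
    return len(services)


def categorize_provider(claims, self_referred_flags):
    """'self-referring' if at least one referral was self-referred;
    'non-self-referring' if the provider referred services but none
    were self-referred."""
    if any(self_referred_flags):
        return "self-referring"
    return "non-self-referring" if claims else "no referrals"


claims = [
    {"beneficiary_id": "B1", "hcpcs": "70551", "component": "TC"},
    {"beneficiary_id": "B1", "hcpcs": "70551", "component": "PC"},  # same service
    {"beneficiary_id": "B2", "hcpcs": "70450", "component": "global"},
]
print(count_referred_services(claims))                   # 2, not 3
print(categorize_provider(claims, [True, True, False]))  # self-referring
```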
We used CMS’s risk score file to identify average risk score, which serves as a proxy for beneficiary health status. Information on additional patient characteristics, such as age and sex, came from the Medicare Part B Carrier File claims. To calculate the percentage of advanced imaging services referred by self-referring providers that were referred, performed, and interpreted by the same provider, we summed global advanced imaging claims where the referring and performing provider were the same and claims where the TC and PC were referred and performed separately for the same beneficiary by the same provider. We then divided the total by the number of MRI and CT services referred by self-referring providers. Second, we examined providers that began self-referring in 2009 (referred to as switchers) and compared the number of referrals they made in 2008 (i.e., the year before the switchers began self-referring) to 2010 (i.e., the year after they began self-referring). (We used 4 years of experience, 2007 through 2010, to categorize providers even though we compared referrals in 2008 to 2010 because we wanted to ensure that providers that began self-referring in 2009 did not self-refer for at least the 2 prior years.) We compared the change in the number of referrals made by these providers to the change in the number of referrals made over the same time period by providers who did not change whether or not they self-referred advanced imaging services. Specifically, we compared the change in the number of referrals made by switchers to those made by (1) self-referring providers—providers that self-referred in years 2007 through 2010, and (2) non-self-referring providers—providers that did not self-refer in years 2007 through 2010. For each provider, we also identified the most common TIN to which they referred MRI or CT services. If the TIN was the same for all 4 years, we assumed that they remained part of the same practice for all 4 years. We calculated the number of referrals in 2008 and 2010 separately for providers that met this criterion. 
To determine the implications of self-referral for Medicare spending on advanced imaging services, we summed the number of and expenditures for all MRI and CT services performed in 2010 by providers of those specialties with at least 1,000 self-referring providers. We then created an alternative scenario in which self-referring providers referred the same number of services as non-self-referring providers of the same provider size and specialty and calculated how this affected expenditures. To do this, we calculated the number of advanced imaging services non-self-referring providers referred per unique Medicare FFS beneficiary for each specialty and practice size. We then multiplied the referral rate by the number of patients seen by self-referring providers of the same practice size and specialty, representing the number of services self-referring providers would have referred if they referred at the non-self-referring rate. To calculate the cost of additional services to Medicare, we multiplied the difference between the number of services self-referring providers actually referred and the number they would have referred at the non-self-referring rate by the average expenditures for a MRI or CT service. We took several steps to ensure that the data used to produce this report were sufficiently reliable. Specifically, we assessed the reliability of the CMS data we used by interviewing officials responsible for overseeing these data sources, reviewing relevant documentation, and examining the data for obvious errors. We determined that the data were sufficiently reliable for the purposes of our study. We conducted this performance audit from May 2010 through September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
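The counterfactual spending estimate described above reduces to a simple rate comparison within each specialty and practice-size cell. A sketch with entirely made-up numbers (none of these figures come from the report's data):

```python
# Sketch of the counterfactual spending estimate for one specialty and
# practice-size cell. All numbers below are invented for illustration.

def excess_referral_cost(non_sr_services, non_sr_benes,
                         sr_services, sr_benes, avg_expenditure):
    """Cost of referrals beyond what self-referring providers would have
    made had they referred at the non-self-referring rate."""
    # Referral rate of comparable non-self-referring providers, per
    # unique Medicare FFS beneficiary.
    rate = non_sr_services / non_sr_benes
    # Services self-referring providers would have referred at that rate.
    expected = rate * sr_benes
    extra = sr_services - expected
    return extra * avg_expenditure


# Hypothetical cell: non-self-referring providers referred 2,500 services
# to 10,000 beneficiaries (rate 0.25); self-referring providers saw
# 50,000 beneficiaries and referred 15,000 services; average payment $200.
print(excess_referral_cost(2_500, 10_000, 15_000, 50_000, 200.0))
# -> 500000.0 (15,000 actual vs. 12,500 expected; 2,500 extra at $200 each)
```

Summing this quantity across cells yields the kind of aggregate estimate the report presents (about $109 million in 2010).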
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Select Implemented or Proposed Policies Designed to Address Self-Referral or the Utilization of Advanced Imaging Services Effective January 1, 2011, the Patient Protection and Affordable Care Act of 2010 (PPACA) requires physicians who self-refer MRI, CT, or positron emission tomography services under certain circumstances to inform their patients that they may obtain these services from another provider and provide their patients with a list of alternative providers in their area. The effect of this requirement on physician self-referral is unclear. The American College of Radiology reports that multiple states had similar requirements in place before the implementation of PPACA. The Medicare Improvements for Patients and Providers Act of 2008 requires physicians and other providers to be accredited by a CMS-approved national accreditation organization by January 1, 2012, in order to continue to furnish the technical component of services such as MRI and CT services. While the intent of this requirement was to improve quality of care, this policy could reduce the number of providers who self-refer if they fail to gain accreditation. However, this policy’s actual effect on self-referral is unclear. The Medicare Payment Advisory Commission (MedPAC) has noted that improving the payment accuracy of services could reduce the incentive to self-refer those services by making them less financially beneficial. 
Consistent with our previous recommendations, payment rates for MRI and CT services have been reduced several times over the last few years to reflect efficiencies that occur when the same provider performs multiple services on the same patient on the same day. In its June 2010 report, MedPAC noted that reducing payments for physician self-referred services could limit Medicare expenditures when self-referral occurs and reduce the incentive to self-refer by making it less financially beneficial. One option put forth in the report is reducing payments for certain self-referred services by an amount equal to the percentage increase in expenditures due to self-referral. Another option discussed is reducing the payment for self-referred services when they include activities already performed by self-referring physicians, such as reviewing the medical history of a beneficiary. In addition to a similar recommendation from MedPAC, we have recommended that CMS consider expanding its front-end management capabilities, such as prior authorization—an approach whereby providers must seek some sort of approval before ordering an advanced imaging service. Such policies could limit the increased utilization associated with self-referral by ensuring that self-referred services are clinically appropriate. One researcher suggested expanding postpayment reviews by making imaging a subject for medical review by recovery audit contractors. CMS has prohibited different types of physician self-referral that the agency deemed particularly susceptible to abuse. Effective October 1, 2009, CMS prohibits “per-click” self-referral arrangements where, for instance, a physician leases an imaging machine to a hospital, refers patients to that hospital in order to receive imaging services, and then is paid on a per service basis by the hospital. 
In 2008, CMS considered but did not prohibit “block time” self-referral arrangements where, for instance, a physician leases a block of time on a facility’s MRI or CT machine, refers his or her patients to receive services on the facility’s machine, and then bills for the services. CMS has also solicited comments on a prohibition against physician self-referral for diagnostic tests provided in physician offices when those tests are not needed at the time of a patient’s office visit in order to assist the physician in determining an appropriate diagnosis or plan of treatment. MedPAC has found that MRI and CT services are performed on the same day as an office visit less than a quarter of the time, with only 8.4 percent of MRIs of the brain being performed on the same day as an office visit. Another policy, discussed in MedPAC’s June 2010 report, that would limit physician self-referral is restricting certain types of self-referral to only those practices that are clinically integrated. Maryland prohibits providers from making self-referrals for certain MRI and CT services. (American College of Radiology, State-by-State Comparison of Physician Self-Referral Laws, accessed July 26, 2010.) Appendix III: Self-Referral of MRI and CT Services, by Provider Specialty, in 2004 and 2010 The proportion of MRI services and CT services that were self-referred increased from 2004 through 2010 for all provider specialties we examined for our study. We examined all provider specialties that performed a minimum proportion of either self-referred MRI or CT services in 2004 and 2010. While this increase across provider specialties is consistent with the overall trend of increased self-referral, the increases varied among provider specialties. For MRI services, increases in the self-referral rate for provider specialties ranged from about 4 percentage points (Internal Medicine) to about 19 percentage points (Hematology/Oncology). 
Similarly, for CT services, increases in the self-referral rates for provider specialties ranged from about 2 percentage points (Internal Medicine) to over 38 percentage points (Radiation Oncology). (See table 5.) Appendix IV: Comments from the Department of Health and Human Services Appendix V: GAO Contact and Staff Acknowledgments In addition to the contact named above, Jessica Farb, Assistant Director; Thomas Walke, Assistant Director; Manuel Buentello; Krister Friday; Gregory Giusto; Brian O’Donnell; and Daniel Ries made key contributions to this report.
Medicare Part B expenditures--which include payment for advanced imaging services--are expected to continue growing at an unsustainable rate. Questions have been raised about self-referral's role in this growth. Self-referral occurs when a provider refers patients to entities in which the provider or the provider's family members have a financial interest. GAO was asked to examine the prevalence of advanced imaging self-referral and its effect on Medicare spending. This report examines (1) trends in the number of and expenditures for self-referred and non-self-referred advanced imaging services, (2) how provision of these services differs among providers on the basis of whether they self-refer, and (3) implications of self-referral for Medicare spending. GAO analyzed Medicare Part B claims data from 2004 through 2010 and interviewed officials from the Centers for Medicare & Medicaid Services (CMS) and other stakeholders. Because Medicare claims lack an indicator identifying self-referred services, GAO developed a claims-based methodology to identify self-referred services and expenditures and to characterize providers as self-referring or not. From 2004 through 2010, the number of self-referred and non-self-referred advanced imaging services--magnetic resonance imaging (MRI) and computed tomography (CT) services--both increased, with the larger increase among self-referred services. For example, the number of self-referred MRI services increased over this period by more than 80 percent, compared with an increase of 12 percent for non-self-referred MRI services. Likewise, the growth rate of expenditures for self-referred MRI and CT services was also higher than for non-self-referred MRI and CT services. GAO's analysis showed that providers' referrals of MRI and CT services substantially increased the year after they began to self-refer--that is, they purchased or leased imaging equipment, or joined a group practice that already self-referred. 
Providers that began self-referring in 2009--referred to as switchers--increased MRI and CT referrals on average by about 67 percent in 2010 compared to 2008. In the case of MRIs, the average number of referrals switchers made increased from 25.1 in 2008 to 42.0 in 2010. In contrast, the average number of referrals made by providers who remained self-referrers or non-self-referrers declined during this period. This comparison suggests that the increase in the average number of referrals for switchers was not due to a general increase in the use of imaging services among all providers. GAO's examination of all providers that referred an MRI or CT service in 2010 showed that self-referring providers referred about two times as many of these services as providers who did not self-refer. Differences persisted after accounting for practice size, specialty, geography, or patient characteristics. These two analyses suggest that financial incentives for self-referring providers were likely a major factor driving the increase in referrals. GAO estimates that in 2010, providers who self-referred likely made 400,000 more referrals for advanced imaging services than they would have if they were not self-referring. These additional referrals cost Medicare about $109 million. To the extent that these additional referrals were unnecessary, they pose unacceptable risks for beneficiaries, particularly in the case of CT services, which involve the use of ionizing radiation that has been linked to an increased risk of developing cancer.
Background WIA sets forth various requirements for the Secretary of Labor relating to research and evaluation of federally funded employment-related programs and activities. The law calls upon the Secretary of Labor to publish in the Federal Register every 2 years a plan that describes its pilot, demonstration, and research priorities for the next 5 years regarding employment and training. Specifically, WIA requires the Secretary to develop the research plan after consulting with states, localities, and other interested parties; send the plan to the appropriate committees of Congress; and take into account such factors as the likelihood that the results of the projects will be useful to policymakers and stakeholders in addressing employment and training problems. Within the Employment and Training Administration (ETA), the Office of Policy Development and Research’s (OPDR) Division of Research and Evaluation plans, conducts, and disseminates employment and training-related research and evaluations. Nearly all of the agency’s research and evaluation studies are conducted under contract; these contractors represent a range of research organizations and academic institutions. Furthermore, OPDR plans and conducts its research and evaluation activities in consultation with ETA’s program offices, such as the Office of Workforce Investment and the Office of Trade Adjustment Assistance. ETA’s research and evaluation funding is divided into two separate budget line items: Pilots, demonstrations, and research. Efforts in this category are focused on developing and testing new ways to approach problems and to deliver services. Under WIA, pilots and demonstrations shall be carried out “for the purpose of developing and implementing techniques and approaches, and demonstrating the effectiveness of specialized methods, in addressing employment and training needs.” WIA also states that the Secretary shall “carry out research projects that will contribute to the solution of employment and training problems in the United States.” Evaluations. 
Efforts in this category are focused on continuing evaluations of certain programs and activities carried out under WIA. These evaluations must address the effectiveness of these programs and activities carried out under WIA in relation to their cost; the effectiveness of the performance measures relating to these programs and activities; the effectiveness of the structure and mechanisms for delivery of services through these programs and activities; the impact of the programs and activities on the community and participants involved, and on related programs and activities; the extent to which such programs and activities meet the needs of various demographic groups; and such other factors as may be appropriate. In program year 2010, ETA’s combined budget appropriation for conducting evaluations and pilots, demonstrations, and research was about $103 million—or nearly $34 million above what the agency requested. (See fig. 1.) About $84 million of the 2010 funds were designated by the Congress for specific projects, including $30 million for Transitional Jobs activities for ex-offenders, and another $5.5 million for competitive grants addressing the employment and training needs of young parents. According to agency documents, in 2008 and 2009, the Congress similarly increased ETA’s requested budget for pilots, demonstrations, and research, at the same time specifically designating how the majority of those funds would be used, including $4.9 million in 2008 and $5 million in 2009 for the young parents’ demonstration. Key Elements of Sound Research and Evaluation Programs While there is no single or ideal way for government agencies to conduct research, several leading national organizations have developed guidelines that identify key elements that promote a sound research program. These guidelines identify five elements as key: agency resources, professional competence, independence, evaluation policies and procedures, and evaluation plans. Resources. 
Research should be supported through stable, continuous funding sources and through special one-time funds for evaluation projects of interest to executive branch and congressional policymakers. Professional competence. Research should be performed by professionals with appropriate training and experience for the evaluation activity (such as performing a study, planning an evaluation agenda, reviewing evaluation results, or performing a statistical analysis). Independence. Although the heads of federal agencies and their component organizations should participate in establishing evaluation agendas, budgets, schedules, and priorities, the independence of evaluators must be maintained with respect to the design, conduct, and results of their evaluation studies. Evaluation policy and procedures. Each federal agency and its evaluation centers should publish policies and procedures and adopt quality standards to guide evaluations within its purview. Such policies and procedures should identify the kinds of evaluations to be performed and the criteria and administrative steps for developing evaluation plans and setting priorities, including selecting evaluation approaches to use, consulting experts, ensuring evaluation product quality, and publishing reports. Evaluation plans. Each federal agency should require its major program components to prepare annual and multiyear evaluation plans and to update these plans annually. The planning should take into account the need for evaluation results to inform program budgeting, reauthorization, agency strategic plans, program management, and responses to critical issues concerning program effectiveness. These plans should include an appropriate mix of short- and long-term studies to produce results for short- or long-term policy or management decisions. To the extent practical, the plans should be developed in consultation with program stakeholders. 
Furthermore, leading organizations, including the American Evaluation Association and the National Academy of Sciences, emphasize the need for research programs to establish specific policies and procedures to guide research activities. Based on several key elements identified by these organizations, we developed a framework comprising five phases—agenda setting, selecting research, designing research, conducting research, and disseminating research results. (See fig. 2.) Agenda setting. Agencies should establish a structured process for developing their research priorities. The process should identify how agencies set research priority areas and provide for updating the areas on a regular basis. The process should also allow for the consideration of critical issues and state how internal and external stakeholders will be included in developing the plan. Selecting research. At this phase, the process should identify how the research program’s staff identifies and selects studies to fund, including the criteria it uses to make those decisions. Steps might describe how the staff assembles a list of potential studies, works with internal program offices, and makes final decisions. Designing research. During the design phase, the process should identify steps taken to select appropriate research approaches and methods and the safeguards in place to ensure appropriate tradeoffs are made between what is desirable and what is practical and between the relative strengths and weaknesses of different methods. Conducting research. At this stage, the process should include policies and procedures to guide the conduct of research. The process should ensure that key events, activities, and time frames are specified and that knowledgeable staff in the sponsoring agency monitor the implementation of the research. Disseminating research. This process should describe how research findings are made available to the public and disseminated to all potential users. 
These dissemination methods should include safeguards to ensure research findings are disseminated in a timely manner and are accessible through the Internet with user-friendly search and retrieval technologies. Research Terminology in This Report In this report, we use several technical terms in describing ETA’s research designs and study characteristics. (See table 1.) ETA’s Research Areas Generally Reflect Key Issues, but Some Studies Are of Limited Usefulness Experts Thought ETA’s 2007 to 2012 Research Plan Reflected Key Areas, but They Also Suggested New Ones for Future Research Our expert panel generally considered ETA’s research areas to be the right ones for the period the research plan covered. About three-fourths of the panel members reported that ETA’s 2007 to 2012 research agenda reflected key employment and training issues to at least a moderate extent. However, a few experts commented that some of ETA’s research areas may be too broad and lack specificity. The areas in ETA’s 2007 to 2012 research plan covered a range of issues, from job training to postsecondary education. Table 2 illustrates the scope of ETA’s research areas. With regard to the specific studies within these research areas, ETA invested most of its research and evaluation resources in work that focused on increasing the labor market participation of underutilized workers and on UI. Of the estimated $96 million that supported the 58 research reports published between January 2008 and March 2010, more than half—about $56 million—funded research that addressed these two research areas. Other areas received far less funding. For example, funding for studies addressing the methods of expanding U.S. workforce skills and using state-level administrative data to measure progress and outcomes accounted for about $6.5 million, or about 6.7 percent of the cost of studies published during the period we examined. (See table 3.) 
Overall, the individual studies that ETA funded addressed a wide variety of issues and ranged in cost from about $15,000 to a high of about $22 million. In addition to the research areas covered in ETA’s 2007 to 2012 research plan, experts from our virtual panel suggested that ETA incorporate additional research areas in its future research agenda. Of the research areas identified, over half of our experts (28 of 39) ranked the identification of employment and training approaches that work, and for whom, as one of the top areas that ETA’s future research should address. (See fig. 3.) Without such focus, experts commented that it will be difficult to know how to improve the nation’s workforce system. Other issues ranked at the top by experts included research on job creation strategies and the impact of long-term and short-term training. (See app. III for more information on issue-area rankings.) In addition to identifying overall employment and training areas, including issues related to UI, experts also identified specific aspects of the UI system that could be examined in ETA’s future research. In particular, most experts (34 of 39 respondents) reported that it would be at least moderately important, in the future, for ETA to research the linkage between UI and employment and safety net programs, such as Temporary Assistance for Needy Families or the Supplemental Nutrition Assistance Program. (See fig. 4.) This area of research may be particularly important given the role that these programs play in supporting individuals during economic downturns. In addition, many experts (24 of 39 respondents) mentioned that ETA should make the examination of the incentives and disincentives in the UI system a research priority, given the challenge of supporting unemployed workers during difficult economic times, while promoting self-sufficiency through employment. Experts also reported that it is important to fund research on what works for selected population groups. 
Of the population groups identified, the experts on our virtual panel most often ranked the long-term unemployed, economically disadvantaged workers, and adults with low basic skills as the top populations on which to focus future research. Specifically, several experts commented that research could help to identify the challenges some of these groups face, as well as identify effective strategies that may help these population groups obtain employment. (See app. III for a complete list of responses to these items.) In addition to population groups, experts also identified several employment and training programs that they believe warrant research attention. In particular, experts most often ranked three components of the WIA program—WIA Adult, WIA Dislocated Workers, and WIA Youth—as key to evaluate in ETA’s future research. Among those three, WIA Adult was ranked the highest. (See app. III for a complete list of experts’ responses on employment and training programs to evaluate.)

ETA’s Research Studies Generally Answered the Questions Posed, but Their Usefulness Was Limited

Research organizations and academic institutions with responsibility for implementing ETA-funded research generally used methodologies appropriate for the questions posed, but the studies were not always useful for informing policy and practice. From January 2008 through March 2010, ETA published 17 large research and evaluation reports—14 evaluations and 3 research reports—that each cost $1 million or more. Four of these reports were designed to demonstrate what works and for whom. Each of these four reports compared the employment-related outcomes of individuals or regions who participated in training or employment programs with the employment outcomes of similar individuals who did not participate in the programs. The remaining 13 reports were descriptive and were not designed to assess program outcomes.
In several studies we examined that cost $1 million or more, we found that, for a number of reasons, ETA’s research studies were limited in their usefulness and in their ability to inform policy and practices. For example, in a study of the Prisoner Re-entry Initiative, shortcomings in the data collection phase limited the strength of the findings and, as a result, limited the study’s opportunity to influence policy directions. Among other things, while the study provided information on employment-centered opportunities for ex-offenders, it relied on self-reported baseline data, did not account for differences across the sites where services were received, and lacked the capacity to record differences in the intensity of those services; in addition, researchers failed to ensure that data collectors were properly trained. In another study, researchers did not control for bias in selecting participants, compromising their ability to draw conclusions about the cause and effect of program outcomes. Authors of this study on the Workforce Innovations in Regional Economic Development (WIRED) initiative acknowledged that the study would be unable to attribute outcomes to program services because it did not use random assignment in selecting participating regions. We have previously criticized ETA for failing to adequately provide for evaluating the effectiveness of its WIRED initiative. Moreover, some studies were limited due to observation periods that did not match the needs of the studies’ objectives. For example, an evaluation of an entrepreneurship training project was unable to assess the effectiveness of the project in meeting its long-term goals of increasing business ownership and self-sufficiency because the time frames for the study were too short. In this study, data collection was limited to 18 months after participants were randomly assigned, a period far shorter than the 60-month period recommended by experts. (See app.
IV for additional information on the methodological characteristics of these studies.) Experts generally agreed that ETA’s research had limited usefulness in informing policy and practice. Over one-third of the 39 experts reported that over the past 5 years, ETA’s research informed employment and training policy and state and local practices to a little extent or not at all. (See fig. 5.) Some experts commented that the design of these studies and the length of time to complete them and disseminate results reduced their usefulness. For example, many of the reports that we reviewed costing $1 million or more were multiyear projects that took, in most cases, about 3 to 5 years to complete. Some experts commented that the inclusion of shorter-length studies may be useful in times of rapidly changing economic conditions. At least one expert noted that some mixed-methods studies would be useful—studies that would allow for short-term interim findings that could facilitate changes in practice during the course of the research study. Members of our expert panel stressed the importance of ETA incorporating varied methodological approaches into its future research proposals to best position the agency to address key employment and training issues. Twenty-seven of the 39 experts reported it was very important that ETA evaluate its pilots and demonstrations. Twenty-three reported that it was very important that more randomized experimental research designs be integrated into ETA’s future research. (See fig. 6.) While several experts noted that these randomized experiments will allow ETA to identify the effectiveness of particular interventions or strategies, at least one expert suggested that ETA should be strategic in choosing the interventions it tests more rigorously, basing those decisions on what appears most promising in preliminary studies. 
Furthermore, 16 of the 39 experts also reported that it is very important for ETA to consider including more quasi-experimental studies in the future. As previously discussed, such studies would include designs that compare outcomes between groups with similar characteristics, but do not use random assignment. By including more quasi-experimental designs, ETA may be able to better understand the link between services and outcomes in those settings where random assignment is not possible, ethical, or practical.

ETA Has Taken Steps to Improve Its Research Program, but Additional Actions Are Needed

Labor Has Taken Steps to Reform Its Research Program

Labor has taken several steps designed to improve the way it conducts research, both at the department level and within ETA.

Department-level efforts. Labor has changed the organizational structure of research within the department. In 2010, acknowledging the need for better and more rigorous evaluations to inform its policy, Labor established the Chief Evaluation Office to oversee the department’s research and evaluation efforts. The office, which resides within the Office of the Assistant Secretary for Policy, has no authority to direct research within Labor’s agencies, according to officials. It does, however, manage evaluations supported by funds from a departmentwide account, oversee departmentwide evaluations, and provide consultation to Labor agencies, including ETA. Specifically, the office is responsible for creating and maintaining a comprehensive inventory of past, ongoing, and planned evaluation activities within Labor and for ensuring that Labor’s evaluation program and findings are transparent, credible, and accessible to the public. In fiscal year 2010, the Chief Evaluation Office had an estimated budget of $8.5 million, and two of its four staff were on board by the beginning of fiscal year 2011.

ETA efforts.
ETA has recently made changes to some of its research practices—chief among them is the involvement of stakeholders and outside experts in the research process. We previously criticized ETA for failing to consistently involve a broad range of stakeholders, outside experts, or the general public in deciding what areas of research it should undertake. We recommended that ETA take steps to routinely involve outside experts in the research agenda-setting process. For the upcoming 2010 to 2015 research plan, ETA has awarded a grant to the Heldrich Center at Rutgers University to convene an expert panel to help inform the research plan. The center is expected to issue a report in May 2011 that outlines the panel’s recommendations for research areas to include in the plan. In addition, ETA will work with other Labor agencies, as well as the Departments of Education and Health and Human Services, before finalizing its research agenda. Officials told us that they will also solicit public comments before the research plan is finalized. In addition to engaging stakeholders, ETA has also established a formal research process. As we previously reported, ETA developed and documented its research process in 2007. The agency’s actions were in response to a request by the Office of Management and Budget (OMB) to establish more formal policies and procedures to guide its research—a request that came out of OMB’s concerns about the manner in which ETA’s research was being carried out. Prior to 2007, ETA lacked a documented research process, and its research was often conducted in an ad hoc manner. ETA’s current research process identifies the steps, activities, and time frames it uses to carry out its research. Figure 7 illustrates critical components of ETA’s 8-step research process. ETA’s process contains several of the key elements identified by leading organizations as important for guiding research activities.
For example, the process includes specific steps the agency should take to identify the types of evaluations it will perform, as well as the administrative steps it should take to develop evaluation plans and select the research projects to fund. In addition, the process also specifies key events and time frames, and provides for monitoring the implementation of the research. For example, the process stipulates that ETA should alert OMB of research reports that have not been approved for dissemination within 9 months of being submitted and allows contractors to publicly release their research reports within those same time frames.

Some Areas of ETA’s Research Program Merit Further Attention

Despite ETA’s efforts, more action is needed to improve its research program. While ETA has taken steps to document its research process, its process lacks specific details in some areas, creating ambiguities that could undermine efforts to adhere to a formal process. For example, as we previously reported, its process lacks clear criteria, such as a dollar threshold or a particular methodological design feature, for determining which projects require peer review. And while the process specifies the actions project officers should take if reports are not released in a timely manner, it does not specify the consequences for failing to do so. We previously recommended that ETA establish more specific processes, including time frames for disseminating research reports. ETA has taken some action, such as revising the performance standards for project officers to hold them accountable for meeting time frames, but these steps do not fully satisfy the recommendation because the changes are not yet reflected in the formal research process. Moreover, ETA’s process is missing some critical elements that are needed to ensure that the current improvements become routine practices.

Consulting with the Chief Evaluation Officer.
ETA’s process lacks a formal provision requiring consultation with the newly established Chief Evaluation Officer at important points in the research process. For example, it contains no provision for consulting with the Chief Evaluation Officer when developing its annual list of research projects or when determining how ETA will invest its research and evaluation resources. Such consultation could help Labor better coordinate its research and evaluation efforts and better leverage its research funding. Moreover, the process contains no provision for involving the Chief Evaluation Officer in the early stages of developing its research projects. In the recent past, Labor officials told us that ETA has had difficulty developing requests for research and evaluation proposals that can pass OMB technical reviews. In particular, OMB has been critical of ETA’s research designs because they failed to provide for adequate sample size and appropriate methodologies that are needed to obtain useful results. In addition, OMB has also expressed concerns with ETA’s reliance on process evaluations rather than focusing on outcomes. These difficulties have resulted in delays in the research process. ETA has begun to consult with the Chief Evaluation Officer; however, these consultations are not a routine component in the formal process.

Setting the research agenda. ETA’s current process, as documented, begins with phase two—selecting specific research studies—and misses the important first step of setting the overall research agenda. This first phase of the process should include the steps that ETA will take to establish its research priorities and to update them on a regular basis. It should also include provisions for ensuring critical issues are considered and internal and external stakeholders are included in developing the plan. Officials noted that they plan to incorporate the agenda-setting phase into ETA’s formal process, but have not yet done so.
Setting the research agenda is key to ensuring that an appropriate mix of studies is included in future research. Failing to make this phase part of the formal process, including the specific steps to involve outside stakeholders that are currently under way, may leave ETA with little assurance that these efforts will continue in the future. Beyond ETA’s process for conducting research, current research practices fall short of ensuring research transparency and accountability—essential elements of a sound research and evaluation program. The research program has few, if any, safeguards to protect it from undue influence. According to officials, at times in the past decade, many key research decisions have been made outside of the office that is responsible for research. For example, decisions about which research studies would and would not be publicly released were made at the highest levels within ETA, and the criteria used to make those decisions were unclear. Of the 34 reports that ETA released to the public in 2008, 20 had waited between 2 and 5 years to be approved for public release. Several reports that had experienced long delays had relatively positive and potentially useful findings for the workforce system, according to our analysis. Among the studies delayed by almost 5 years was an evaluation of labor exchange services in the one-stop system that found certain employment services to be highly cost-effective in some situations. Another study, delayed for about 3.5 years, was a compendium of past and ongoing experimental studies of the workforce system, including early findings and recommendations for future research. In our previous report, we noted that ETA’s research and evaluation center lacked a specific mechanism to insulate it from undue influence. We reported that other federal agencies, such as the Department of Education’s Institute of Education Sciences and the National Science Foundation, engage advisory bodies in the research process. 
While not without tradeoffs in terms of additional time and effort, such an approach may serve to protect the research program from undue influence and improve accountability. ETA is currently involving outside experts in setting the research agenda for 2010 to 2015, but is not involving experts more broadly on research policy and practices.

ETA Has Recently Included More Random Assignment Studies in Its Research Program

ETA has recently begun to include more rigorous studies in its ongoing research. Of the 10 large, ongoing studies costing $2 million or more that began during the period of our review, three—the WIA Gold Standard Evaluation of the Adult and Dislocated Worker Programs, the Impact Evaluation of the Young Parents Demonstration, and the Evaluation of Project Growing America Through Entrepreneurship II (Project GATE II)—use experimental design with random assignment, as recommended by our experts. These ongoing studies—which range in cost from $2 million to nearly $23 million—have the potential to determine the effectiveness of some of the program services. Table 4 outlines some key characteristics of these three studies. Experimental designs with random assignment are an important means to understand whether various program components or services are effective, but they are also often difficult to design and implement in real-world settings. For example, in doing evaluations of employment and training programs, researchers often have difficulty in recruiting sample sizes large enough to detect meaningful outcomes. Because employment and training services may vary by location, and participants and their socio-economic environments are diverse, researchers must find ways to standardize procedures and treatment or service options. This often means recruiting relatively large samples. However, studies can be intrusive, often requiring program sites to change how they operate or to increase the resources available to participants.
As a result, recruiting sites and sufficient numbers of participants may be difficult. Some of ETA’s ongoing research studies face challenges in recruiting sample sizes large enough to meet the studies’ objectives. For example, an OMB review determined that the sample size for the Impact Evaluation of the Young Parents Demonstration had to be much larger in order to be able to assess the effectiveness of the program. At that time, ETA had already awarded two phases of grants. After consulting with the new Chief Evaluation Officer, ETA changed the number of participants required for the third phase from 100 to 400 to obtain a sample large enough to address OMB’s concerns and provide reliable estimates. However, grantees found it difficult to recruit even the 100 participants in the smaller sample, and it remains unclear whether they will be able to recruit all of the needed participants for the expanded design.

The WIA Gold Standard Evaluation of the Adult and Dislocated Worker Programs

The WIA Gold Standard Evaluation illustrates ETA’s difficulties in planning and executing large-scale, rigorous random assignment studies. WIA required that the Secretary of Labor conduct at least one multi-site control-group evaluation of the services and programs under WIA by the end of fiscal year 2005. ETA, however, delayed executing such a study, finally soliciting proposals in November 2007 and awarding the contract in June 2008. The contractor submitted the initial design report in January 2009 and provided ETA with design revisions in May 2010. Officials tell us researchers will soon begin randomly assigning participants. ETA expects to receive the first report (on implementation) during the winter of 2012-2013 and the final report in 2015—10 years later than the WIA-mandated time frame.
An OMB-selected panel of government experts—a technical working group composed of experts chosen by ETA, the evaluation contractor, and OMB staff—reviewed the original design for this study. Reviewers agreed the design contained many strengths, including the selection of an experimental design and a net impact approach; the addition of a process or implementation study to evaluate differences among sites and other implementation and data collection issues; the use of administrative and survey data; the collection of information on services received by participants in the control group; and the collection of a wide range of outcome data for participants. However, reviewers raised several concerns regarding the design. For example, they were skeptical that the researchers would be able to obtain a sufficiently large and representative sample to draw meaningful conclusions about the effectiveness of the national workforce system. In order to maximize participation, officials told us that the Assistant Secretary of ETA made personal phone calls to all selected sites to emphasize the importance of the study, offered an open door policy to site officials to discuss issues, and followed up with an appreciation letter. Furthermore, ETA required the evaluation contractor to provide reimbursement payments to each site to offset implementation costs. Reviewers also had several other concerns regarding which groups would be included in the study and which groups would not. For example, some experts raised concerns about getting accurate information on the youth program because of the large, one-time infusion of funds the program received from the American Recovery and Reinvestment Act of 2009. Reviewers were further concerned about the appropriateness of the evaluation objectives, the adequacy of steps taken to account for the effect of variation in services across sites on evaluation outcomes, and the external validity or generalizability of the study. 
In order to address these concerns, ETA made substantial adjustments to the original design. Specifically, ETA officials told us that based on an agreement with OMB, they instructed the contractor to drop the youth component from the evaluation and to focus only on the Adult and Dislocated Worker programs. While we received information on the new design and time frames for the WIA Gold Standard Evaluation, a finalized design plan is not yet available. According to officials, a finalized design is being prepared and will be available in June 2011.

ETA Has Improved the Availability of Its Research but There Are Opportunities to Improve Its Search Page and Dissemination Methods

ETA Has Improved the Timeliness of Its Disseminated Research

ETA has recently improved the timeliness with which it disseminates its research reports. In our last review in January 2010, we found that 20 of the 34 reports that ETA disseminated in 2008 had been waiting 2 to 5 years to be publicly released. The 34 research reports published by ETA in 2008 took, on average, 804 days from the time the report was submitted to ETA until the time it was posted to ETA’s research database. By contrast, from 2009 through the first quarter of 2010, the average time between submission and public release was 76 days, which represents a more than 90 percent improvement in dissemination time compared with 2008. Additionally, there were no research reports in 2009 that were delayed for more than 6 months. Further, the average time to dissemination improved significantly even when we excluded such outliers as the 20 research reports that were delayed for 2 years or more. Without these outliers, average time to dissemination for reports in 2008 was 100 days, indicating that time to dissemination in 2009 through the first quarter of 2010 still improved by 24 percent.
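The percentage improvements cited above follow directly from the reported averages. As an illustrative check (the day counts are the averages reported in this section; the constant and function names are ours, not ETA's):

```python
# Average days from report submission to public posting, as reported above.
AVG_2008_ALL = 804          # all 34 reports ETA published in 2008
AVG_2008_NO_OUTLIERS = 100  # 2008 average excluding the 20 long-delayed reports
AVG_2009_Q1_2010 = 76       # 2009 through the first quarter of 2010

def percent_improvement(before: float, after: float) -> float:
    """Percentage reduction in average dissemination time."""
    return (before - after) / before * 100

# "More than 90 percent improvement" versus the full 2008 average:
print(round(percent_improvement(AVG_2008_ALL, AVG_2009_Q1_2010), 1))          # -> 90.5
# Still a 24 percent improvement with the 2008 outliers excluded:
print(round(percent_improvement(AVG_2008_NO_OUTLIERS, AVG_2009_Q1_2010), 1))  # -> 24.0
```

Both results are consistent with the figures in the text.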
ETA Has Improved Its Research Database but Lacks Plans for Assessing the Usability of Its Search Page

In 2010, ETA updated its online, Web-based search page in order to improve the usability of its research database—the primary tool for making ETA research available to policymakers and the general public. Officials told us that ETA’s old Web-based search page was so error-prone and difficult to use that they opted to substitute it with one that had not yet completed internal testing. Our review of the old Web-based search page confirmed that it had serious limitations and did not consistently return the same results. For example, when we searched the database by title for a known ETA research report titled Registered Apprenticeship, we successfully retrieved that report once. One month later, when we entered the exact same search terms, we were unable to retrieve the report. (For a more complete description of our analysis of ETA’s search capability, see app. II.) In our review of the updated Web-based search page, we found that the updates make the research database more usable. Labor officials told us they have taken other steps, as well, in efforts to improve ETA’s Web-based search page. For example, they have developed a project plan that articulates the steps Labor will take to update ETA’s Web-based search page. In addition, they have assigned a database administrator whose responsibilities include performing daily quality control spot checks in order to monitor performance and address technical problems. Although these changes have the potential to improve the usability of ETA’s database, Labor has not developed a formal plan for assessing the overall effectiveness of its Web-based search page, including user satisfaction. Labor has made a number of changes to the way the page operates, but it has not provided users with tips on how to use the search functions, even though it is an industry standard to do so.
Even skilled users who were familiar with the old Web-based search page may need guidance on the exact meaning of new terms and functions now available on the new page. For example, the old Web-based search page gave users the option of searching by “key word,” which is no longer an option in the new page. Instead, “key word” searches have been replaced with a variety of other options, including the ability to search the full text or abstract of a research report. However, there is no guidance on the Web site on how to use these new search options. Industry best practices suggest that a Web site evaluation plan that incorporates data from routine reviews of Web site performance and that assesses user satisfaction can help agencies ensure the usability of their Web sites. ETA currently has no plans to do such assessments.

ETA Uses Various Methods to Disseminate Research, but Experts Suggest Additional Methods

At present, ETA’s research database is the primary method that ETA uses to make its research reports publicly available, according to officials. In order to call attention to new reports available in that database, ETA sends a Training and Employment Notice, also commonly known as a TEN, to an e-mail list of the more than 40,000 subscribers who have signed up to receive them. ETA’s research process specifies that for each new research report that is approved for dissemination, ETA must draft a TEN and an abstract before it is posted to ETA’s Web site. Beyond posting reports to its database, ETA also distributes hard copies of some of its research reports. In addition to electronic distribution, ETA also organizes various presentations to disseminate its research findings. These presentations, however, are done on an ad hoc basis. As mentioned in our prior report, ETA hosted a research conference in 2009 to present some of its research findings, renewing a practice that had been discontinued in 2003.
As ETA looks to the future, officials tell us they will plan and organize similar research conferences as resources permit. In addition to these research conferences, ETA’s regional offices occasionally hold smaller, regional conferences as well. Beyond these formal conferences, ETA also hosts an internal briefing series at Labor headquarters where research contractors present their findings to various officials. For each of these briefings, ETA has a list of stakeholders that it invites, including various Labor officials, outside agency officials, congressional staff, and other outside stakeholders. Experts who participated in our virtual panel provided their views on the effectiveness of different methods for disseminating research reports, and several of those rated more highly are methods currently employed by ETA. (See fig. 8.) Most of the experts (30 of the 39 respondents) in our panel reported that using e-mail notifications, a searchable database of ETA papers, and briefings at ETA for external audiences (including stakeholders and policymakers) would be very effective or extremely effective approaches for disseminating research. In addition, a majority of the experts (26 of the 39 respondents) in our panel reported that publishing one-page summaries of research findings, not currently done by ETA, would be very or extremely effective.

Conclusions

ETA plays an important role in developing workforce policies and helping to identify the most effective and efficient ways to train and employ workers for jobs in the twenty-first century. With the current economic crisis and high unemployment rates, ETA’s role has become even more critical. The agency has made some improvements in its research program, even since our last review a year ago. But officials can do more to ensure that the progress continues in the years to come.
ETA has taken a major step forward in establishing a formal research process—one that documents most actions that must be taken in the life cycle of a research or evaluation project. But it is missing some key elements that could help ensure the continuation of current practices. While ETA is currently using outside advisory bodies to help it establish its research agenda, the formal process does not include the agenda-setting phase. Officials tell us they have plans to incorporate this phase in the future, and we urge them to do so. Without a formalized agenda-setting phase, ETA may miss opportunities to ensure that its research agenda addresses the most critical employment and training issues and that outside stakeholders are routinely involved. Moreover, ETA’s process has not formalized the now ad hoc advisory role of the Chief Evaluation Officer. Absent the routine involvement of the Chief Evaluation Officer at key steps in the process, ETA may find it difficult to ensure that research proposals are asking the right questions, are methodologically sound, and can quickly pass critical OMB reviews. ETA’s research findings are now available to the public on its Web site in far less time than it took in 2008. Despite this clear improvement, ETA has not taken the necessary steps to ensure that research products remain readily available to the public. The decision regarding what and when to make research publicly available is left in the hands of too few, and the process lacks needed safeguards to ensure transparency and accountability. Absent safeguards, key research decisions may again be made in ways that harm the credibility of the program and prevent important research findings from being used to inform policy and practice. ETA’s Web-based search page is the primary means ETA uses to make the research studies it funds readily available to the public.
And, while ETA has improved the functionality of its Web site, no effort has been made to ensure that the problems that plagued the system in the past do not recur. Absent such efforts, ETA will have little assurance that its research findings are actually available to users.

Recommendations for Executive Action

To improve ETA’s research program, we recommend that the Secretary of Labor require ETA to take the following three actions:

Formally incorporate into its research process the routine involvement of the Chief Evaluation Officer at key milestones, including at the development of ETA’s annual research agenda and spending priorities, as well as at the early stages of developing specific research projects.

Develop a mechanism to enhance the transparency and accountability of ETA’s research program. For example, such a mechanism might include involving advisory bodies or other entities outside ETA in efforts to develop ETA’s research policies and processes.

Develop a formal plan for ensuring that ETA’s research products are easily accessible to stakeholders and to the general public through its searchable database. Such a plan could involve requiring Labor to assess the overall effectiveness of its Web-based search page, including user satisfaction with search features.

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Labor for review and comment. Labor provided written comments, which are reproduced in appendix VII. In addition, ETA provided technical comments, which we incorporated where appropriate. In its response, Labor generally agreed with our findings and all of our recommendations, noting its ongoing efforts in support of the recommendations. Regarding our recommendation to formally incorporate into its research process the routine involvement of the Chief Evaluation Officer at key research milestones, Labor noted that it is currently taking steps to do so.
Officials reported that they have worked closely with this office in various aspects of its research, including discussing research, demonstration projects, and evaluations in the early stages of development, and that they plan to continue this collaboration in the future. However, ETA’s comments did not discuss plans to update its documentation on the formal research process. We found in our review that involving the Chief Evaluation Officer was not an official component of ETA’s documented research process and that it occurred on an ad hoc basis. As ETA moves forward, we urge the agency to modify its current research process and document the involvement of the Chief Evaluation Officer at critical research milestones. Regarding our recommendation for ETA to develop a mechanism to enhance the transparency and accountability of its research program, officials cited several steps they are taking to improve the program, including involving outside experts in the development of their 5-year research plan and establishing advisory and peer review groups to review major evaluations. While officials noted that they plan to engage outside experts in broader research policies and processes, we encourage ETA to formalize this involvement. Moreover, we encourage ETA to continue to move forward in its efforts to further clarify components of its research process that are not well defined, including, for example, the criteria for deciding when a peer review should be performed. Regarding our recommendation to develop a formal plan to ensure that disseminated research is easily accessible to stakeholders and the general public, officials cited specific steps the agency has taken to improve its Web-based research database. While these actions are a step in the right direction, we believe that it is still important for Labor to develop a formal and comprehensive plan to ensure that disseminated research continues to be accessible to the public. 
Furthermore, Labor expressed concerns about how we characterized the agency’s budget for pilots, demonstrations, and research. Recognizing these concerns, we made changes to the report to better capture the amount of funding ETA has available for research. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Labor, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Appendix I: Status of Prior GAO Recommendations to the Department of Labor, as of January 2011 Department of Labor’s response The Department of Labor (Labor) does not agree with this recommendation as written. According to Labor officials, the Administrator of OPDR currently reports to the Deputy Assistant Secretary, not directly to ETA’s Assistant Secretary. However, Labor officials acknowledge that important functions such as research and evaluation should not have too many intermediary reporting layers. To facilitate communication, officials further noted that the OPDR Administrator, the Deputy Assistant Secretary, and the Chief Evaluation Officer meet on a monthly basis with the Assistant Secretary to discuss evaluations. Labor agrees with this recommendation, but authority to make key decisions still resides with the Office of the Assistant Secretary for ETA. 
OPDR currently provides recommendations to this office regarding plans for conducting and disseminating research. In an effort to improve evaluations departmentwide, the Secretary of Labor recently established the Chief Evaluation Office to monitor evaluation efforts across the department. OPDR has begun to work informally with the Chief Evaluation Officer and the Chief Economist to design and implement research and evaluation projects. Labor agrees with this recommendation. ETA reports that it has taken some steps to establish more specific processes regarding dissemination of research, citing changes in performance standards for project officers. However, our recommendation would make broader changes to their research process, and no such changes are reflected in the documents the agency provided. Labor’s actions do not completely satisfy recommendation … create an information system to track research projects at all phases to ensure timely completion and dissemination. Labor agrees with this recommendation. Officials report that they have begun working on a centralized, electronic tracking system for its research projects. However, the work is still under way and no time frames have been provided for its completion. Currently, OPDR uses an Excel document to keep an inventory of all research, demonstration, and evaluation projects. Labor’s actions do not completely satisfy recommendation … instruct ETA’s research and evaluation center to develop processes to routinely involve outside experts in setting its research agenda and, to the extent required, do so consistent with the Federal Advisory Committee Act. Labor agrees with this recommendation. OPDR has taken steps to engage outside experts in setting its 5-year research plan for 2011 and to collaborate with the research and evaluation centers of other federal agencies, such as the Departments of Education and Health and Human Services. 
OPDR also plans to convene an expert panel, solicit public comments, and incorporate feedback from its 2009 Reemployment Research Conference and its 2010 ETA Reemployment Summit. However, despite these current efforts, OPDR has not formally incorporated them into its standard research process. Appendix II: Scope and Methodology We were asked to review the Employment and Training Administration’s (ETA) research program to better understand its approach to conducting and disseminating research. Specifically, we answered the following research questions: (1) To what extent do ETA’s research priorities reflect key national employment and training issues, and how useful were the studies funded under them? (2) What steps has ETA taken to improve its research program? (3) How has ETA improved, if at all, the availability of its research since our last review in January 2010, and what other steps could ETA take to further ensure its research findings are readily available? To answer our research questions, we convened a virtual panel using a modified Delphi technique to obtain selected employment and training experts’ opinions on ETA’s research priorities and dissemination methods. We also visited two workforce agencies in Pennsylvania and Virginia that are implementing two of ETA’s ongoing research studies to learn about implementation issues and how research is being conducted. In addition, we reviewed 58 ETA-funded research and evaluation reports disseminated between January 2008 and March 2010 and assessed the methodological soundness of completed studies that cost $1 million or more. We also reviewed ETA’s ongoing studies that cost $2 million or more. To determine the availability of ETA’s research, we measured the time between when the final version of a research report was submitted to ETA’s Office of Policy Development and Research (OPDR) and when it was posted on ETA’s Web site. 
We also conducted a series of systematic searches to test the reliability of ETA’s research database. Furthermore, we interviewed Department of Labor (Labor) and ETA officials to better understand ETA’s research capacity, processes, and the use of research findings to inform policy and practice. Lastly, we reviewed relevant agency documents and policies, as well as relevant federal laws. Web-Based Expert Panel We convened a nongeneralizable Web-based virtual panel of 41 employment and training experts to obtain their opinions on ETA’s research priorities and dissemination methods. We employed a modified version of the Delphi method to organize and gather these experts’ opinions. To encourage participation by our experts, we promised that responses would not be individually identifiable and that results would generally be provided in summary form. To select the panel, we asked several employment and training experts, on the basis of their experience and expertise, to identify other experts who were knowledgeable about ETA and the research it conducts and disseminates. After receiving nominations from experts, we reviewed the list to ensure that it reflected a range of perspectives and backgrounds, including academics, researchers, and consultants. Our Delphi process entailed two survey phases. (See app. V for a copy of our phase I and phase II questionnaires.) In phase I, which ran from June 22, 2010, to August 9, 2010, we asked the panel to respond to five open-ended questions about ETA’s research priorities and dissemination methods. We developed these questions based on our study objectives and pretested them with four experts by phone to ensure the questionnaire was clear, unbiased, and did not place an undue burden on respondents. All relevant changes were made before we deployed the first Web-based questionnaire to experts. 
After the experts completed the open-ended questions in the first questionnaire, we performed a content analysis of the responses in order to identify the most important issues raised by our experts. Two members of our team categorized experts’ responses to each of the questions. Any disagreements were discussed until consensus was reached. Thirty-six of the 41 panelists selected completed phase I of the survey (about an 88 percent response rate). Those who did not complete phase I were allowed to participate in phase II. (For a list of experts who participated in phase I and phase II, see app. VI.) The experts’ responses to phase I were used to create the questions for phase II. In phase II, we gathered more specific information on ETA’s research and dissemination practices. Phase II, which ran from October 29, 2010, to December 14, 2010, consisted of 16 follow-up questions in which panelists were asked to either rank or rate the responses from phase I. We pretested the questionnaire for the second phase with three experts to ensure the clarity of the instrument. We conducted two of our expert pretests in person and one by phone. Thirty-nine of the 41 experts completed phase II (about a 95 percent response rate). Site Visits to Workforce Agencies Implementing ETA-Funded Research Studies To further enhance our understanding of how ETA conducts its research, we visited two workforce agencies that are implementing ETA’s ongoing research studies. First, we visited the Lancaster County Workforce Investment Board in Lancaster, Pa., which received funding from ETA to implement the Young Parents Demonstration project. This project provides educational and occupational skills training to promote employment and economic self-sufficiency for mothers, fathers, and expectant mothers ages 16 to 24. 
Second, we visited the Northern Virginia Workforce Investment Board in Falls Church, Va., which received funding from ETA to implement the second round of the Project Growing America Through Entrepreneurship, also referred to as Project GATE II. This grant helps dislocated workers aged 50 and over obtain information, classroom training, one-to-one technical assistance, counseling, and financial assistance to help them establish new businesses and sustain successful self-employment. We selected these workforce agencies because they were identified by ETA as having active research projects in the implementation stage. These sites also required minimal travel expenditures. During our site visits, we toured each workforce agency’s facilities and used a semistructured interview protocol to interview the project director and staff about their roles and responsibilities, the extent to which they communicate with ETA, and whether they face implementation challenges. At the Lancaster County site, we participated in an informal on-site lunch forum where local community programs that the agency partners with talked with us about their collaboration with the program. At the Northern Virginia GATE II site, we observed a focus group operated by the program to facilitate information-sharing among participants. After our site visits, we conducted phone interviews with the contractors that received funding from ETA to evaluate the outcomes of two research projects. Specifically, we interviewed the Urban Institute, which evaluates the Young Parents Demonstration project, and IMPAQ International, which evaluates Project GATE II. Both projects include an experimental component with control and comparison groups to determine the effects of program interventions on participants. 
During our interviews we used a semistructured questionnaire and asked questions to better understand their roles and responsibilities for the project, the extent to which they communicate with ETA, and whether they experience methodological and implementation challenges. Analysis of the Methodological Characteristics of ETA’s Research We reviewed the 58 research and evaluation reports that ETA disseminated between January 2008 and March 2010 and assessed the methodological soundness of 11 completed studies that cost $1 million or more. In addition, we reviewed 10 ongoing studies costing $2 million or more to determine if research practices or the soundness of research designs had changed over time. We categorized the 58 studies disseminated between January 2008 and March 2010 by study type, cost, and research area. For the larger studies costing $1 million or more, we analyzed key characteristics including design features, scope, generalizability, and the appropriateness of analytical approaches and statistical procedures. These studies were analyzed independently by two analysts, and the agreement between their ratings was 100 percent. (For results of this analysis, see app. IV.) Analysis of the Timeliness and Effectiveness of ETA’s Dissemination Activities To evaluate the availability of ETA’s research, we measured the time between when the final version of a research report was submitted to OPDR and when it was posted on ETA’s Web site. Specifically, we measured the dissemination time frames for reports posted in 2008 and compared them with the dissemination time frames for reports issued from January 2009 through March 2010. In addition, we conducted a series of systematic searches to test the reliability of ETA’s Web-based research database. To perform our searches, we selected a random sample of 30 reports from the 312 reports available on ETA’s research database at the time of our review. 
Specifically, we tested a variety of search functions available at the time of our review to determine the extent to which research reports could be easily retrieved on ETA’s research database. These functions included searches by title, keywords, author, and/or dates. We classified a report as retrievable if it appeared anywhere in our search results. We conducted our initial searches between June 30, 2010, and July 6, 2010. A second round of searches was conducted between August 6, 2010, and August 10, 2010. Further, we interviewed Labor and ETA officials to learn more about the search capabilities of ETA’s research database and the processes used to address errors and implement changes. Finally, we interviewed officials to gather information about ETA’s dissemination methods, including its current techniques and future plans for disseminating research reports. Interviews with Labor and ETA Officials To better understand the agency’s research capacity, we interviewed ETA officials and reviewed relevant agency and budget documentation. Similarly, to obtain information on ETA’s research process and how research findings are used to inform employment and training policy and practice, we interviewed officials and reviewed agency documentation, including relevant policies and procedures that guide ETA’s research. We also reviewed relevant federal laws. We conducted this performance audit from March 2009 through March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
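The two measurements at the heart of this methodology, dissemination time frames and search retrievability, can be sketched in a few lines of code. The sketch below is purely illustrative: the report names, dates, and search results are hypothetical, not data from the review described above.

```python
from datetime import date

# Hypothetical records: (report, date submitted to OPDR, date posted on the Web site)
reports = [
    ("Report A", date(2009, 3, 2), date(2009, 5, 11)),
    ("Report B", date(2009, 7, 15), date(2009, 9, 30)),
    ("Report C", date(2010, 1, 4), date(2010, 3, 19)),
]

# Dissemination time frame: days between submission and Web posting
lags = [(posted - submitted).days for _, submitted, posted in reports]
average_lag = sum(lags) / len(lags)

# Retrievability: a report counts as retrievable if it appears anywhere
# in the results of at least one tested search function
search_results = {
    "Report A": {"title": True, "keyword": True, "author": False},
    "Report B": {"title": False, "keyword": False, "author": False},
    "Report C": {"title": True, "keyword": False, "author": True},
}
retrievable = [r for r, hits in search_results.items() if any(hits.values())]

print(average_lag)   # average days from submission to posting
print(retrievable)   # reports retrievable by at least one search function
```

The same two summary measures (an average posting lag per year, and the share of sampled reports that are retrievable) underlie the comparisons of 2008 with 2009 through 2010 reported elsewhere in this document.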
Appendix III: The Panel’s Ratings of Key Employment and Training Issues, Populations, and Programs That ETA Should Address in Its Future Research In our Delphi phase II Web-based questionnaire, we asked the panel of experts to rate and rank the key employment and training issues, populations, and programs that ETA should address in its future research. These issues were identified by the panel during phase I. For our analysis, we calculated basic descriptive statistics on these issues, which are presented in tables 5 through 7. Appendix IV: Characteristics of Research Studies Disseminated between January 2009 and March 2010 That Cost $1 Million or More [This appendix presents a table that, for each study, identifies the research design (for example, experimental designs using randomized control trials; comparison v. treatment group designs using propensity score matching; and network analyses) and the research area (for example, increasing the labor market participation of underutilized populations, and integration of the workforce and regional economic development systems), along with notes on the characteristics data collected before and after each intervention.] Appendix V: Delphi Phase I and Phase II Questionnaires Appendix VI: Experts Who Agreed to Participate in GAO’s Delphi Panel
Trachtenberg School of Public Policy and Public Administration, George Washington University
Abt Associates Inc.
Mathematica Policy Research, Inc.
W.E. Upjohn Institute for Employment Research
National Bureau of Economic Research
Robert M. LaFollette School of Public Affairs, University of Wisconsin-Madison
W.E. Upjohn Institute for Employment Research
Abt Associates Inc.
John J. Heldrich Center for Workforce Development, Rutgers, The State University of New Jersey
Department of Economics, College of Arts and Sciences, American University
Mathematica Policy Research, Inc.
Department of Economics, University of Missouri-Columbia
Humphrey School of Public Affairs, University of Minnesota
Minnesota Department of Employment and Economic Development
W.E. Upjohn Institute for Employment Research
Institute for Policy Studies, Johns Hopkins University
Center for Law and Social Policy
Peterson Institute for International Economics
Mathematica Policy Research, Inc.
Appendix VII: Comments from the Department of Labor Appendix VIII: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact listed above, Dianne Blank, Assistant Director, and Kathleen White, analyst-in-charge, managed all phases of the engagement. Ashanta Williams assisted in managing many aspects of the work and was responsible for final report preparation. Lucas Alvarez and Benjamin Collins made significant contributions to all aspects of this report. In addition, Amanda Miller assisted with study and questionnaire design; Joanna Chan performed the data analysis; Stephanie Shipman advised on evaluation approaches; James Bennett provided graphics assistance; David Chrisinger provided writing assistance; Alex Galuten and Sheila McCoy provided legal support; and Sheranda Campbell and Ryan Siegel verified our findings. Related GAO Products
Program Evaluation: Experienced Agencies Follow a Similar Model For Prioritizing Research. GAO-11-176. Washington, D.C.: January 14, 2011.
Employment and Training Administration: Increased Authority and Accountability Could Improve Research Program. GAO-10-243. Washington, D.C.: January 29, 2010.
Workforce Investment Act: Labor Has Made Progress in Addressing Areas of Concern, but More Focus Needed on Understanding What Works and What Doesn’t. GAO-09-396T. Washington, D.C.: February 26, 2009.
Employment and Training Program Grants: Evaluating Impacts and Enhanced Monitoring Would Improve Accountability. GAO-08-486. Washington, D.C.: May 7, 2008.
Federal Research: Policies Guiding the Dissemination of Scientific Research from Selected Agencies Should Be Clarified and Better Communicated. GAO-07-653. Washington, D.C.: May 17, 2007.
Data Quality: Expanded Use of Key Dissemination Practices Would Further Safeguard the Integrity of Federal Statistical Data. GAO-06-607. Washington, D.C.: May 31, 2006.
Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005.
Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity. GAO-03-454. Washington, D.C.: May 2, 2003.
Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.
To help guide the nation's workforce development system, the Department of Labor's (Labor) Employment and Training Administration (ETA) conducts research in areas related to job training and employment. Building upon our earlier work, GAO examined the following: (1) To what extent do ETA's research priorities reflect key national employment and training issues and how useful were the studies funded under them? (2) What steps has ETA taken to improve its research program? (3) How has ETA improved the availability of its research since our last review in January 2010? To answer these questions, GAO reviewed ETA's research reports disseminated between January 2008 and March 2010 costing $1 million or more, as well as ongoing studies costing $2 million or more. GAO also convened a virtual expert panel, interviewed Labor and ETA officials, and reviewed relevant documents. ETA's 2007 to 2012 research plan generally addressed key employment and training issues, but some studies were limited in their usefulness. Most experts on our panel reported that the areas in ETA's plan reflected key national employment and training issues at least to a moderate extent. ETA invested most of its research and evaluation resources in the areas of Unemployment Insurance and increasing labor market participation of underutilized groups. Of the $96 million that ETA invested in the 58 research reports we reviewed, more than half—or about $56 million—funded studies in these two areas. The methodological approaches and statistical procedures researchers used in the studies we reviewed were generally consistent with the questions posed, but the studies were not always useful for informing policy and practice. For example, in one study, shortcomings in the data collection phase limited the strength of the findings. Experts suggested that ETA include more varied and rigorous methodologies in its future research projects. 
They also reported that future research should address additional areas, including a focus on employment and training approaches that work and for whom. Labor and ETA have taken steps to improve the way research is conducted, but additional actions are needed. In acknowledging the need for more rigorous evaluations to inform its policies, Labor recently established the Chief Evaluation Office to oversee departmentwide research and evaluation efforts. In addition, ETA made changes to some of its research practices. For example, ETA has begun involving outside experts in developing its research plan. Despite these improvements, ETA's process lacks critical elements needed to ensure that current improvements become part of its routine practices. For example, ETA's process lacks a formal provision to consult with the newly established Chief Evaluation Officer at important points in the research process. Moreover, ETA's current research practices fall short of ensuring research transparency and accountability—essential elements of a sound research program. For example, its research and evaluation center lacks safeguards to protect it from undue outside influence. ETA has recently begun efforts to increase the rigor of its research designs, but has faced design and implementation challenges. For example, some of ETA's ongoing research studies face challenges in recruiting large enough sample sizes to meet the studies' objectives. ETA has improved the availability of its research findings, but it lacks a plan for assessing the usability of its Web-based search page—the primary tool for making ETA's research publicly available. ETA recently improved the timeliness with which it disseminates its research reports, decreasing the average number of days to release its reports to the public from 804 days in 2008 to 76 days in 2009. ETA has taken steps to update its online, Web-based search page. 
However, the agency has not developed a formal plan for assessing the overall effectiveness of its Web-based search page, including user satisfaction. In addition to its research database, ETA uses a variety of other methods to disseminate its research, including providing its research reports at conferences and internal briefings. Experts suggested that ETA consider other effective dissemination methods, such as publishing a one-page summary of research findings.
Background To carry out its mission, SEC’s responsibilities are organized into 5 divisions and 23 offices. Of those, OCIE, the Division of Corporation Finance, and the Division of Enforcement are subject to section 961 of the Dodd-Frank Act. The roles and responsibilities of these offices are summarized in table 1. Section 961 of the Dodd-Frank Act requires SEC to submit to Congress (1) a report assessing the effectiveness of its internal supervisory controls and of the procedures applicable to staff who perform examinations, enforcement investigations, and reviews of financial securities filings; (2) a certification that SEC has adequate internal supervisory controls to carry out examinations, reviews of financial securities filings, and investigations; and (3) a summary of the Comptroller General’s findings on the adequacy and effectiveness of SEC internal supervisory controls. According to section 961, SEC must submit these reports no later than 90 days after the end of each fiscal year. SEC’s first three annual reports—for fiscal years 2010, 2011, and 2012—found no significant deficiencies in internal supervisory controls and concluded that the controls were effective. While not subject to section 961, SEC’s Office of the Chief Operating Officer (OCOO) and the Division of Risk, Strategy, and Financial Innovation provided advice and assistance to OCIE, Corporation Finance, and Enforcement in identifying, establishing, and carrying out internal control policies and procedures. For example, the Division of Risk, Strategy, and Financial Innovation advised the offices on developing appropriate statistical methods for testing controls. The OCOO has also provided guidance and training on how to implement an internal control process. 
In addition to the section 961 requirement, SEC is responsible for establishing and maintaining effective internal control and financial management systems that meet the objectives of the Federal Managers’ Financial Integrity Act of 1982 (FMFIA) (Pub. L. No. 97-255, 96 Stat. 814 (Sept. 8, 1982)). FMFIA requires agencies to annually assess and report on the internal controls that protect the integrity of their programs and on whether their financial management systems conform to related requirements. The Office of Management and Budget’s (OMB) Circular No. A-123, which requires agencies to provide an assurance statement on the effectiveness of programmatic internal controls and financial system conformance, provides guidance for implementing FMFIA. We review SEC’s internal controls for its financial management systems as part of our annual financial audit of the agency, and therefore these controls are not examined in this report. Internal Control Standards Under GAO’s internal control standards, agencies are responsible for developing detailed policies and procedures to fit their agency’s operations. Agencies may implement these standards at an office level to establish an overall framework for organizing the development and implementation of internal controls. The standards also can be implemented to help ensure that specific program activities are carried out according to adopted policies and procedures. Our standards are similar to the framework for internal control developed by the Committee of Sponsoring Organizations of the Treadway Commission (COSO). Five interrelated standards establish the minimum level of quality acceptable for internal control: Control Environment. Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. A positive control environment is the foundation for all other standards. 
It provides the discipline and structure as well as the climate that influences the quality of an organization’s internal control. Management’s philosophy and operating style also affect the environment, including management’s philosophy towards monitoring, audits, and evaluations. Risk Assessment. After establishing clear, consistent agency objectives, management should conduct an assessment of the risks the agency faces from external and internal sources. Risk assessment is the identification of risks associated with achieving the agency’s control objectives and analysis of the potential effects of the risk. Risk identification methods may include qualitative and quantitative ranking activities, management discussions, strategic planning, and consideration of findings from audits and other assessments. Risks should be analyzed for their possible effect and risk analysis generally includes estimating a risk’s likelihood of occurrence and its significance or impact if it were to occur. Because governmental, economic, regulatory, and operating conditions continually change, mechanisms should be provided to identify and appropriately deal with additional risk resulting from such changes. Control Activities. Control activities—policies and procedures that help management carry out its directives—help to ensure that actions are taken to address risks. Control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. The control activities should be effective and efficient in accomplishing the agency’s control objectives. Information and Communications. Key information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. Monitoring. 
Management should assess the quality of internal control performance over time and ensure that the findings of audits and other reviews are promptly resolved. Existing Internal Supervisory Control Framework Generally Reflects Accepted Standards of Internal Control As part of their efforts to respond to section 961 requirements, OCIE, Corporation Finance, and Enforcement put in place an internal supervisory control framework that generally reflects federal internal control standards. The framework requires that each office develop a formal process for identifying and assessing risks, identifying key internal controls that address those risks, assessing the operating effectiveness of internal controls, and reporting the results of the testing. According to staff, although internal controls were in place to oversee examinations, investigations, and securities filing reviews, the offices had no formal methods for identifying, documenting, or assessing internal supervisory controls prior to 2010. Before 2010, the offices annually assessed and provided assurance statements on the adequacy of their internal controls to comply with requirements of FMFIA and OMB Circular No. A-123; however, according to SEC officials, these assessments generally focused on controls affecting SEC’s financial statements and information technology. In response to section 961 of the Dodd-Frank Act, senior officers and staff from OCIE, Corporation Finance, and Enforcement and the Offices of the Chief Accountant, General Counsel, and Executive Director formed the 961 Working Group (Working Group) to coordinate the annual assessment and certification. This group also worked to coordinate the section 961 assessments with agencywide efforts to comply with FMFIA internal control requirements. The Working Group included senior-level managers who also were tasked with leading their office’s 961 annual assessment efforts. 
In fiscal year 2011, the Working Group expanded to include OCOO. Since fiscal year 2011, the MorganFranklin consulting firm has provided assistance to the offices on certain aspects of SEC's 961 program. During our interviews with members of the Working Group, staff demonstrated knowledge of their respective office's internal control framework, known gaps, and efforts to address gaps. Staff discussed risks to their respective programs and how existing controls addressed those risks. For example, OCIE staff discussed a key risk of examinations being conducted in a manner inconsistent with policies and procedures due to a gap in its processes for organizing and updating policies and procedures. OCIE staff described the development of the new governance structure and how it addresses this gap. The involvement of division management and senior officers in establishing the internal supervisory control framework, together with their in-depth understanding of each program's internal supervisory controls, risks, and plans to mitigate risks, reflects the control environment standard, which states that management should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. As they worked to develop the internal supervisory control framework, the Working Group used GAO's standards, guidance in OMB Circular No. A-123, and the Commission's own internal control guidance to public companies. To guide the design of the framework and internal supervisory controls assessment process, the Working Group identified three key principles—(1) control systems and assessments should be designed to provide reasonable assurance of effectiveness, (2) management should rely on its judgment, and (3) management should make judgments based on risk and its own knowledge and expertise to implement an efficient and effective evaluation process.
The Working Group also developed key definitions and criteria to better coordinate the offices' approach for determining the scope and required evidence needed to support management's evaluation and certification as required under section 961. For example, the group defined "internal supervisory control" to assist each office in scoping its assessment and established criteria for determining if a control evaluation finding rose to the level of a "deficiency" or "significant deficiency," consistent with generally accepted government auditing standards. The resulting internal supervisory control framework generally reflects federal internal control standards. Specifically, SEC's internal supervisory control framework includes the following elements: Identifying and assessing risks. Under SEC's framework, each office must conduct an annual risk assessment. Consistent with the risk assessment standard of internal control, each office's risk assessment includes processes for identifying and assessing key risks. To implement this process, each office assigned a small group, led by the managing executive or other senior officer, the task of identifying what they believed to be the key risks. The Working Group defines a key risk as one that, in the office's informed judgment, carries significant inherent risk to the office's ability to consistently conduct examinations, investigations, or reviews with professional competence and integrity. These small groups then evaluated the "inherent risk" associated with each key risk based on their judgment of the likelihood of the risk occurring and the severity of impact if it were to occur. Based on this evaluation, each risk was assigned a rating.
For example, for each identified risk in fiscal year 2011, Corporation Finance rated the likelihood of the risk occurring using a three-level system (low, medium, or high). It similarly rated each identified risk's impact. The group then used a three-by-three matrix to arrive at an overall risk rating. Identifying key internal controls that address the risks. For each key risk, the small groups identified corresponding key controls, including internal supervisory controls, used to address the risks. For example, OCIE requires examination reports and workpapers to be reviewed and approved by management at the end of every examination. This helps to ensure that applicable rules and regulations are reviewed and examinations are consistently performed. The key risks and controls are documented in a risk-assessment tool called a risk and control matrix and, according to SEC staff, vetted by other managers and senior officials within each respective office, and approved by each office's director. Specific controls implemented by each office are discussed in more detail later in this report. Assessing the operating effectiveness of internal controls. In developing SEC's framework, the Working Group incorporated the required 961 annual assessments. Consistent with the internal control standard for monitoring, the assessments provide the Commission and management with annual evaluations of the design and operating effectiveness of each office's internal supervisory controls. According to the Working Group, each office has the discretion to determine the methodology, including level of evidence and frequency, for testing each control that would provide management with reasonable assurance of the control's effectiveness.
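The three-by-three likelihood-impact matrix described above can be sketched in a few lines. The numeric scoring, thresholds, and example risks below are illustrative assumptions, not Corporation Finance's actual methodology:

```python
# Hypothetical sketch of a three-by-three risk-rating matrix: likelihood and
# impact are each rated low/medium/high, and the pair maps to an overall rating.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def overall_rating(likelihood: str, impact: str) -> str:
    """Combine a likelihood rating and an impact rating into an overall rating."""
    score = LEVELS[likelihood] * LEVELS[impact]  # assumed combination rule
    if score >= 6:   # e.g., high-high, high-medium, medium-high
        return "high"
    if score >= 3:   # e.g., medium-medium, high-low, low-high
        return "medium"
    return "low"

# Each identified key risk receives an overall rating from the matrix.
# The risk names are invented for illustration.
risks = {
    "inconsistent review of filings": ("medium", "high"),
    "incomplete documentation": ("low", "medium"),
}
ratings = {name: overall_rating(*lv) for name, lv in risks.items()}
```

The point of the sketch is only that the overall rating is a deterministic function of the two judgment-based inputs; in practice each office documented these ratings in its risk and control matrix.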
The Working Group also consulted with SEC's Division of Risk, Strategy, and Financial Innovation and used GAO's Financial Audit Manual for assistance in determining appropriate sample sizes, the acceptable number of errors for a given sample size, and methods for pulling random samples. Each office designated an assessment team to carry out the testing and took steps to maintain the objectivity of the testing. For example, Corporation Finance's senior assessment team segregated testing duties so that an associate director would not be involved in selecting samples or testing the work of the offices that he or she oversees. According to staff, fiscal year 2011 was the first year for which control testing was conducted under section 961. On the basis of our review of each office's assessment procedures and documentation of assessment findings for fiscal year 2011, each used accepted methods such as inquiry, observation, inspection, and direct testing. In fiscal year 2012, the director of OCR began signing the certification document. Section 961(c) requires the office director to certify that, among other things, he or she has evaluated the effectiveness of the internal supervisory controls during the 90-day period ending on the final day of the fiscal year to which the report relates and disclosed to the Commission any significant deficiencies in the design or operation of internal supervisory controls that could adversely affect the ability of the office to consistently conduct inspections, investigations, or financial securities filing reviews with professional competence and integrity. Reporting assessment results constitutes a significant part of an overall internal control framework and reflects the information and communication and monitoring components of the internal control standards. In fiscal year 2012, the Working Group took additional steps to improve the overall internal supervisory control framework and 961 assessment processes.
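The sample-based control testing described above can be illustrated with a minimal sketch: draw a random sample of examinations, then compare the number of exceptions found against a pre-set acceptable number of errors. The population, sample size, and error threshold here are assumptions for illustration, not values from GAO's Financial Audit Manual:

```python
import random

def select_sample(population, sample_size, seed=0):
    """Pull a random sample without replacement so each item has an equal chance."""
    rng = random.Random(seed)  # a fixed seed makes the selection reproducible for review
    return rng.sample(population, sample_size)

def control_effective(exceptions_found, acceptable_errors):
    """A tested control passes if exceptions do not exceed the acceptable number."""
    return exceptions_found <= acceptable_errors

# Hypothetical population: 250 closed examinations, 45 selected for testing.
exam_ids = [f"EXAM-{i:04d}" for i in range(1, 251)]
sample = select_sample(exam_ids, sample_size=45)
```

Segregating who runs `select_sample` from who performs the underlying work mirrors the objectivity safeguard described above.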
Notably, the group adopted a single set of procedures for conducting the annual assessments for all of the offices. In fiscal year 2011, each office used similar but separate processes for conducting its assessment. The fiscal year 2012 procedures maintain a risk-assessment methodology that continues the offices' focus on identifying key risks but differ in that they establish a common scale for assessing the likelihood and impact of key risks. The fiscal year 2012 procedures also provide a common definition of key controls and information on how to identify them; allow each office to design an appropriate control evaluation strategy; provide guidance—developed in consultation with economists from the Division of Risk, Strategy, and Financial Innovation—for conducting statistical testing of internal supervisory controls; and incorporate additional control testing guidance similar to that set forth in our Financial Audit Manual. Finally, the procedures incorporate guidance from the offices' fiscal year 2011 procedures on reporting the results of the assessments to office or division management, SEC's Chairman, and Congress. In fiscal year 2012, the Working Group further incorporated staff from OCOO and provided additional guidance aimed at improving the offices' risk assessment and control identification. According to OCOO staff, in fiscal year 2012 they periodically reviewed the offices' documentation of risks and controls, consulted with the offices to help address any challenges or questions, and helped staff use an electronic tool that assists in the identification of key risks and controls. This tool also captures control descriptions and data on control evaluation results and provides information to office management in a standardized format. Additionally, OCOO staff assisted OCIE and Enforcement with identifying potential gaps in risks and controls.
OCOO staff and MorganFranklin also provided staff from each of the three offices with additional training on how to identify and evaluate risks and controls. For example, training materials outline specific questions to ask when evaluating the design of new or established controls such as (1) how often the control activity was completed, (2) how the control was documented, and (3) the purpose of the documentation. Such training can help to improve future 961 assessments, specifically the evaluation of a control’s design to help ensure it includes clear and specific implementing procedures. OCOO plans to increase its support to each office’s risk and control identification and assessment process. Offices’ Work Processes Incorporate Internal Supervisory Controls Designed to Address Identified Risks As part of developing and applying the internal supervisory control framework, each office identified internal supervisory controls to address the risks identified through the risk assessment. These internal supervisory controls are built into the offices’ work processes—that is, the processes they use to carry out examinations, filing reviews, and investigations. The controls are intended to help ensure that objectives are being met and that the procedures applicable to staff carrying out these activities are conducted completely and consistently. They range from supervisory review and approval activities to information regularly provided to management to monitor the processes as a whole. According to staff, many of the offices’ internal supervisory controls existed prior to the development of SEC’s internal supervisory control framework in 2010. Others were developed through the process of developing the framework. 
Our review of each office’s process for conducting examinations, filing reviews, and investigations found that each included controls generally As noted earlier, agencies reflective of the internal control standards.may implement the internal control standards at an office level to establish an overall framework for organizing the development and implementation of internal controls and at the program level to help ensure that specific activities are carried out according to adopted policies and procedures. Figure 1 shows the relationship between the internal supervisory control framework and the internal supervisory controls established by each office at the program level. OCIE administers SEC’s nationwide examination program. Key risks to ensuring examinations are conducted in a manner consistent with OCIE objectives include (1) not effectively or efficiently selecting high-risk examination candidates and (2) examination findings that are not generally supported by the workpapers. To address these and other identified risks, OCIE developed controls that help ensure that high-risk examination candidates are selected in accordance with OCIE program goals and that managers perform oversight of examination workpapers to better ensure that examination findings are generally supported by workpapers. OCIE’s recent implementation of a new governance structure, generally referred to as the National Examination Program (NEP), has the potential to provide for greater standardization of the examination process and supervisory controls. Consistent with the standard of control environment, NEP defines areas of authority and responsibility. For example, under NEP, senior officers with the title of national associate head each of the five examination program areas. The national associates are charged with setting directives and helping ensure consistency across NEP. 
Under NEP, OCIE also created a number of committees responsible for carrying out designated activities. A primary function of the committees is to help ensure that policies and procedures are formally discussed, approved, and communicated. Such a committee structure reflects the control environment standard by clearly delegating authority and responsibility throughout OCIE. Further, OCIE created an Office of the Managing Executive responsible for general operational areas and oversight of internal controls. Assigning responsibility for internal controls to a senior-level manager demonstrates a commitment to internal control and is consistent with establishing a positive control environment. Finally, OCIE has been working with SEC University to develop an examiner certification program based on a job analysis of examiners to identify the skills needed. Such a program reflects the control environment standard, which calls for ensuring that all personnel possess and maintain a level of competence that allows them to accomplish their assigned duties. In addition to the examination program's governance structure, OCIE established a standardized set of policies and procedures for conducting examinations under NEP. These control activities are a key part of the framework. Prior to the adoption of standardized policies and procedures, the processes for conducting and documenting supervisory review of staff work varied. For example, some regional offices used control sheets to document staff work and supervisory review, while others indicated review through management's review of the examination report. The standardized policies and procedures outline the examination process and provide guidance to staff and supervisors for conducting and reviewing examinations. They also include existing management and supervisory activities intended to help ensure that examinations are carried out according to OCIE policies and are consistent with OCIE's goals and objectives. Examples of internal supervisory control activities include the following: Entity selection.
NEP management works with regional offices to determine registrants targeted for examination. Each year, NEP management holds several meetings to develop examination program goals and objectives, including guidance for the selection of registrants for examination and potential focus areas. To further assist OCIE management in selecting registrants for examination, OCIE’s Office of Risk Analysis and Surveillance staff use information from registration and other required forms, past examinations, and other sources to help identify regulated entities that likely pose the highest risk to investors. According to staff, each regional office is provided this information about the regulated entities in its jurisdiction, including specific areas of risk that a certain entity may pose. The regional offices incorporate local information and knowledge and confer with home office (headquarters) management and national associates on a semi-annual basis to determine registrants targeted for examination. Examination scope approval. Supervisors review and approve the initial scope of the examination and any subsequent modifications to the scope. After staff conduct the pre-examination research, procedures require the staff to schedule a pre-examination meeting with supervisors to discuss the areas that will be included in the scope of the examination and whether additional expertise or resources are needed. The staff document the decisions made and submit the scoping work to the supervisor for approval. Supervisors are expected to ensure that relevant pre-examination research is completed, including a review of: previous examinations and deficiencies; tips, complaints, and referrals; and Division of Enforcement activity. They also must determine that the proposed scope of the examination is appropriate and in line with OCIE goals and objectives. Examination workpaper review. 
Supervisors review and sign control sheets (or other examination workpapers). OCIE procedures require staff to document in the workpapers the steps that were taken during the course of the examination, the methodology used, the documents reviewed, and the findings and conclusions for each aspect of the examination. Supervisors review the key workpapers supporting the staff's findings to determine whether the work performed sufficiently assessed the focus areas in the scoping and planning documents. Supervisors also must review the evidence provided to determine if it sufficiently supports the findings and conclusions. Finally, procedures require that supervisors meet with the examination team after the information-gathering portion of the examination is substantially complete to discuss preliminary findings and any challenges encountered during the examination. In the event that staff discover facts that may result in an Enforcement referral, those facts should be brought to the immediate attention of an associate director. Once the appropriate associate director or national associate determines that an examination merits a referral to Enforcement, OCIE staff are to follow NEP procedures for documenting and communicating the referral to Enforcement. Examination report approval. An assistant director or higher-level supervisor approves the nonpublic examination report. After the examination team completes its examination but before it finalizes its nonpublic examination report, staff prepare the report and submit it for approval. Once examination findings are approved, an examination team will issue an examination summary or other closing letter to the registrant. Examination managers are responsible for ensuring that the examination summary letter includes information about any required response from the registrant and that the letter and report are properly filed in OCIE's systems. Examination closing approval.
An examination manager or a higher-level supervisor approves the closure of an examination. OCIE policies and procedures consider an examination to be closed after the assistant director or other authorized supervisor has approved the examination summary report, staff have sent an examination summary letter to the entity, and the entity has satisfactorily responded to the examination summary letter; or, when an Enforcement referral has been made and no further OCIE staff action is expected. According to staff, the examination manager or higher- level supervisor determines the sufficiency of an entity’s response. In addition to the standardized policies and procedures, OCIE also has been implementing a new examination tracking system, the Tracking and Reporting Examinations-National Documentation System (TRENDS), which is intended to improve documentation of staff work and supervisory reviews and approvals. Consistent with the internal control standard of control activities, TRENDS is designed to provide OCIE with a means of clearly documenting significant events in the examination process and making that documentation readily available for review and reporting purposes. TRENDS was created in 2011 to capture NEP data and information, including workpapers, examination scope, deficiencies, audit techniques, and management approvals. TRENDS replaces manual methods for maintaining the results of examination work. For example, TRENDS replaces paper-based scope memorandums and examination reports with on-line “working scope” and “examination summary” screens. In TRENDS, each examination workbook has three phases (prefieldwork, fieldwork, and postfieldwork). At the completion of the prefieldwork and postfieldwork phases, examination staff electronically submit the examination workbook for management approval. Supervisors then can approve the workbook or return it to the staff for corrections or additional work. 
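The TRENDS workbook flow just described (three phases, with electronic submission and supervisory approval gating the prefieldwork and postfieldwork phases) can be sketched as a simple state machine. The class and method names below are hypothetical simplifications, not the actual TRENDS design:

```python
# Minimal sketch of a phase-gated examination workbook, assuming three phases
# and supervisory approval at the end of prefieldwork and postfieldwork.
PHASES = ["prefieldwork", "fieldwork", "postfieldwork", "closed"]
NEEDS_APPROVAL = {"prefieldwork", "postfieldwork"}

class ExamWorkbook:
    def __init__(self, exam_id):
        self.exam_id = exam_id
        self.phase = "prefieldwork"
        self.pending_approval = False

    def submit_for_approval(self):
        # Staff electronically submit the workbook at the end of a gated phase.
        if self.phase in NEEDS_APPROVAL:
            self.pending_approval = True

    def supervisor_decision(self, approve):
        # Supervisors approve the workbook or return it to staff for more work.
        if not self.pending_approval:
            raise ValueError("nothing awaiting approval")
        self.pending_approval = False
        if approve:
            self.phase = PHASES[PHASES.index(self.phase) + 1]

wb = ExamWorkbook("EXAM-0001")
wb.submit_for_approval()
wb.supervisor_decision(approve=True)   # prefieldwork -> fieldwork
wb.phase = "postfieldwork"             # fieldwork has no approval gate in this sketch
wb.submit_for_approval()
wb.supervisor_decision(approve=True)   # postfieldwork -> closed
```

The gating mirrors the control objective: a workbook cannot advance past an approval-gated phase without an explicit, recorded supervisory decision.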
When staff receive a satisfactory registrant response to the examination summary letter, supervisors then perform a final approval by closing the examination. These approvals correspond to the internal supervisory control activities described earlier. TRENDS also contains built-in workflows and checklists that help ensure staff complete certain steps before an examination moves to the next phase, and automatic notifications that alert supervisors of pending reviews. TRENDS also allows staff to search associated or previously closed examinations and track the status of deficiencies, and will be used to collect examination program performance information and statistics. OCIE began a phased-in implementation of TRENDS; according to OCIE, by September 30, 2013, all OCIE examination programs will use the system for newly initiated examinations. Staff access rights to TRENDS examination information are based on the staff member's role in specific examinations. In general, staff may only access examinations to which they are assigned and work on those portions of the examination to which they have been assigned. Supervisors can access any examination for which they are responsible. All staff can open, in a read-only format, any closed examinations in TRENDS. Supervisors also obtain examination information through meetings, including the pre-examination meeting described above and, where feasible, the examination exit interview or conference call. According to staff, these meetings further enable supervisors to obtain the operating information necessary to determine if an examination team is meeting its objectives. OCIE also established standing meetings to discuss broader examination program information. For instance, OCIE holds monthly videoconferences with staff to provide updates on policies or procedures, share information on current examination program events and trends, and provide staff with the opportunity to raise issues with management.
In addition, senior officers in OCIE regional offices and headquarters conduct quarterly meetings with the assistant directors and exam managers to review all open examinations, and NEP senior management meets weekly to discuss program performance and goal achievement. Furthermore, OCIE management obtains pertinent information through monthly performance reports that are prepared by the Office of the Managing Executive. These reports contain key performance measures, such as the percentage of enforcement investigations resulting from examination referrals and the percentage of firms receiving examination summary letters that take corrective action in response to all examination findings. Finally, OCIE management monitors examination information to help ensure the office meets the statutory requirement that examinations be completed within the later of 180 days after the end of fieldwork or the date on which the last document was received from the registrant. OCIE also implemented a number of controls consistent with federal internal control standards for monitoring. In addition to the 961 annual assessments, supervisory oversight of examinations, and management review of regular reports and meetings, OCIE hired a senior specialized examiner to develop a compliance program within its Office of Chief Counsel. Since then, a compliance group has been formed and three additional permanent staff positions have been added to the group. The group periodically tests a random sample of examinations from each NEP office to evaluate compliance with documented procedures and make recommendations for improvement. According to staff, this group is empowered to select what to evaluate (and when) and reports to the Chief Counsel. As of March 5, 2013, the Office of Chief Counsel was in the process of filling a recently created assistant director position to lead OCIE's compliance group.
Since its creation, the group has completed six separate evaluations and, according to staff, has two additional evaluations ongoing. Moreover, OCIE established policies and procedures for responding to OCIE recommendations from GAO and SEC OIG audits. According to OCIE policy, management of the affected area will meet to discuss and draft a response to GAO and OIG audit findings. The Compliance, Ethics, and Internal Controls Steering Committee is responsible for reviewing management's proposed responses to GAO or OIG recommendations and other identified deficiencies. The committee discusses the response, obtains additional information if necessary, and can elect to elevate the response to OCIE's Executive Committee, which consists of the director of the NEP and at least seven members of the NEP's leadership team, including at least two representatives from headquarters, two from large regional offices, and three from smaller regional offices, if necessary. All responses to GAO and OIG recommendations are presented to OCIE's director for final approval. According to staff, any audit findings and recommendations made by OCIE's compliance unit follow a similar process. Noncontroversial or lower-level responses to recommendations may bypass the committees and go directly to the director for approval. Corporation Finance's Internal Supervisory Controls Are Designed to Help Ensure Financial Securities Filing Reviews Are Conducted Completely and Consistently Corporation Finance selectively reviews filings made under the Securities Act of 1933 and the Securities Exchange Act of 1934 to monitor and enhance compliance with the applicable disclosure and accounting requirements. The division identified key risks to meeting its objectives, including (1) not effectively identifying companies for review in accordance with regulations or that pose the greatest risk to investors and (2) not identifying and addressing material noncompliance in reviewing company disclosures.
The division developed key internal supervisory controls to address these and other risks, including documenting procedures for determining the level and scope of reviews. The review program for corporate financial securities filings, which falls under the Office of Disclosure Operations in Corporation Finance, includes a number of management efforts and processes designed to oversee the program's performance and establish a positive control environment. For example, the division created an organizational structure with clear lines of authority and reporting. The program consists of 12 assistant director-led offices, each responsible for filings from one or more sectors of the economy. Each office includes a number of attorneys and accountants who serve as first-line supervisors. The program is overseen by senior management consisting of a deputy director and five associate directors. In addition, in 2011 Corporation Finance created an Office of the Managing Executive responsible for general operational areas and oversight of internal controls. Assigning responsibility for internal controls to a senior-level manager demonstrates a commitment to internal control and is consistent with establishing a positive control environment. For the purposes of the 961 assessments, Corporation Finance defines "corporate financial securities filings" to mean filings containing financial statements and related disclosures that (1) public companies file with SEC in accordance with the Securities Act, Exchange Act, and Commission rules and regulations, and (2) fall within the scope of authority delegated by the Commission to the division. Corporation Finance Internal Supervisory Control Activities In addition to control environment procedures, the division has established policies and procedures for conducting filing reviews.
Specifically, the division’s filing review procedures include multiple internal supervisory controls to help ensure that filing reviews are being conducted completely and consistently and that the division’s goals and objectives are being met. Examples of internal supervisory controls that reflect the control activities standard are described below. Annual filing review goals. At the start of each fiscal year, division management develops goals for the filing review program. The goals include reviewing companies pursuant to section 408 of the Sarbanes-Oxley Act and internally defined criteria. The division also aims to conduct financial reviews of the most highly capitalized companies, reflecting a broad shareholder base, every year. In addition, division management suggests criteria for selecting other companies for review and allows broad discretion for assistant directors to make selections within these parameters. According to division officials, together these companies account for a substantial percentage of total market capitalization. Second-level supervisory review. Once identified for selective review, a filing enters the review cycle, which generally includes four phases: screening, examination, closing, and the public posting to http://www.SEC.gov of SEC comments and responses to them (“filing review correspondence”). For most filings, a second-level review is required during each of these phases. For example, in the examination phase, examiners evaluate the disclosures in the filing and document their evaluation and any proposed comments on compliance improvements or material noncompliance with applicable disclosure or financial statement requirements in an examination report. 
Designated second-level review staff then review the examination reports and proposed comments to confirm that the comments are consistent with prior comments from the assistant director's office, address appropriate issues, reflect the division's opinions and interpretations of disclosure and financial statement requirements, and generally comply with division policies. Second-level reviewers' findings are documented in a review report. Corporation Finance created various documents and electronic databases to record and store filing review data. Recording significant events in the filing review process and ensuring that documentation is readily available for review are consistent with the control activities standard. Generally, documentation for each filing review includes a screening sheet, an examination report, a review report, and a closing memorandum. Each document captures information on the filing review and describes staff members' participation. For example, the examination report captures factual information about the company, the filing, the staff member who performed the filing review, the nature (or type) of the filing, and any staff comments. The closing memorandum includes a list of the documents reviewed, the actions taken, when the review was concluded, and any significant issues identified during the review. The division maintains five distinct electronic databases to track, conduct, document, and report on different aspects of its filing review program. For example, the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system is the division's primary record-keeping system. EDGAR performs automated collection, validation, indexing, acceptance, and forwarding of submissions by companies and others that voluntarily file or are required by law to file forms with SEC.
Corporation Finance is aware that limitations within the databases require some information to be manually entered or uploaded and that such limitations increase opportunities for error and misinformation. As a result, division management recently began conducting periodic audits of access rights and data quality. Consistent with the internal control standard for information and communication, the division’s management interacts with supervisors and staff using standing meetings and memorandums to share information about the program’s progress toward meeting filing review goals, the quality of staff’s work, and compliance with established policies and procedures. Also, management regularly receives various standard and ad hoc reports about program performance. For example, assistant and associate directors receive weekly updates that provide a real-time snapshot of the division’s current workload. Division managers also receive monthly reports that present summary data on review activity and progress toward meeting goals for the number of reviews completed and timing. Finally, the division provides guidance and other program information to staff on its intranet site. In addition to the annual 961 assessment, Corporation Finance has implemented a number of controls consistent with the monitoring internal control standard. For example, according to Corporation Finance staff, their standing meetings are an important aspect of the division’s monitoring strategy and provide opportunities for senior officers to share information about resources, potential issues with filing reviews, or personnel matters within the assistant director offices. For instance, associate directors, assistant directors, and the senior assistant chief accountants in disclosure operations meet regularly to share information across the division and discuss trends or issues across filing reviews.
Corporation Finance staff also stated that assistant directors and senior accountants regularly meet with staff to gather information on what staff have seen in their filing reviews. Other internal supervisory controls that demonstrate the monitoring standard include the division’s practices of (1) releasing its correspondence with companies to the public, which allows for public scrutiny of its work, and (2) assigning a senior officer to manage the process of developing and tracking responses to audit recommendations. Corporation Finance also has efforts under way to help provide an overarching perspective on the quality of filing reviews.

Enforcement’s Internal Supervisory Controls Are Designed to Help Ensure Investigations Are Conducted Completely and Consistently

Enforcement is charged with investigating potential violations of the federal securities laws and litigating SEC’s enforcement actions. As documented by Enforcement, key risks to the division’s mission include (1) untimely identification and investigation of potential securities fraud and (2) failure to bring enforcement actions that could deter potential violators and protect investors. Enforcement developed internal supervisory controls to address these and other risks. In 2009, Enforcement began a review of its investigative process intended to streamline procedures and maximize resources. Since that time, Enforcement has implemented a number of actions that collectively reflect Enforcement management’s efforts to establish and manage its overall performance, in accordance with the internal control standard for control environment. These actions included the following: In 2009, Enforcement created the Office of the Managing Executive to oversee functions such as case-management systems and broader operational areas such as process improvement and internal controls. According to SEC officials, the new office enables staff to focus on mission-critical investigative activities.
In 2010, the division established the Office of Market Intelligence (OMI) to centrally handle tips, complaints, and referrals, known as TCRs. OMI uses a searchable database (known as the TCR system) to triage TCRs and assign or refer potential investigative leads. OMI is currently piloting a tool that will add analytics capabilities to the database to improve staff’s ability to identify high-value TCRs and to search for trends and patterns. Also, in 2010, the division reassigned approximately 20 percent of its staff to nationwide specialized units designed to concentrate on high-priority enforcement areas, including asset management (for example, hedge funds and investment advisors), market abuse (large-scale insider trading and market structure issues), structured and new products (such as derivatives products), Foreign Corrupt Practices Act violations, and municipal securities and public pensions. The units rely on the knowledge and expertise of experienced staff to better detect links and patterns that could suggest wrongdoing. Finally, Enforcement has been working with SEC University to develop a curriculum for all levels of staff to increase competency in investigative skills and knowledge of the division’s high-priority enforcement areas. The division maintains procedures that reflect the internal control standard for control activities and that are intended to help ensure that investigations are being carried out according to Enforcement’s policies. Such control activities are designed to occur early in and throughout the enforcement process. Supervisory review of TCR recommendations. According to OMI triage procedures, OMI staff review tips, complaints, and referrals before entering them into the TCR system, then decide whether a TCR should be (1) closed because it does not suggest a violation of securities law, (2) assigned for further review, (3) referred outside of Enforcement, or (4) assigned for investigation.
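The four-way disposition just described can be sketched as a small decision function. This is only an illustration of the report's description of OMI triage: the function name, its boolean inputs, and the outcome labels are assumptions for this sketch, not SEC's actual TCR system.

```python
# Hypothetical sketch of the four-way OMI triage decision described
# above. The inputs and outcome labels are illustrative assumptions,
# not SEC's actual TCR system or procedures.

OUTCOMES = {"close", "further_review", "refer_externally", "investigate"}

def triage_tcr(suggests_violation, belongs_outside_enforcement, needs_more_review):
    """Propose one of four dispositions for a tip, complaint, or
    referral (TCR)."""
    if not suggests_violation:
        return "close"             # (1) no apparent securities-law violation
    if belongs_outside_enforcement:
        return "refer_externally"  # (3) referred outside of Enforcement
    if needs_more_review:
        return "further_review"    # (2) assigned for further review
    return "investigate"           # (4) assigned for investigation

decision = triage_tcr(suggests_violation=True,
                      belongs_outside_enforcement=False,
                      needs_more_review=False)
print(decision)  # investigate
```

A fuller model would add the supervisory step the report describes next, in which management or senior investigative staff review every proposed disposition and closed TCRs can be re-opened.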
The division’s control activities include requirements for all decisions to be reviewed by management or senior investigative staff. In addition, TCRs that were closed without becoming an investigation may undergo additional supervisory review by an OMI attorney, assistant director, or senior-level subject-matter expert, and can be re-opened, if appropriate. Management discussions and documentation of formal orders of investigation. Recommendations to pursue a formal order of investigation are discussed between investigative staff and management and rely heavily on information from sources such as the staff’s informal inquiries, publicly available information, informants, complaints, and whistleblowers. Recommendations that are approved are documented in a signed memorandum to the Commission’s Office of the Secretary. Quarterly meetings for ongoing investigations. In 2010, Enforcement began conducting quarterly review meetings between supervisors and senior staff to discuss major milestones, resources, and other feedback for all open and active investigations. Supervisors document quarterly reviews by using check sheets. Supervisory review of resolutions. As investigations are brought to resolution, assistant directors must review and approve all staff recommendations to close an investigation. Senior officers approve and sign off on the final case-closing report. Each closing approval is documented in a memorandum and recorded in Enforcement’s case tracking system, called HUB. Enforcement management relies heavily on information communicated by staff and internal systems to carry out internal supervisory control responsibilities. The division has established various practices intended to help ensure that information is conveyed in a timely, relevant, and reliable form, in accordance with the accepted internal control standard for information and communication.
For example, staff may access common information about TCRs and active investigations through the TCR and HUB systems, which can encourage effective communication among staff about whether to exercise investigative and enforcement powers. In addition, during quarterly reviews, supervisors are expected to review the status of all open and active investigations, including information about target deadlines, potential impediments, and estimated resources. Weekly senior officer meetings and bimonthly meetings between senior division leadership and assistant directors enable discussion of key issues and developments that affect investigations. According to Enforcement officials, the meetings help ensure investigations stay on track and have the necessary resources. Finally, staff, supervisors, and senior division management hold a separate weekly meeting, known as the “To-be-calendared” meeting, to discuss all recommendations to pursue an enforcement action or settle an enforcement action in litigation. Enforcement’s procedures for conducting the 961 assessment, in addition to many of the activities noted above, are consistent with the internal control standard for monitoring. Monitoring controls help management oversee and assess the quality of the work of Enforcement staff. For example, supervisors regularly review information to (1) determine whether investigations are meeting the division’s strategic goals, performance goals, and compliance requirements; and (2) monitor staff performance. The division also complies with SEC’s procedures for responding to external audit recommendations.

Common Control Deficiencies Indicate Need for Continued Management Attention to Internal Supervisory Controls

We identified deficiencies in about half of the 60 internal supervisory controls we tested.
Specifically, we reviewed a nongeneralizable sample of 60 controls—20 controls from each office’s fiscal year 2011 risk and control matrix—that reflect (1) broad aspects of the offices’ internal supervisory control structure, and (2) our knowledge of previous internal control failures or high-risk areas. We found that about half (33 controls) were effectively designed and generally operating as intended. However, the other half had deficiencies in design or operating effectiveness. Specifically, for almost half (27) of the controls in our sample (1) descriptions of the control activity did not accurately reflect policy or practice; (2) documentation demonstrating the controls’ execution was not complete, clear, or consistent; or (3) the controls lacked clearly defined control activities. These control deficiencies may not prevent management from detecting whether the activities of the offices are conducted completely and in accordance with policy. However, the deficiencies were similar in nature across all three offices and made testing the controls difficult. Without clearly defined control activities and consistent, readily accessible documentation, management and others (including external auditors) may not be able to determine whether the supervisory controls were being appropriately applied and whether they were effective in achieving their intended outcomes. The offices have addressed or have been taking steps to address all 27 identified deficiencies. SEC officials identified some of these deficiencies as they tested the controls during their fiscal year 2011 assessments. Other control deficiencies in our sample were addressed during our review, after we had detailed discussions with SEC staff about the deficiencies. However, not enough time had passed to assess the effectiveness of these changes. First, in reviewing these controls we found that some descriptions of the control activity did not accurately reflect current policy or practice.
Six controls in our sample were difficult to review because the control description, as stated in the fiscal year 2011 risk and control matrix, did not accurately reflect the policy or practice in place during the audit period (see table 2). For example, one of the controls implemented by Enforcement stated that OMI was responsible for providing training on TCR system policies and procedures. However, when we questioned Enforcement officials about this control, they said that OMI does not maintain documentation of TCR training because it is provided on an informal, as-needed basis and that attendance records are maintained by a different SEC office. Enforcement updated its fiscal year 2012 risk and control matrix to reflect the SEC office responsible for implementing the control. Similarly, an OCIE control described supervisors’ use of control sheets to conduct the review of examination workpapers; however, we found that OCIE policy did not require the use of control sheets during the audit period. As OCIE continues to implement TRENDS, all supervisory reviews and approvals of examination control sheets or similar workpapers will be captured electronically. In March 2013, OCIE officials updated the risk and control matrix to better align the control description with current policy. Second, for some controls the documentation demonstrating execution of the control was not complete, clear, or consistent. For nine controls in our sample, the underlying documentation to support execution of the control was inconsistent, unclear, or missing (see table 3). For example, management reviews of OCIE examination reports were documented in different ways, conducted by different levels of management, and found in different locations in the examination file. As of April 2013, OCIE officials stated that they had addressed or were addressing deficiencies in all of these controls.
In another example, Enforcement’s documentation of supervisory review of case progress on a quarterly basis was not consistent and in a few instances lacked evidence demonstrating that the review took place. Specifically, we requested all checksheets from our audit period, a total of 168, used by supervisors to document their quarterly case reviews and found that the checksheets were not maintained in a manner readily available for review. As a result, we worked with Enforcement officials to select a sample of 65 checksheets to review. Upon review, we found that the practices for documenting supervisory review were inconsistent, which made our review challenging. For example, in some checksheets, supervisors signed the checksheet and also initialed next to each individual case listed on the checksheet. On other checksheets, supervisors signed the checksheet and either did not initial next to individual cases at all or initialed only next to select cases. Enforcement officials said that communication through standing meetings with assistant directors and executive management, rather than supervisory signatures, provided officials with confidence that the quarterly case reviews were taking place. To increase consistency in how the quarterly review sheets are executed, Enforcement provided guidance to its senior officers communicating that supervisors must sign the checksheet and that this signature will indicate that all matters on the checksheet have been reviewed. Finally, some controls lacked clearly defined control activities. Specifically, 12 controls in our sample were difficult to test because they were not designed to enable the control to operate effectively (see table 4).
For example, Corporation Finance’s policy requires a review of all Securities Act initial public offerings and initial Exchange Act registrations unless an associate director determines otherwise; however, we found that the division lacked specific procedures by which an associate director could indicate and document this decision. And, although decisions to forgo a second-level review at the screening and examination stages were made consistently, the documented procedures did not completely describe when exceptions to the general requirement were acceptable. In addition, Enforcement did not have a mechanism in place to implement its control that all policies and procedures are reviewed, updated, and approved on an annual basis. As of April 2013, all of these deficiencies were addressed or were being addressed.

Conclusions

Since the passage of the Dodd-Frank Act, OCIE, Corporation Finance, and Enforcement have established an internal supervisory control framework that is generally reflective of federal internal control standards. The offices’ efforts, including senior-level management and internal control experts’ involvement in the formation of the 961 Working Group, demonstrate a deliberate and coordinated approach to designing the framework. In addition, senior-level management’s involvement in the annual 961 assessments, as well as our audit, indicates a commitment to improving internal control. We found deficiencies in the design or operating effectiveness of about half of the 60 internal supervisory controls we tested. Specifically, for these internal supervisory controls, the description of the control activity did not accurately reflect policy or practice; the documentation demonstrating execution of the control was not complete, clear, or consistent; or the control lacked clearly defined control activities.
These control deficiencies may not prevent management from detecting whether the activities of the offices are conducted completely and in accordance with policy. However, the similarity in the nature of the deficiencies across all three offices suggests that management attention to the design and operation of internal supervisory controls is warranted. Federal internal control standards state that control activities should enable effective operation and have clear, readily available documentation. The offices have addressed or have been taking steps to address all 27 identified deficiencies. In some cases, the offices began to take corrective action before or during our audit based on their fiscal year 2011 section 961 assessment findings. Other control deficiencies were addressed during our review, after we had detailed discussions with SEC staff about the deficiencies. Because most actions became effective during our audit, not enough time had passed to test and verify the effectiveness of the actions SEC has been taking to address the identified deficiencies. Taking steps to ensure that all controls have clearly defined activities and clear and readily available documentation demonstrating execution of the activity would provide SEC management with better assurance that policies were being executed as intended and strengthen SEC’s internal supervisory control framework. Furthermore, SEC management and auditors would be better able to test and assess the effectiveness of a control, opening the door to further improvement in individual controls.

Recommendation for Executive Action

To help ensure that controls are properly designed and operating effectively, SEC should make certain that existing internal supervisory controls and any developed in the future have clearly defined activities and clear and readily available documentation demonstrating execution of the activities.

Agency Comments

We provided a draft of this report to SEC for review and comment.
SEC provided written comments, which are reprinted in appendix II. In its letter, SEC agreed with our recommendation. SEC also stated that GAO concluded that the agency has established an overall framework to implement section 961 that meets GAO’s internal control standards. While we found that OCIE, Corporation Finance, and Enforcement have established an internal supervisory control framework that is generally reflective of federal internal control standards, we also found deficiencies in the design or operating effectiveness of about half of the 60 internal supervisory controls we tested. The offices have addressed or have been taking steps to address all of the deficiencies. Further, SEC noted in its letter that it conducted additional testing on the effectiveness of its internal supervisory controls for the 90-day period ending September 30, 2012, and did not identify any material weaknesses or significant deficiencies. We did not evaluate SEC’s testing of controls for this time period as part of this report. SEC also provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to SEC, appropriate congressional committees and members, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report focuses on functions performed through the Office of Compliance Inspections and Examinations (OCIE), Division of Corporation Finance (Corporation Finance), and Division of Enforcement (Enforcement) at the Securities and Exchange Commission (SEC)—to which we refer collectively as the offices.
We examined (1) the steps the offices have taken toward developing an internal supervisory control framework over the specified programs, (2) the internal supervisory controls each office has implemented and how these controls reflect established internal control standards, and (3) the extent to which the internal supervisory controls have operated as intended. To describe the steps each office has taken toward developing an internal supervisory control framework over the specified programs, we evaluated and analyzed documentation from (1) fiscal year 2011 assessments that OCIE, Corporation Finance, and Enforcement completed in accordance with requirements of section 961 of the Dodd-Frank Wall Street Reform and Consumer Protection Act; (2) SEC’s reports to Congress; and (3) documentation related to each office’s fiscal year 2011 testing of internal supervisory controls. We also reviewed documentation from the 961 Working Group and Office of the Chief Operating Officer, such as training presentations and documents describing the electronic tool used to capture risk and control information. We also reviewed previous GAO reports on other internal control frameworks and GAO’s audits of SEC’s financial statements and the Federal Managers’ Financial Integrity Act process. We compared SEC’s internal supervisory control framework with frameworks set forth in GAO’s Standards for Internal Control in the Federal Government. We interviewed officials from OCIE, Corporation Finance, Enforcement, and the Office of the Chief Operating Officer about actions taken to develop an internal supervisory control framework and how the framework addresses accepted internal control standards.
To describe the internal supervisory controls that exist as part of the offices’ processes for conducting complete and consistent examinations, reviews of financial securities filings, and investigations, we evaluated and analyzed documentation from OCIE, Corporation Finance, and Enforcement, including policies and procedures for conducting examinations, filing reviews, and investigations. We also analyzed the offices’ fiscal years 2011 and 2012 risk and control matrixes, in which they identify key risks and controls designed to mitigate those risks. Furthermore, we observed the information technology systems used to track and document these activities. We interviewed officials from these offices about the examination, filing review, and investigation processes and the specific internal supervisory controls that each office has in place. We also interviewed these officials and MorganFranklin, the consulting firm hired to help assess the offices’ internal supervisory controls, to better understand their work processes, internal supervisory controls, and how each office has been addressing individual internal control standards. Finally, we obtained staff views on each office’s internal controls and communication from focus groups of randomly selected supervisory and nonsupervisory staff from OCIE and Enforcement in the Fort Worth, Texas; Miami, Florida; and Los Angeles, California, regional offices and headquarters. We obtained similar information from Corporation Finance supervisory and nonsupervisory staff. We assessed a nongeneralizable sample of 60 fiscal year 2011 internal supervisory controls relevant to the conduct of examinations, filing reviews, and investigations to determine whether they operated as intended. We identified 135 controls that we categorized according to the internal control standard (control environment, risk assessment, control activities, information and communication, and monitoring) each best demonstrated.
We selected a nonprobability sample of 11 OCIE, 10 Corporation Finance, and 11 Enforcement controls to review based on known information on past internal control failures and high-risk activities. We supplemented this sample with a random selection of 9 controls from OCIE and Enforcement and 10 controls from Corporation Finance from the remaining population, for a total of 20 controls from each office. For the selected controls, we reviewed the policies, procedures, and stated control objectives of the offices to determine if selected internal supervisory controls were designed in a manner capable of achieving their stated objectives. We also interviewed staff from each office on the operation of these controls. To review the operational effectiveness of the selected controls, we directly observed the electronic databases or spreadsheets described in some controls, obtained documentation or electronic data to analyze other controls, and compared the evidence with each control’s description to determine whether the control functioned as intended. The methodology used to review each control varied due to the nature of each control, the availability of control-level data, and the different methods used to document the control. In this report, we present our findings on controls with deficiencies in tables 2 through 4. The results of our reviews of the design and functioning of the specified controls are applicable only to the tested control for the audited time period and therefore are not generalizable to all of SEC’s internal supervisory controls. To review the fiscal year 2011 testing conducted by each office, we reviewed documentation describing the methodologies used and the results. As our review did not identify or test every control, it should not be interpreted as an attestation of the offices’ internal control. We conducted this performance audit from February 2012 to April 2013 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Securities and Exchange Commission

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Andrew Pauline (Assistant Director), Bethany Benitez, Tiffani Humble, Matt Keeler, Kristen Kociolek, Jonathan Kucskar, Mark Molino, Luann Moy, Mark Ramage, and Barbara Roesmann made key contributions to this report.
Recent high-profile securities frauds have raised questions about the internal controls that SEC has in place to help ensure that staff carry out their work completely and in a manner consistent with applicable policies and procedures. Section 961 of the Dodd-Frank Act directs SEC to annually assess and report on internal supervisory controls for staff performing examinations, corporate financial securities filing reviews, and investigations. The act also requires GAO to review SEC's structure for internal supervisory control applicable to staff working in those offices. This report examines the (1) steps the offices took to develop an internal supervisory control framework; (2) internal supervisory controls each office has implemented; and (3) extent to which the internal supervisory controls have operated as intended. GAO reviewed each office's section 961 assessments and reports; analyzed the offices' internal supervisory control framework; and tested a sample of 60 supervisory controls using random samples and nonprobability selections. After the passage of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) in 2010, the Securities and Exchange Commission's (SEC) Office of Compliance Inspections and Examinations, Division of Corporation Finance, and Division of Enforcement (herein "the offices") established a working group that developed an internal supervisory control framework. Internal supervisory controls include the processes established by management to help ensure that procedures applicable to staff are performed completely, consistent with applicable policies and procedures, and remain current. The overall control framework, which includes identifying and assessing risks, identifying and assessing internal controls, and reporting the results of testing to management and Congress, is generally consistent with federal internal control standards.
As part of developing and applying an internal supervisory control framework, the offices each identified internal supervisory controls to mitigate risks that could undermine their ability to consistently and competently carry out their responsibilities. These internal supervisory controls are built into the offices' work processes--that is, the processes they use to carry out examinations, financial securities filing reviews, and investigations--and range from specific supervisory review and approval activities to management reports used to monitor the processes as a whole. For example, within Enforcement, supervisors must review and approve staff recommendations that a tip, complaint, or referral be closed without further investigation. Many of the offices' internal supervisory controls existed prior to the development of SEC's internal supervisory control framework; others were developed through the process of developing the framework. GAO identified deficiencies in about half of the 60 internal supervisory controls it tested. Specifically, GAO found that for 27 internal supervisory controls (1) the description of the control activity did not accurately reflect policy or practice; (2) documentation demonstrating execution of the control was not complete, clear, or consistent; or (3) the controls lacked clearly defined control activities. These control deficiencies may not prevent management from detecting whether the activities of the offices are conducted completely and in accordance with policy. However, similarities in the nature of deficiencies across all three offices suggest that management attention to the design and operation of internal supervisory controls is warranted. Federal internal control standards state that control activities should enable effective operation and have clear, readily available documentation. The offices have addressed or have been taking steps to address all of the 27 identified deficiencies. 
Some steps have been taken based on the offices' section 961 assessments. SEC addressed other deficiencies during GAO's review, after detailed discussions with GAO about the identified deficiencies. Not enough time has passed for GAO to assess the effectiveness of these changes. Ensuring that all internal supervisory controls have clearly defined activities and clear, readily available documentation demonstrating execution of the control would provide SEC management with better assurance that policies were being executed as intended and strengthen SEC's internal supervisory control framework.
Scope and Methodology

To determine why NRC did not identify and prevent the vessel head corrosion at the Davis-Besse nuclear power plant, we reviewed NRC’s lessons-learned task force report; FirstEnergy’s root cause analysis reports; NRC’s Office of the Inspector General reports on Davis-Besse; NRC’s augmented inspection team report; and NRC’s inspection reports and licensee assessments from 1998 through 2001. We also reviewed NRC generic communications issued on boric acid corrosion and on nozzle cracking. In addition, we interviewed NRC regional officials who were involved in overseeing Davis-Besse at the time corrosion was occurring, and when the reactor vessel head cavity was found, to learn what information they had, their knowledge of plant activities, and how they communicated information to headquarters. We also held discussions with the resident inspector who was at Davis-Besse at the time that corrosion was occurring to determine what information he had and how this information was communicated to the regional office. Further, we met with FirstEnergy and NRC officials at Davis-Besse and walked through the facility, including the containment building, to understand the nature and extent of NRC’s oversight of licensees. Additionally, we met with NRC headquarters officials to discuss the oversight process as it related to Davis-Besse, and the extent of their knowledge of conditions at Davis-Besse. We also met with county officials from Ottawa County, Ohio, to discuss their views on NRC and Davis-Besse plant safety. Further, we met with representatives from a variety of public interest groups to obtain their thoughts on NRC’s oversight and the agency’s proposed changes in the wake of Davis-Besse. To determine whether the process NRC used was credible when deciding to allow Davis-Besse to delay its shutdown, we evaluated NRC guidelines for reviewing licensee requests for temporary and permanent license changes, or amendments to their licenses.
We also reviewed NRC guidance for making and documenting agency decisions, such as those on whether to accept licensee responses to generic communications, as well as NRC’s policies and procedures for taking enforcement action. We supplemented these reviews with an analysis of internal NRC correspondence related to the decision-making process, including e-mail correspondence, notes, and briefing slides. We also reviewed NRC’s request for additional information to FirstEnergy following the issuance of NRC’s generic bulletin for conducting reactor vessel head and nozzle inspections, as well as responses provided by FirstEnergy. In addition, we reviewed the draft shutdown order that NRC prepared before accepting FirstEnergy’s proposal to conduct its inspection in mid-February 2002. We reviewed these documents to determine whether the basis for NRC’s decision was clearly laid out, persuasive, and defensible to a party outside of NRC. As part of our analysis for determining whether NRC’s process was credible, we also obtained and reviewed NRC’s probabilistic risk assessment (PRA) calculations that it developed to guide its decision making. To conduct this analysis, we relied on the advice of consultants who, collectively, have an extensive background in nuclear engineering, PRA, and metallurgy. These consultants included Dr. John C. Lee, Professor and Chair, Nuclear Engineering and Radiological Sciences at the University of Michigan’s College of Engineering; Dr. Thomas H. Pigford, Professor Emeritus, at the University of California-Berkeley’s College of Engineering; and Dr. Gary S. Was, Associate Dean for Research in the College of Engineering, and Professor, Nuclear Engineering and Radiological Sciences at the University of Michigan’s College of Engineering. These consultants reviewed internal NRC correspondence relating to NRC’s PRA estimate, NRC’s calculations, and the basis for these calculations. 
These consultants also discussed the basis for NRC’s estimates with NRC officials and outside contractors who provided information to NRC as it developed its estimates. These consultants were selected on the basis of recommendations made by other nuclear engineering experts, their résumés, their collective experience, lack of a conflict of interest, and previous experience with assessing incidents at nuclear power plants such as Three Mile Island. To determine whether NRC is taking sufficient action in the wake of the Davis-Besse incident to prevent similar problems from developing in the future, we reviewed NRC’s lessons-learned task force recommendations, NRC’s analysis of the underlying causes for failing to identify the corrosion of the reactor vessel head, and NRC’s action plan developed in response to the task force recommendations. We also reviewed other NRC lessons-learned task force reports and their recommendations, our prior reports to identify issues related to those at Davis-Besse, and NRC’s Office of the Inspector General reports. We met with NRC officials responsible for implementing task force recommendations to obtain a clear understanding of the actions they were taking and the status of their efforts, and discussed NRC’s recommendations with NRC regional officials, on-site inspectors, and representatives from public interest groups. We conducted our review from November 2002 through May 2004 in accordance with generally accepted government auditing standards.

Background

NRC’s Role and Responsibilities

NRC, as an independent federal agency, regulates the commercial uses of nuclear material to ensure adequate protection of public health and safety and the environment. NRC is headed by a five-member commission appointed by the President and confirmed by the Senate; one commissioner is appointed as chairman. NRC has about 2,900 employees who work in its headquarters office in Rockville, Maryland, and its four regional offices.
NRC is financed primarily by fees that it imposes on commercial users of the nuclear material that it regulates. For fiscal year 2004, NRC’s appropriated budget of $626 million includes about $546 million financed by these fees. NRC regulates the nation’s commercial nuclear power plants by establishing requirements for plant owners and operators to follow in the design, construction, and operation of the nuclear reactors. NRC also licenses the reactors and individuals who operate them. Currently, 104 commercial nuclear reactors at 65 locations are licensed to operate. Many of these reactors have been in service since the early to mid-1970s. NRC initially licensed the reactors to operate for 40 years, but as these licenses approach their expiration dates, NRC has been granting 20-year extensions. To ensure the reactors are operated within their licensing requirements and technical specifications, NRC oversees them by both inspecting activities at the plants and assessing plant performance. NRC’s inspections consist of both routine, or baseline, inspections and supplemental inspections to assess particular licensee programs or issues that arise at a power plant. Inspections may also occur in response to a specific operational problem or event that has occurred at a plant. NRC maintains inspectors at every operating nuclear power plant in the United States and supplements the inspections conducted by these resident inspectors with inspections conducted by staff from its regional offices and from headquarters. Generally, inspectors verify that the plant’s operator qualifications and operations, engineering, maintenance, fuel handling, emergency preparedness, and environmental and radiation protection programs are adequate and comply with NRC safety requirements. NRC also oversees licensees by requesting information on their activities. 
NRC requires that information provided by licensees be complete and accurate and, according to NRC officials, this is an important aspect of the agency’s oversight. While we have added information to this report on the requirement that licensees provide NRC with complete and accurate information, we believe that NRC’s oversight program should not place undue reliance on this requirement. Nuclear power plants have many physical structures, systems, and components, and licensees have numerous activities under way, 24 hours a day, to ensure the plants operate safely. Programs to ensure quality assurance and safe operations include monitoring, maintenance, and inspection. To carry out these programs, licensees typically prepare several thousand reports per year describing conditions at the plant that need to be addressed to ensure continued safe operations. Because of the large number of activities and physical structures, systems, and components, NRC focuses its inspections on those activities and pieces of equipment or systems that are considered to be most significant for protecting public health and safety. NRC terms this a “risk-informed” approach for regulating nuclear power plants. Under this risk-informed approach, some systems and activities that NRC considers to have relatively less safety significance receive little NRC oversight. NRC has adopted a risk-informed approach because it believes it can focus its regulatory resources on those areas of the plant that the agency considers to be most important to safety. In addition, it was able to adopt this approach because, according to NRC, safety performance at nuclear power plants has improved as a result of more than 25 years of operating experience. To decide whether inspection findings are minor or major, NRC uses a process it began in 2000 to determine the extent to which violations compromise plant safety.
Under this process, NRC characterizes the significance of its inspection findings by using a significance determination process to evaluate how an inspection finding impacts the margin of safety at a power plant. NRC has a range of enforcement actions it can take, depending on how much the safety of the plant has been compromised. For findings that have low safety significance, NRC can choose to take no formal enforcement action. In these instances, nonetheless, licensees remain responsible for addressing the identified problems. For more serious findings, NRC may take more formal action, such as issuing enforcement orders. Orders can be used to modify, suspend, or even revoke an operating license. NRC has issued one enforcement order to shut down an operating power plant in its 28-year history—in 1987, after NRC discovered control room personnel sleeping while on duty at the Peach Bottom nuclear power plant in Pennsylvania. In addition to enforcement orders, NRC can issue civil penalties of up to $120,000 per violation per day. Although NRC does not normally use civil penalties for violations associated with its Reactor Oversight Process, NRC will consider using them for issues that are willful, have the potential for impacting the agency’s regulatory process, or have actual public health and safety consequences. In fiscal year 2003, NRC proposed imposing civil penalties totaling $120,000 against two power plant licensees for the failure to provide complete and accurate information to the agency. NRC uses generic communications—such as bulletins, generic letters, and information notices—to provide information to and request information from the nuclear industry at large or specific groups of licensees. Bulletins and generic letters both usually request information from licensees regarding their compliance with specific regulations. They do not require licensees to take any specific actions, but do require licensees to provide responses to the information requests.
In general, NRC uses bulletins, as opposed to generic letters, to address significant issues of greater urgency. NRC uses information notices to transmit significant recently identified information about safety, safeguards, or environmental issues. Licensees are expected to review the information to determine whether it is applicable to their operations and consider action to avoid similar problems.

Operation of Pressurized Water Nuclear Power Plants and Events Leading to the March 2002 Discovery of Serious Corrosion

The Davis-Besse Nuclear Power Station, owned and operated by FirstEnergy Nuclear Operating Company, is an 882-megawatt electric pressurized water reactor located on Lake Erie in Oak Harbor, Ohio, about 20 miles east of Toledo. The power plant is overseen by NRC’s Region III office, located in Lisle, Illinois. Like other pressurized water reactors, Davis-Besse is designed with multiple barriers between the radioactive heat-producing core and the outside environment—a design concept called “defense-in-depth.” Three main design components provide defense-in-depth. First, the reactor core is designed to retain radioactive material within the uranium oxide fuel, which is also covered with a layer of metal tubing. Second, a 6-inch-thick carbon steel vessel, lined with three-sixteenths-inch-thick stainless steel, surrounds the reactor core. Third, a steel containment structure, surrounded by a thick reinforced concrete building, encloses the reactor vessel and other systems and components important for maintaining safety. The containment structure and concrete building are intended to help not only prevent a release of radioactivity to the environment, but also shield the reactor from external hazards like tornadoes and missiles. The reactor vessel, in addition to housing the reactor core, contains highly pressurized water to cool the radioactive heat-producing core and transfer heat to a steam generator.
Consequently, the vessel is referred to as the reactor pressure vessel. From the vessel, hot pressurized water is piped to the steam generator, where a separate supply of water is turned to steam to drive turbines that generate electricity. (See fig. 1.) The top portion of the Davis-Besse reactor pressure vessel consisted of an 18-foot-diameter vessel head that was bolted to the lower portion of the pressure vessel. At Davis-Besse, 69 vertical tubes penetrated and were welded to the vessel head. These tubes, called vessel head penetration nozzles, contained control rods that, when raised or lowered, were used to moderate or shut down the nuclear reaction in the reactor. Because control rods attach to control rod drive mechanisms, these types of nozzles are referred to as control rod drive mechanism nozzles. A platform, known as the service structure, sat above the reactor vessel head and the control rod drive mechanism nozzles. Inside the service structure and above the pressure vessel head was a layer of insulation to help contain the heat emanating from the reactor. The sides of the lower portion of the service structure were perforated with eighteen 5- by 7-inch rectangular openings, termed “mouse-holes,” that were used for vessel head inspections. In pressurized water reactors such as Davis-Besse, the reactor vessel, the vessel head, the nozzles, and other equipment used to ensure a continuous supply of pressurized water in the reactor vessel are collectively referred to as the reactor coolant pressure boundary. (See fig. 2.) To better control the nuclear reaction at nuclear power plants, boron in the form of boric acid crystals is dissolved in the cooling water contained within the reactor vessel and pressure boundary. Boric acid, under certain conditions, can cause corrosion of carbon steel. For about 3 decades, NRC and the nuclear power industry have known that boric acid had the potential to corrode reactor components.
In general, if leakage occurs from the reactor coolant system, the escaping coolant will flash to steam and leave behind a concentration of impurities, including noncorrosive dry boric acid crystals. However, under certain conditions, the coolant will not flash to steam, and the boric acid will remain in a liquid state where it can cause extensive and rapid degradation of any carbon steel components it contacts. Such extensive degradation, in both domestic and foreign pressurized water reactor plants, has been well documented and led NRC to issue a generic letter in 1988 requesting information from pressurized water reactor licensees to ensure they had implemented programs to control boric acid corrosion. NRC was primarily concerned that boric acid corrosion could compromise the reactor coolant pressure boundary. This concern also led NRC to develop a procedure for inspecting licensees’ boric acid corrosion control programs and led the Electric Power Research Institute to issue guidance on boric acid corrosion control. NRC and the nuclear power industry have also known that nozzles made of alloy 600, used in several areas within nuclear power plants, were prone to cracking. Cracking became an increasingly topical issue as the nuclear power plant fleet aged. In 1986, operators at domestic and foreign pressurized water reactors began reporting leaks in various types of alloy 600 nozzles. In 1989, after leakage was detected at a domestic plant, NRC identified the cause of the leakage as cracking due to primary water stress corrosion. However, NRC concluded that the cracking was not an immediate safety concern for a few reasons. For example, the cracks had a low growth rate, were in a material with an extremely high flaw tolerance and, accordingly, were unlikely to spread. Also, the cracks were axial—that is, they ran the length of the nozzle rather than its circumference.
NRC and the nuclear power industry were more concerned that circumferential cracks could result in broken or snapped nozzles. NRC did, however, issue a generic information notice in 1990 to inform the industry of alloy 600 cracking. Through the early 1990s, NRC, the Nuclear Energy Institute, and others continued to monitor alloy 600 cracking. In 1997, continued concern over cracking led NRC to issue a generic letter to pressurized water reactor licensees requesting information on their plans to monitor and manage cracking in vessel head penetration nozzles as well as to examine these nozzles. In the spring of 2001, licensee inspections led to the discovery of large circumferential cracking in several vessel head penetration nozzles at the Oconee Nuclear Station, in South Carolina. As a result of the discovery, the nuclear power industry and NRC categorized the 69 operating pressurized water reactors in the United States into different groups on the basis of (1) whether cracking had already been found and (2) how similar they were to Oconee in terms of the amount of time and the temperature at which the reactors had operated. The industry had developed information indicating that greater operating time and temperature were related to cracking. In total, five reactors at three locations were categorized as having already identified cracking, while seven reactors at five locations were categorized as being highly susceptible, given their similarity to Oconee. In August 2001, NRC issued a bulletin requesting that licensees of these reactors provide, within 30 days, information on their plans for conducting nozzle inspections before December 31, 2001. 
In lieu of this information, NRC stated that licensees could provide the agency with a reasoned basis for their conclusions that their reactor vessel pressure boundaries would continue to meet regulatory requirements for ensuring the structural integrity of the reactor coolant pressure boundary until the licensees conducted their inspections. NRC used a bulletin, as opposed to a generic letter, to request this information because cracking was considered a significant and urgent issue. All of the licensees of the highly susceptible reactors, except Davis-Besse and D.C. Cook reactor unit 2, provided NRC with plans for conducting inspections by December 31, 2001. In September 2001, FirstEnergy proposed conducting the requested inspection in April 2002, following its planned March 31, 2002, shutdown to replace fuel. In making this proposal, FirstEnergy contended that the reactor coolant pressure boundary at Davis-Besse met and would continue to meet regulatory requirements until its inspection. NRC and FirstEnergy exchanged information throughout the fall of 2001 regarding when FirstEnergy would conduct the inspection at Davis-Besse. NRC drafted an enforcement order that would have shut down Davis-Besse by December 2001 for the requested inspection in the event that FirstEnergy could not provide an adequate justification for safe operation beyond December 31, 2001, but ultimately compromised on a mid-February 2002 shutdown date. NRC, in deciding when FirstEnergy had to shut down Davis-Besse for the inspection, used a risk-informed decision-making process, including probabilistic risk assessment (PRA), to conclude that the risk that Davis-Besse would have an accident in the interim was relatively low. PRA is an analytical tool for estimating the probability that a potential accident might occur by examining how physical structures, systems, and components, along with employees, work together to ensure plant safety.
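To give a sense of the kind of calculation a PRA involves, the sketch below combines an initiating-event frequency with fault-tree logic for the failure of mitigating systems. It is purely illustrative: all system names, frequencies, and failure probabilities are invented for the example and are not the values NRC used in its Davis-Besse analysis.

```python
# Illustrative probabilistic risk assessment (PRA) fragment.
# All numbers and system names are hypothetical; they are NOT the
# values NRC used for Davis-Besse.

def and_gate(*probs):
    """Gate fails only if ALL inputs fail (independent failures)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """Gate fails if ANY input fails (independent failures)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Hypothetical per-demand failure probabilities.
pump_a = 1e-3          # emergency coolant pump train A fails to start
pump_b = 1e-3          # redundant pump train B fails to start
operator_error = 5e-2  # operators fail to initiate cooling in time

# Coolant injection fails if both redundant pump trains fail,
# or if the operators fail to act.
injection_fails = or_gate(and_gate(pump_a, pump_b), operator_error)

# Hypothetical initiating-event frequency (per reactor-year) for a
# loss-of-coolant accident through a degraded pressure boundary.
loca_frequency = 1e-4

# Core damage frequency = initiator frequency x conditional
# probability that mitigation fails.
core_damage_frequency = loca_frequency * injection_fails
print(f"{core_damage_frequency:.2e} per reactor-year")
```

An actual PRA models hundreds of such event sequences and gates; the point of the sketch is only that the bottom-line risk estimate is the product of how often an initiating event occurs and how likely the plant’s layered defenses are to fail when it does.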
Following the mid-February 2002 shutdown and in the course of its inspection in March 2002, FirstEnergy removed about 900 pounds of boric acid crystals and powder from the reactor vessel head, and subsequently discovered three cracked nozzles. The number of nozzles that had cracked, as well as the extent of cracking, was consistent with analyses that NRC staff had conducted prior to the shutdown. However, in examining the extent of cracking, FirstEnergy also discovered that corrosion had caused a pineapple-sized cavity in the reactor vessel head. (See figs. 3 and 4.) After this discovery, NRC directed FirstEnergy to, among other things, determine the root cause of the corrosion and obtain NRC approval before restarting Davis-Besse. NRC also dispatched an augmented inspection team consisting of NRC resident, regional, and headquarters officials. The inspection team concluded that the cavity was caused by boric acid corrosion from leaks through the control rod drive mechanism nozzles in the reactor vessel head. Primary water stress corrosion cracking of the nozzles caused through-wall cracks, which led to the leakage and eventual corrosion of the vessel head. NRC’s inspection team also concluded, among other things, that this corrosion had gone undetected for an extended period of time—at least 4 years—and significantly compromised the plant’s safety margins. As of May 2004, NRC had not yet completed other analyses, including how long Davis-Besse could have continued to operate with the corrosion it had experienced before a vessel head loss-of-coolant accident would have occurred. However, on May 4, 2004, NRC released preliminary results of its analysis of the vessel head and cracked cladding. Based on its analysis of conditions that existed on February 16, 2002, NRC estimated that Davis-Besse could have operated for another 2 to 13 months without the vessel head failing. 
However, the agency cautioned that this estimate was based on several uncertainties associated with the complex network of cracks on the cladding and the lack of knowledge about corrosion and cracking rates. NRC plans to use these data in preparing its preliminary analysis of how the events at Davis-Besse could have led to core damage and the likelihood that they would have done so. NRC plans to complete this preliminary analysis in the summer of 2004. NRC also established a special oversight panel to (1) coordinate NRC’s efforts to assess FirstEnergy’s performance problems that resulted in the corrosion damage, (2) monitor Davis-Besse’s corrective actions, and (3) evaluate the plant’s readiness to resume operations. The panel, which is referred to as the Davis-Besse Oversight Panel, comprises officials from NRC’s Region III office in Lisle, Illinois; NRC headquarters; and the resident inspector office at Davis-Besse. In addition to overseeing FirstEnergy’s performance during the shutdown and through restart of Davis-Besse, the panel holds public meetings in Oak Harbor, Ohio, where the plant is located, and nearby Port Clinton, Ohio, to inform the public about its oversight of Davis-Besse’s restart efforts and its views on the adequacy of these efforts. The panel developed a checklist of issues that FirstEnergy had to resolve prior to restarting: (1) replacing the vessel head and ensuring the adequacy of other equipment important for safety, (2) correcting FirstEnergy programs that led to the corrosion, and (3) ensuring FirstEnergy’s readiness to restart. To restart the plant, FirstEnergy, among other things, removed the damaged reactor vessel head, purchased and installed a new head, replaced management at the plant, and took steps to improve key programs that should have prevented or detected the corrosion. As of March 2004, when NRC gave its approval for Davis-Besse to resume operations, the shutdown and preparations for restart had cost FirstEnergy approximately $640 million.
In addition, NRC established a task force to evaluate its regulatory processes for assuring reactor pressure vessel head integrity and to identify and recommend areas for improvement that may be applicable to either NRC or the nuclear power industry. The task force’s report, which was issued in September 2002, contains 51 recommendations aimed primarily at improving NRC’s process for inspecting and overseeing licensees, communicating with industry, and identifying potential emerging technical issues that could impact plant safety. NRC developed an action plan to implement the report’s recommendations.

NRC’s Actions to Oversee Davis-Besse Did Not Provide an Accurate Assessment of Safety at the Plant

NRC’s inspections and assessments of FirstEnergy’s operations should have but did not provide the agency with an accurate understanding of safety conditions at Davis-Besse, and thus NRC failed to identify or prevent the vessel head corrosion. Some NRC inspectors were aware of the indications of corrosion and leakage that could have alerted NRC to corrosion problems at the plant, but they did not have the knowledge to recognize the significance of this information. These problems were compounded by NRC’s assessments of FirstEnergy that led the agency to believe FirstEnergy was a good performer and could or would successfully resolve problems before they became significant safety issues. More broadly, NRC had a range of information that could have helped it identify and prevent the incident at Davis-Besse but did not effectively integrate this information into its oversight.

Several Factors Contributed to the Inadequacy of NRC’s Inspections for Determining Plant Conditions

Three separate, but related, NRC inspection program factors contributed to the development of the corrosion problems at Davis-Besse. First, resident inspectors did not know that the boric acid, rust, and unidentified leakage indicated that the reactor vessel head might be degrading.
Second, these inspectors thought they understood the cause of the indications, based on licensee actions to address them. As a result, resident inspectors, as well as regional and headquarters officials, did not fully communicate information on the indications or decide how to address them, and took no action. Third, because the significance of the symptoms was not fully recognized, NRC did not direct sufficient inspector resources to aggressively investigate the indicators. NRC might have taken a different approach to the Davis-Besse situation if its program to identify emerging issues important to safety had pursued earlier concerns about boric acid corrosion and cracking and recognized how they could affect safety.

Inspectors Did Not Know Safety Significance of Observed Problems

NRC limits the amount of unidentified leakage from the reactor coolant system to no more than 1 gallon per minute. When this limit is exceeded, NRC requires that licensees identify and correct any sources of unidentified leakage. NRC also prohibits any leakage from the reactor coolant pressure boundary, of which the reactor vessel is a key component. Such leakage is prohibited because the pressure boundary is key to maintaining adequate coolant around the reactor fuel and thus protects public health and safety. Because of this, NRC’s technical specification states that licensees are to monitor reactor coolant leakage and shut down within 36 hours if leakage is found in the pressure boundary. In the years leading up to FirstEnergy’s March 2002 discovery that Davis-Besse’s vessel head had corroded extensively, NRC had several indications of potential leakage problems. First, NRC knew that the rates of leakage in the reactor coolant system had increased. Between 1995 and mid-1998, the unidentified leakage rate was about 0.06 gallon per minute or less, according to FirstEnergy’s monitoring.
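The leakage limits just described can be sketched as a simple decision rule. This is an illustrative simplification, not NRC's actual monitoring procedure: the 1-gallon-per-minute limit, the 36-hour shutdown requirement, and the 0.06-gallon baseline come from the report, while the twice-baseline investigation flag is a hypothetical threshold added for the example.

```python
# Illustrative check of reactor coolant system leakage against the
# limits described in the report. The baseline-comparison flag is a
# hypothetical addition, not part of NRC's technical specifications.

TECH_SPEC_LIMIT_GPM = 1.0       # NRC limit on unidentified leakage
HISTORICAL_BASELINE_GPM = 0.06  # Davis-Besse's typical pre-1998 rate

def assess_leakage(unidentified_gpm, pressure_boundary_leak=False,
                   baseline_gpm=HISTORICAL_BASELINE_GPM):
    """Return a coarse characterization of a reactor coolant leakage report."""
    if pressure_boundary_leak:
        # Any pressure boundary leakage requires shutdown within 36 hours.
        return "pressure boundary leakage: shut down within 36 hours"
    if unidentified_gpm > TECH_SPEC_LIMIT_GPM:
        return "exceeds limit: identify and correct the source"
    if unidentified_gpm > 2 * baseline_gpm:  # hypothetical trend flag
        return "within limit but well above baseline: investigate"
    return "consistent with historical baseline"

# Rates reported for Davis-Besse before and after the mid-1999 repair.
for rate in (0.06, 0.15, 0.25, 0.8):
    print(rate, "->", assess_leakage(rate))
```

The point of the sketch is that every post-1999 rate at Davis-Besse fell under the regulatory limit yet well above the plant's own baseline; a check against the limit alone, which is what the oversight described below amounted to, never flags the trend.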
In mid-1998, the unidentified reactor coolant system leakage rate increased significantly—to as much as 0.8 gallon per minute. The elevated leakage rate was dominated by a known problem with a leaking relief valve on the reactor coolant system pressurizer tank, which masked the ongoing leak on the reactor pressure vessel head. However, the elevated leak rate should have raised concerns. To investigate this leakage, as well as to repair other equipment, FirstEnergy shut down the plant in mid-1999. It then identified a faulty relief valve that accounted for much of the leakage and repaired the valve. However, after restarting Davis-Besse, the unidentified leakage rate remained significantly higher than the historical average. Specifically, the unidentified leakage rate varied between 0.15 and 0.25 gallon per minute as opposed to the historical low of about 0.06 gallon or less. While NRC was aware that the rate was higher than before, NRC did not aggressively pursue the difference because the rate was well below NRC’s limit of no more than 1 gallon per minute, and thus the leak was not viewed as being a significant safety concern. Following the repair in 1999, NRC’s inspection report concluded that FirstEnergy’s efforts to reduce the leak rate during the outage were effective. Second, NRC was aware of increased levels of boric acid in the containment building—an indication that components containing reactor coolant were leaking. So much boric acid was being deposited that FirstEnergy officials had to repeatedly clean the containment air cooling system and radiation monitor filters. For example, before 1998, the containment air coolers seldom needed cleaning, but FirstEnergy had to clean them 28 times between late 1998 and May 2001. 
Between May 2001 and the mid-February 2002 shutdown, the containment air coolers were not cleaned, but at shutdown, FirstEnergy removed fifteen 5-gallon buckets of boric acid from the coolers—which is almost as much as was found on the reactor pressure vessel head. Rather than seeing these repeated cleanings as an indication of a problem that needed to be addressed, FirstEnergy made cleaning the coolers a routine maintenance activity, which NRC did not consider significant enough to require additional inspections. Furthermore, the radiation monitors, used to sample air from the containment building to detect radiation, typically required new filters every month. However, from 1998 to 2002, these monitors became clogged and inoperable hundreds of times because of boric acid, despite FirstEnergy’s efforts to fix the problem. Third, NRC was aware that FirstEnergy found rust in the containment building. The radiation monitor filters had accumulated dark-colored iron oxide particles—a product of carbon steel corrosion—that were likely to have resulted from a very small steam leak. NRC inspection reports during the summer and fall of 1999 noted these indications and, while recognizing FirstEnergy’s aggressive attempts to identify the reasons for the phenomenon, concluded that they were a “distraction to plant personnel.” Several NRC inspection reports noted indications of leakage, boric acid, and rust before the agency adopted its new Reactor Oversight Process in 2000, but because the leakage was within NRC’s technical specifications and NRC officials thought that the licensee understood and would fix the problem, NRC did not aggressively pursue the indications. NRC’s new oversight process, implemented in the spring of 2000, limited the issues that could be discussed in NRC inspection reports to those that the agency considers to have more than minor significance.
Because the leakage rates were below NRC’s limits, NRC’s inspection reports following the implementation of NRC’s new oversight process did not include any discussion of these problems at the plant. Fourth, NRC was aware that FirstEnergy found rust on the Davis-Besse reactor vessel head, but it did not recognize its significance. For instance, during the 2000 refueling outage, a FirstEnergy official said he showed one of the two NRC resident inspectors a report that included photographs of rust-colored boric acid on the vessel head. (See fig. 5.) According to this resident inspector, he did not recall seeing the report or photographs but had no reason to doubt the FirstEnergy official’s statement. Regardless, he stated that had he seen the photographs, he would not have considered the condition to be significant at the time. He said that he did not know what the rust and boric acid might have indicated, and he assumed that FirstEnergy would take care of the vessel head before restarting. The second resident inspector said he reviewed all such reports at Davis-Besse but did not recall seeing the photographs or this particular report. He stated that it was quite possible that he had read the report, but because the licensee had a plan to clean the vessel head, he would have concluded that the licensee would correct the matter before plant restart. However, FirstEnergy did not accomplish this, even though work orders and subsequent licensee reports indicated that this was done. According to the NRC resident inspector and NRC regional officials, because of the large number of licensee activities that occur during a refueling outage, NRC inspectors do not have the time to investigate or follow up on every issue, particularly when the issue is not viewed as being important to safety.
While the resident inspector informed regional officials about conditions at Davis-Besse, the regional office did not direct more inspection resources to the plant or instruct the resident inspector to conduct more focused oversight. Some NRC regional officials were aware of indications of boric acid corrosion at the plant; others were not. According to the Office of the Inspector General's investigation and 2003 report on Davis-Besse, the NRC regional branch chief—who supervised the staff responsible for overseeing FirstEnergy's vessel head inspection activities during the 2000 refueling outage—said that he was unaware of the boric acid leakage issues at Davis-Besse, including their effects on the containment air coolers and the radiation monitor filters. Had his staff been requested to look at these specific issues, he might have directed inspection resources to that area. (App. I provides a time line showing significant events of interest.)

NRC Did Not Fully Communicate Indications

NRC was not fully aware of the indications of a potential problem at Davis-Besse because NRC's process for transmitting information from resident inspectors to regional offices and headquarters did not ensure that information was fully communicated, evaluated, or used. NRC staff communicated information about plant operations through inspection reports, licensee assessments, and daily conference calls that included resident, regional, and headquarters officials. According to regional officials, information that is not considered important is not routinely communicated to NRC management and technical specialists. For example, while the resident inspectors at Davis-Besse knew all of the indications of leakage, and there was some level of knowledge about these indications at the regional office level, the knowledge was not sufficiently widespread within NRC to alert a technical specialist who might have recognized their safety significance.
According to NRC Region III officials, the region uses an informal means—memorandums sent to other regions and headquarters—of communicating information identified at plants that it considers to be important to safety. However, because the indications at Davis-Besse were not considered important, officials did not transmit this information to headquarters. Further, because the process is informal, these officials said they did not know whether—and if so, how—other NRC regions or headquarters used this information. Similarly, NRC officials said that NRC headquarters had no systematic process for communicating, in a timely manner, information that had not yet risen to a relatively high level of concern within the agency—such as information on boric acid corrosion, cracking, and small amounts of unidentified leakage—to its regions or on-site inspectors. For example, the regional inspector who oversaw FirstEnergy's activities during the 2000 refueling outage, including the reactor vessel head inspection, stated that he was not aware of NRC's generic bulletins and letters pertaining to boric acid and corrosion, even though NRC issues only a few of these bulletins and generic letters each year. In addition, according to NRC regional officials and the resident inspector at Davis-Besse, inspectors have little time to review the technical reports on emerging safety issues that NRC compiles because the reports are too lengthy and detailed. Ineffective communication, both within the region and between NRC headquarters and the region, was a primary factor cited by NRC's Office of the Inspector General in its investigation of NRC's oversight of Davis-Besse boric acid leakage and corrosion. For example, it found that ineffective communication resulted in senior regional management being largely unaware of repeated reports of boric acid leakage at Davis-Besse.
It also found that headquarters, in communications with the regions, did not emphasize the issues discussed in its generic letters or bulletins on boric acid corrosion or cracking. NRC programs for informing its inspectors about issues that can reduce safety at nuclear power plants were not effective. As a result, NRC inspectors did not recognize the significance of the indications at Davis-Besse, fully communicate information about the indications, or spend additional effort to follow up on the indications.

Resource Constraints Affected NRC Oversight

NRC also did not focus on the indications that the vessel head was corroding because of several staff constraints. Region III was directing resources to other plants that had experienced problems throughout the region, and these plants thus were the subject of increased regulatory oversight. For example, during the refueling outages in 1998 and 2000, while NRC oversaw FirstEnergy's inspection of the reactor vessel head, the region lacked senior project engineers to devote to Davis-Besse. A vacancy existed for a senior project engineer responsible for Davis-Besse from June 1997 until June 1998, except for a one-month period, and from September 1999 until May 2000, which resulted in fewer inspection hours at the facility than would have been normal. Other regional staff were also occupied with other plants in the region that were having difficulties, and NRC had unfilled vacancies for resident and regional inspector positions that strained resources for overseeing Davis-Besse. Even if the inspector positions had been filled, it is not certain that the inspectors would have aggressively followed up on any of the indications. According to our discussions with resident and regional inspectors and our on-site review of plant activities, because nuclear power plants are so large, with many physical structures, systems, and components, an inspector could miss problems that were potentially significant for safety.
Licensees typically prepare several hundred reports per month for identifying and resolving problems, and NRC inspectors have only a limited amount of time to follow up on these licensee reports. Consequently, NRC selects and oversees the most safety-significant structures, systems, and components.

NRC's Assessment Process Did Not Indicate Deteriorating Performance

Under NRC's Reactor Oversight Process, NRC assesses licensees' performance using two distinct types of information: (1) NRC's inspection results and (2) performance indicators reported by the licensees. These indicators, which reflect various aspects of a plant's operations, include data on, for example, the failure or unavailability of certain important operating systems, the number of unplanned power changes, and the amount of reactor coolant system leakage. NRC evaluates both the inspection results and the performance indicators to arrive at licensee assessments, which it then color-codes to reflect their safety significance. Green assessments indicate that performance is acceptable, and thus connote a very low risk significance and impact on safety. White, yellow, and red assessments each represent a greater degree of safety significance. After NRC adopted its Reactor Oversight Process in April 2000, FirstEnergy never received anything but green designations for its operations at Davis-Besse and was viewed by NRC as a good performer until the 2002 discovery of the vessel head corrosion. Similarly, prior to adopting the Reactor Oversight Process, NRC consistently assessed FirstEnergy as generally being a good performer. NRC officials stated, however, that significant issues were identified and addressed as warranted throughout this period, such as when the agency took enforcement action in response to FirstEnergy's failure to properly repair important components in 1999—a failure caused by weaknesses in FirstEnergy's boric acid corrosion control program.
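At its core, this color-coded assessment scheme is a threshold mapping: an indicator value is compared against ascending cut points and assigned the corresponding significance band. The following sketch illustrates the idea; the indicator values and cut points are invented for illustration only and are not NRC's actual criteria.

```python
from bisect import bisect_right

# Colors in order of increasing safety significance under the Reactor
# Oversight Process: green connotes very low risk significance; white,
# yellow, and red each represent a greater degree of significance.
COLORS = ["green", "white", "yellow", "red"]

def classify(value, cutpoints):
    """Map a performance-indicator value to a color band.

    cutpoints are the ascending thresholds at which the band steps up
    (green -> white, white -> yellow, yellow -> red). The cut points
    passed by callers below are hypothetical.
    """
    return COLORS[bisect_right(cutpoints, value)]

# Hypothetical indicator with made-up cut points of 6, 10, and 25:
print(classify(3, [6, 10, 25]))   # a low value falls in the green band
print(classify(12, [6, 10, 25]))  # a higher value steps up to yellow
```

Because every indicator at Davis-Besse stayed below the first cut point, every assessment came back green, which is one reason the deteriorating conditions drew no extra scrutiny.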
Key Davis-Besse programs for ensuring the quality and safe operation of the plant's engineered structures, systems, and components include, for example, a corrective action program to ensure that problems at the plant that are relevant to safety are identified and resolved in a timely manner, an operating experience program to ensure that experiences or problems that occur are appropriately identified and analyzed to determine their significance and relevance to operations, and a plant modification program to ensure that modifications important to safety are implemented in a timely manner. As at other commercial nuclear power plants, NRC conducted routine, baseline inspections of Davis-Besse to determine the effectiveness of these programs. Reports documenting these inspections noted instances of boric acid leakage, corrosion, and deposits. However, between February 1997 and March 2000, the regional office's assessment of the licensee's performance addressed leakage in the reactor coolant system only once and never noted the other indications. Furthermore, Davis-Besse was not the subject of intense scrutiny in regional plant assessment meetings because plants perceived as good performers—such as Davis-Besse—received substantially less attention. From April 2000—when NRC's revised assessment process took effect—until the corrosion was discovered in March 2002, none of NRC's assessments of Davis-Besse's performance noted leakage or other indications of corrosion at the plant. As a result, NRC may have missed opportunities to identify weaknesses in the Davis-Besse programs intended to detect or prevent the corrosion.
After the corrosion was discovered, NRC analyzed the problems that led to the corrosion on the reactor vessel head and concluded that FirstEnergy's programs for overseeing safety at Davis-Besse were weak, as seen in the following examples:

- Davis-Besse's corrective action program did not result in timely or effective actions to prevent indications of leakage from reoccurring in the reactor coolant system. FirstEnergy officials did not always enter equipment problems into the corrective action program because individuals who had identified the problem were often responsible for resolving it.

- For over a decade, FirstEnergy had delayed plant modifications to its service structure platform, primarily because of cost. These modifications would have improved its ability to inspect the reactor vessel head nozzles. As a result, FirstEnergy could conduct only limited visual inspections and cleaning of the reactor pressure vessel head through the small "mouse-holes" that perforated the service structure.

NRC was also unaware of the extent to which various aspects of FirstEnergy's safety culture had degraded—that is, FirstEnergy's organization and performance related to ensuring safety at Davis-Besse. This degradation had allowed the incident to occur with no forewarning because NRC's inspections and performance indicators do not directly assess safety culture. Safety culture is a group of characteristics and attitudes within an organization that establish, as an overriding priority, that issues affecting nuclear plant safety receive the attention their significance warrants. Following FirstEnergy's March 2002 discovery, NRC found numerous indications that FirstEnergy emphasized production over plant safety. First, Davis-Besse routinely restarted the plant following an outage, even though reactor pressure vessel valves and control rod drive mechanisms leaked.
Second, staff was unable to remove all of the boric acid deposits from the reactor pressure vessel head because FirstEnergy's schedule to restart the plant dictated the amount of work that could be performed. Third, FirstEnergy management was willing to accept degraded equipment, which indicated a lack of commitment to resolve issues that could potentially compromise safety. Fourth, Davis-Besse's program that was intended to ensure that employees feel free to raise safety concerns without fear of retaliation had several weaknesses. For example, in one instance, a worker assigned to repair the containment air conditioner was not provided a respirator in spite of his concerns that he would inhale boric acid residue. According to NRC's lessons-learned task force report, NRC was not aware of weaknesses in this program because its inspections did not adequately assess it. Given that FirstEnergy concluded that one of the causes of the Davis-Besse incident was human performance and management failures, the panel overseeing FirstEnergy's efforts to restart Davis-Besse requested that FirstEnergy assess its safety culture before allowing the plant to restart. To oversee FirstEnergy's efforts to improve its safety culture, NRC (1) reviewed whether FirstEnergy had adequately identified all of the root causes for management and human performance failures at Davis-Besse, (2) assessed whether FirstEnergy had identified and implemented appropriate corrective actions to resolve these failures, and (3) assessed whether FirstEnergy's corrective actions were effective. As late as February 2004, NRC had concerns about whether FirstEnergy's actions would be adequate in the long term. As a result, the Davis-Besse safety culture was one of the issues contributing to the delay in restarting the plant. In March 2004, NRC's panel concluded that FirstEnergy's efforts to improve its safety culture were sufficient to allow the plant to restart.
In doing so, however, the panel imposed a condition, according to NRC officials, that FirstEnergy conduct an independent assessment of the safety culture at Davis-Besse annually over the next 5 years.

NRC Did Not Effectively Incorporate Long-Standing Knowledge about Corrosion, Nozzle Cracking, and Leak Detection into Its Oversight

NRC has been aware of boric acid corrosion and its potential to affect safety since at least 1979. It issued several notices to the nuclear power industry about boric acid corrosion and, specifically, the potential for it to degrade the reactor coolant pressure boundary. In 1987, two licensees found significant corrosion on their reactor pressure vessel heads, which heightened NRC's concern. A subsequent industry study concluded that concentrated solutions of boric acid could result in unacceptably high corrosion rates—up to 4 inches per year—when primary coolant leaks onto surfaces and concentrates at temperatures found on the surface of the reactor vessel. After considering this information and several more instances of boric acid corrosion at plants, NRC issued a generic letter in 1988 requesting licensees to implement boric acid corrosion control programs. In 1990, NRC visited Davis-Besse to assess the adequacy of the plant's boric acid corrosion control program. At that time, NRC concluded that the program was acceptable. However, in 1999, NRC became aware that FirstEnergy's boric acid corrosion control program was inadequate because boric acid had corroded several bolts on a valve, and NRC issued a violation. As a result of the violation, FirstEnergy agreed to review its boric acid corrosion procedures and enhance its program. NRC inspectors evaluated FirstEnergy's completed and planned actions to improve the boric acid corrosion control program and found them to be adequate.
According to NRC officials, they never inspected the remaining actions, assuming instead that the planned actions had been implemented effectively. In 2000, NRC adopted its new Reactor Oversight Process and discontinued its inspection procedure for plants' corrosion control programs because these inspections had rarely been conducted due to higher priorities. Thus, NRC had no reliable or routine way to ensure that the nuclear power industry fully implemented boric acid corrosion control programs. NRC also did not routinely review operating experiences at reactors, both in the United States and abroad, to keep abreast of boric acid developments and determine the need to emphasize this problem. Indeed, NRC did not fully understand the circumstances in which boric acid would result in corrosion, rather than flash to steam. Similarly, NRC did not know the rate at which carbon steel would corrode under different conditions. This lack of knowledge may be linked to shortcomings in its program to review operating experiences at reactors, which could have been exacerbated by the 1999 elimination of the office specifically responsible for reviewing operating experiences. This office was responsible for, among other things, (1) coordinating operational data collection, (2) systematically analyzing and evaluating operational experience, (3) providing feedback on operational experience to improve safety, (4) assessing the effectiveness of the agencywide program, and (5) acting as a focal point for interaction with outside organizations on issues pertaining to operational safety data analysis and evaluation. According to NRC officials who had overseen Davis-Besse at the time of the incident, they would not have suspected the reactor vessel head or cracked head penetration nozzles as the source of the filter clogging and unidentified leakage because they had not been informed that these could be potential problems.
According to these officials, the vessel head was "not on the radar screen." With regard to nozzle cracking, NRC had been aware for more than two decades of the potential for nozzles and other components made of alloy 600 to crack. While cracks were found at nuclear power plants, NRC considered their safety significance to be low because the cracks were not developing rapidly. In contrast, other countries considered the safety significance of such cracks to be much higher. For example, concern over alloy 600 cracking led France, as a preventive measure, to institute requirements for an extensive nondestructive examination program for vessel head penetration nozzles, including the removal of insulation, during every fuel outage. When any indications of cracking were observed, even more frequent inspections were required; because of the economic burden of these repeated inspections, French utilities generally replaced vessel heads once indications were found. The effort to replace the vessel heads is still under way. Japan replaced those vessel heads whose nozzles it considered most susceptible to cracking, even though no cracks had yet been found. Both France and Sweden also installed enhanced leakage monitoring systems to detect leaks early. However, according to NRC, such systems cannot detect the small amounts of leakage that may be typical from cracked nozzles. NRC recognized that an integrated, long-term program, including periodic inspections and monitoring of vessel heads to check for nozzle cracking, was necessary. In 1997, it issued a generic letter that summarized NRC's efforts to address cracking of control rod drive mechanism nozzles and requested information on licensees' plans to inspect nozzles at their reactors. More specifically, this letter asked licensees to provide NRC with descriptions of their inspections of these nozzles and any plans for enhanced inspections to detect cracks.
At that time, NRC was planning to review this information to determine if enhanced licensee inspections were warranted. Based on its review of this information, NRC concluded that the current inspection program was sufficient. As a result, between 1998 and 2001, NRC did not issue or solicit additional information on nozzle cracking or assess its requirements for inspecting reactor vessels to determine whether they were sufficient to detect cracks. At Davis-Besse, NRC also did not determine if FirstEnergy had plans or was implementing any plans for enhanced nozzle inspections, as noted in the 1997 generic letter. NRC took no further action until the cracks were found in 2001 at the Oconee plant, in South Carolina. NRC attributed its lack of focus on nozzle cracking, in part, to the agency’s inability to effectively review, assess, and follow up on industry operating experience events. Furthermore, as with boric acid corrosion, NRC did not obtain or analyze any new data about cracking that would have supported making changes in either its regulations or inspections to better identify or prevent corrosion on the vessel head at Davis-Besse. NRC’s technical specifications regarding allowable leakage rates also contributed to the corrosion at Davis-Besse because the amount of leakage that can cause extensive corrosion can be significantly less than the level that NRC’s specifications allow. According to NRC officials, NRC’s requirements, established in 1973, were based on the best available technology at that time. The task of measuring identified and unidentified leakage from the reactor coolant system is not precise. It requires licensees to estimate the amount of coolant that the reactor is supposed to contain and identify any difference in coolant levels. 
They then have to account for the estimated difference in the actual amount of coolant to arrive at a leakage rate; to do this, they identify all sources and amounts of leakage by, among other things, measuring the amount of water contained in various sump collection systems. If these sources do not account for the difference, licensees know they have an unidentified source of leakage. This estimate can vary significantly from day to day between negative and positive numbers. According to analyses that FirstEnergy conducted after it identified the corrosion in March 2002, the leakage rates from the nozzle cracks were significantly below NRC's reactor coolant system unidentified leakage limit of 1 gallon per minute. Specifically, the leakage from the nozzle around which the vessel head corrosion occurred was predicted to be 0.025 gallon per minute. If such small leakage can result in such extensive corrosion, identifying if and where such leakage occurs is important. NRC staff recognized as early as 1993 that it would be prudent for the nuclear power industry to consider implementing an enhanced method for detecting small leaks during plant operation, but NRC did not require this action, and the industry has not taken steps to do so. Furthermore, NRC has not consistently enforced its requirement that no reactor coolant pressure boundary leakage occur. As a result, the NRC Davis-Besse task force concluded that inconsistent enforcement may have reinforced a belief that alloy 600 nozzle leakage was not actually or potentially a safety-significant issue.

NRC's Process for Deciding Whether to Allow a Delayed Davis-Besse Shutdown Lacked Credibility

Although FirstEnergy operated Davis-Besse without incident until shutting it down in February 2002, certain aspects of NRC's deliberations allowing the delayed shutdown raise questions about the credibility of the agency's decision making, if not about the Davis-Besse decision itself.
NRC does not have specific guidance for deciding on plant shutdowns. Instead, agency officials turned to guidance developed for a different purpose—reviewing requests to amend license operating conditions—and even then did not always adhere to this guidance. In addition, NRC did not document its decision-making process, as called for by its guidance, and its letter to FirstEnergy to lay out the basis for the decision—sent a year after the decision—did not fully explain the decision. NRC's lack of guidance, coupled with the lack of documentation, precludes us from independently judging whether NRC's decision was reasonable. Finally, some NRC officials stated that the shutdown decision was based, in part, on the agency's probabilistic risk assessment (PRA) calculations of the risk that Davis-Besse would pose if it delayed its shutdown and inspection. However, as noted by our consultants, the calculations were flawed, and NRC's decision makers did not always follow the agency's guidance for developing and using such calculations.

NRC Did Not Have Specific Guidance for Deciding on Plant Shutdowns

NRC believed that Davis-Besse could have posed a potential safety risk because it was, in all likelihood, failing to comply with NRC's technical specification that no leakage occur in the reactor coolant pressure boundary. Its belief was based on the following indicators of probable leakage:

- All six of the other reactors manufactured by the same company as Davis-Besse's reactor had cracked nozzles and identified leakage.

- Three of these six reactors had identified circumferential cracking.

- FirstEnergy had not performed a recent visual examination of all of its nozzles.

Furthermore, a FirstEnergy manager agreed that cracks and leakage were likely. NRC has the authority to shut down a plant when it is clear that the plant is violating important safety requirements and poses a risk to public health and safety.
Thus, if a licensee is not complying with technical specifications, such as the specification allowing no reactor coolant pressure boundary leakage, NRC can order a plant to shut down. However, NRC decided that it could not require Davis-Besse to shut down on the basis of other plants' cracked nozzles and identified leakage or the manager's acknowledgement of a probable leak. Instead, it believed it needed more direct, or absolute, proof of a leak to order a shutdown. This standard of proof has been questioned. According to the Union of Concerned Scientists, for example, if NRC needed irrefutable proof in every case of suspected problems, the agency would probably never issue a shutdown order. In effect, in this case NRC created a Catch-22: It needed irrefutable proof to order a shutdown but could not get this proof without shutting down the plant and requiring that the reactor be inspected. Despite NRC's responsibility for ensuring that the public is adequately protected from accidents at commercial nuclear power plants, NRC does not have specific guidance for shutting down a plant when the plant may pose a risk to public health and safety, even though it may be complying with NRC requirements. It also has no specific guidance or standards for the quality of evidence needed to determine that a plant may pose an undue risk. Lacking direct or absolute proof of leakage at Davis-Besse, NRC instead drafted a shutdown order on the basis that a potentially hazardous condition may have existed at the plant. NRC had no guidance for developing such a shutdown order, and therefore, it used its guidance for reviewing license amendment requests. NRC officials recognized that this guidance was not specifically designed to determine whether NRC should shut down a power plant such as Davis-Besse.
However, NRC officials stated that this guidance was the best available for deciding on a shutdown because, although the review was not to amend a license, the factors that NRC needed to consider in making the decision and that were contained in the guidance were applicable to the Davis-Besse situation. To use its guidance for reviewing license amendment requests, NRC first determined that the situation at Davis-Besse posed a special circumstance because new information revealed a substantially greater potential for a known hazard to occur, even if Davis-Besse was in compliance with the technical specification for leakage from the reactor coolant pressure boundary. The special circumstance stemmed from NRC's determination that requirements for conducting vessel head inspections were not sufficient to detect nozzle cracking and, thus, small leaks. According to NRC officials, this determination allowed NRC to use its guidance for reviewing license amendment requests when deciding whether to order a shutdown.

The Extent of NRC's Reliance on License Amendment Guidance Is Not Clear

Under NRC's license amendment guidance, NRC considers how the license change affects risk, but not how it has previously assessed licensee performance, such as whether the licensee was viewed as a good performer. With regard to the Davis-Besse decision, the guidance directed NRC to determine whether the plant would comply with five NRC safety principles if it operated beyond December 2001 without inspecting the reactor vessel head. As applied to Davis-Besse, these principles were whether the plant would (1) continue to meet requirements for vessel head inspections, (2) maintain sufficient defense-in-depth, (3) maintain sufficient safety margins, (4) have little increase in the likelihood of a core damage accident, and (5) monitor the vessel head and nozzles.
The guidance, however, does not specify how to apply these safety principles, how NRC can demonstrate it has followed the principles and ensured they are met, or whether any one principle takes precedence over the others. The guidance also does not indicate what actions NRC or licensees should take if some or all of the principles are not met. In mid-September 2001, NRC staff concluded that Davis-Besse complied with the first safety principle but did not meet the remaining four. According to the staff, Davis-Besse did not meet three safety principles because the requirements for vessel head inspections were not adequate. Specifically, the requirements do not call for removing the insulation above the vessel head, which would allow all of the nozzles to be visually inspected. NRC therefore could not ensure that FirstEnergy was maintaining defense-in-depth and adequate safety margins or sufficiently monitoring the vessel head and nozzles. The staff believed that Davis-Besse did not meet the fourth safety principle because the risk estimate of core damage approached an unacceptable level and the estimate itself was highly uncertain. Between early October and the end of November 2001, NRC requested and received additional information from FirstEnergy regarding its risk estimate of core damage—its PRA estimate—and met with the company to determine the basis for the estimate. NRC was also developing its own risk estimate, although its numbers kept changing. At some point during this time, NRC staff also concluded that the first safety principle was probably not being met, although the basis for this conclusion is not known. At the end of November 2001, NRC contacted FirstEnergy and informed it that a shutdown order had been forwarded to the NRC commissioners and asked if FirstEnergy could take any actions that would persuade NRC not to issue the shutdown order.
The following day, FirstEnergy proposed measures to mitigate the potential for and consequences of an accident. These measures included, among other things, lowering the operating temperature from 605 degrees Fahrenheit to 598 degrees Fahrenheit to reduce the driving force for stress corrosion cracking on the nozzles, identifying a specific operator to initiate emergency cooling in response to an accident, and moving the scheduled refueling outage up from March 31, 2002, to no later than February 16, 2002. NRC staff discussed these measures, and NRC management asked the staff if they were concerned about extending Davis-Besse's operations until mid-February 2002. While some of the staff were concerned about continued operations, none indicated to NRC management that cracking in control rod drive mechanism nozzles was likely extensive enough to cause a nozzle to eject from the vessel head, thus making it unsafe to operate. NRC formally accepted FirstEnergy's compromise proposal within several days, thus abandoning its shutdown order.

NRC Did Not Fully Explain or Document the Basis for Its Decision

We could not fully assess NRC's basis for accepting FirstEnergy's proposal. NRC did not document its deliberations, even though its guidance requires that it do so. This documentation is to include the data, methods, and assessment criteria used; the basis for the decisions made; and essential correspondence sufficient to document the persons, places, and matters dealt with by NRC. Specifically, the guidance requires that the documentation contain sufficient detail to make possible a "proper scrutiny" of NRC decisions by authorized outside agencies and provide evidence of how basic decisions were formed, including oral decisions. NRC's guidance also states that NRC should document all important staff meetings.
In reviewing NRC’s documentation on the Davis-Besse decision, we found no evidence of an in-depth or formal analysis of how Davis-Besse’s proposed measures would affect the plant’s ability to satisfy the five safety principles. Thus, it is unclear whether the measures that FirstEnergy proposed met the safety principles contained in the guidance. However, several NRC officials stated that FirstEnergy’s proposed measures had no impact on plant operations or safety. For example, according to one NRC official, FirstEnergy’s proposal to reduce the operating temperature would have had little impact on safety because the small drop in operating temperature over a 7-week period would have had little effect on the growth rate of any cracks in a nozzle. As such, this official considered the measures “window dressing.” One proposed measure that NRC staff did consider to have a significant impact on risk was FirstEnergy’s commitment to dedicate an operator to manually turning on safety equipment in the event that a nozzle was ejected. After approving the delayed shutdown, NRC learned that FirstEnergy had not, in fact, planned to dedicate an operator to this task—rather, FirstEnergy planned to have an operator perform it in addition to other regularly assigned duties. According to an NRC official, once NRC decided not to issue a shutdown order for December 2001, NRC staff needed to discuss how the staff’s assessment of whether the five safety principles had been met had changed in the course of their deliberations. However, there was no evidence in the agency’s records that this discussion was held, and other key meetings, such as the one in which the agency made its decision to allow Davis-Besse to operate past December 31, 2001, were not documented. Without documentation, it is not clear what factors influenced NRC’s decision.
For example, according to the NRC Office of the Inspector General’s December 2002 report that examined the Davis-Besse incident, NRC’s decision was driven in large part by a desire to lessen the financial impact on FirstEnergy that would result from an early shutdown. While NRC disputed this finding, we found no evidence in the agency’s records to support or refute its position. In December 2001, when NRC informed FirstEnergy that it accepted the company’s proposed measures and the February 16, 2002, shutdown date, it also said that the company would receive NRC’s assessment in the near future. However, NRC did not provide the assessment until a full year later—in December 2002. In addition, the December 2002 assessment, which includes a four-page evaluation, does not fully explain how the safety principles were used or met—other than by stating that if the likelihood of nozzle failure were judged to be small, then adequate protection would be ensured. Even though NRC’s regulations regarding the reactor coolant pressure boundary dictate that the reactor have an extremely low probability of failing, NRC stated it did not believe that Davis-Besse needed to demonstrate strict conformance with this regulation. As evidence of the small likelihood of failure, NRC cited the small size of cracks found at other power plants, as well as its preliminary assessment of nozzle cracking, which projected crack growth rates. NRC concluded that 7 weeks of additional operation would not result in an appreciable increase in the size of the cracks. While NRC included its calculated estimates of the risk that Davis-Besse would pose, it did not detail how it calculated its estimates. NRC’s PRA Estimate Was Flawed and Its Use in Deciding to Delay the Shutdown Is Unclear In moving forward with its more risk-informed regulatory approach, NRC has established a policy to increase the use of PRA methods as a means to promote regulatory stability and efficiency. 
Using PRA methods, NRC and the nuclear power industry can estimate the likelihood that different accident scenarios at nuclear power plants will result in reactor core damage and a release of radioactive materials. For example, one of these accident scenarios begins with a “medium break” loss-of-coolant accident, in which the reactor coolant system is breached and a midsize—about 2- to 4-inch—hole is formed that allows coolant to escape from the reactor pressure boundary. Estimating the probability and consequences of such an accident takes into account key engineered safety system failure rates and human error probabilities, which determine how well the engineered systems could mitigate the accident and prevent a radioactive release from the plant. For Davis-Besse, NRC needed two estimates: one for the frequency of a nozzle ejecting and causing a loss-of-coolant accident and one for the probability that a loss-of-coolant accident would result in core damage. NRC first established an estimate, based partially on information provided by FirstEnergy, of the frequency with which a plant would develop a cracked nozzle that would initiate a medium break loss-of-coolant accident. NRC estimated this frequency at about 2×10⁻², or 1 chance in 50, per year. NRC then used an estimate, which FirstEnergy provided, of the probability of core damage given a medium break loss-of-coolant accident. This probability estimate was 2.7×10⁻³, or about 1 chance in 370. Multiplying these two numbers, NRC estimated that the potential for a nozzle to crack and cause a loss-of-coolant accident would increase the frequency of core damage at Davis-Besse by about 5.4×10⁻⁵ per year, or about 1 in 18,500 per year. Converting this frequency to a probability associated with continued operation for 7 weeks, NRC calculated that the increase in the probability of core damage was approximately 5×10⁻⁶, or 1 chance in 200,000.
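The arithmetic described above can be checked with a short sketch. The figures are the report’s; the variable names are ours, and, as the comments note, a straight proportional scaling of the annual frequency lands near 7×10⁻⁶, somewhat above the roughly 5×10⁻⁶ the report attributes to NRC, so NRC evidently rounded or used slightly different inputs:

```python
# Reconstruction of the two-factor PRA arithmetic described in the text,
# using the report's figures; variable names are ours, not NRC's.
WEEKS_PER_YEAR = 52.18

loca_frequency = 2e-2      # cracked nozzle initiating a LOCA, per year (1 in 50)
p_cd_given_loca = 2.7e-3   # core damage given that LOCA (about 1 in 370)

# Increase in core damage frequency, per year.
delta_cdf = loca_frequency * p_cd_given_loca
print(f"{delta_cdf:.1e} per year")   # 5.4e-05, about 1 in 18,500 per year

# Scaling to the 7-week extension (Dec. 31, 2001, to Feb. 16, 2002).
# Straight proportional scaling gives a value near 7e-6, above the ~5e-6
# NRC reported, so NRC evidently rounded or used slightly different inputs.
p_seven_weeks = delta_cdf * (7 / WEEKS_PER_YEAR)
print(f"{p_seven_weeks:.1e} over 7 weeks")
```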
While NRC officials currently disagree that this was the number it used, this is the number that NRC included in its December 2002 assessment provided to FirstEnergy. Further, we found no evidence in the agency’s records to support NRC’s current assertion. According to our consultants, the way NRC calculated and used the PRA estimate was inadequate in several respects. (See app. II for the consultants’ detailed report.) First, NRC’s calculations did not take into account several factors, such as the possibility of corrosion and axial cracking that could lead to leakage. For example, the consultants concluded that NRC’s estimate of risk was too small, primarily because the calculation did not consider corrosion of the vessel head. In reviewing how NRC developed and used its PRA estimates for Davis-Besse, our consultants noted that the calculated risk was smaller than it should have been because the calculations did not consider corrosion of the reactor vessel from the boric acid coolant leaking through cracks in the nozzles. According to the consultants, apparently all NRC staff involved in the Davis-Besse decision were aware that coolant under high pressure was leaking from valves, flanges, and possibly from cracks but evidently thought that the coolant would immediately flash into steam and noncorrosive compounds of boric acid. Our consultants, however, stated that because boric acid remains potentially corrosive except at temperatures much higher than 600 degrees Fahrenheit, NRC should have anticipated that corrosion could occur. Our consultants further stated that as evaporation occurs, boric acid becomes more concentrated in the remaining liquid—making it far more corrosive—and as vapor pressure decreases, evaporation is further slowed.
They said it should be expected that some of the boric acid in the escaping coolant could reach the metal surfaces as wet or moist, highly corrosive material underlying the surface layers of dry, noncorrosive boric acid, which is evidently what happened at Davis-Besse. Our consultants concluded that NRC staff should have been aware of the experience at French nuclear power plants, where boric acid corrosion from leaking reactor coolant had been identified during the previous decade, its safety significance had been recognized, and safety procedures to mitigate the problem had been implemented. Furthermore, the nuclear power industry and government laboratories had conducted tests on boric acid corrosion whose results were widely available to NRC. The consultants stated that keeping abreast of safety issues at similar plants, whether domestic or foreign, and conveying relevant safety information to licensees are important functions of NRC’s safety program. According to NRC, the agency was aware of the experience at French nuclear power plants. For example, NRC concluded, in a December 15, 1994, internal memo, that primary coolant leakage from a through-wall crack could cause boric acid corrosion of the vessel head. However, because some analyses indicated that it would take at least 6 to 9 years before any corrosion would challenge the structural integrity of the head, NRC concluded that cracking was not a short-term safety issue. Our consultants also stated that NRC’s risk analysis was inadequate because the analysis concerned only the formation and propagation of circumferential cracks that could result in nozzle failure, loss of coolant, and even control rod ejection. Although there is less chance of axial cracks causing complete nozzle failure, these cracks open additional pathways for coolant leakage. In addition, their long crevices provide considerably greater opportunity for the coolant to concentrate near the surface of the vessel head.
However, according to our consultants, NRC was convinced that the boric acid deposits it saw resulted from leaking flanges above the reactor vessel head rather than from axial cracks in the nozzles. Second, NRC’s analysis was inadequate because it neither quantified the uncertainty of its risk estimate nor used an uncertainty analysis in the Davis-Besse decision-making process, even though NRC staff should have recognized the large uncertainties associated with estimating the frequency of core damage resulting from nozzle failure. PRA estimates for nuclear power plants are subject to significant uncertainties associated with human errors and other common causes of system component failures, and it is important that proper uncertainty analyses be performed for any PRA study. NRC guidance and other NRC reports on advancing PRA technology for risk-informed decisions emphasize the need to understand and characterize uncertainties in PRA estimates. Our consultants stated that had the NRC staff estimated the margin of error or uncertainty associated with its PRA estimate for Davis-Besse, the uncertainty would likely have been so high as to render the estimate of questionable value. Third, NRC’s analysis was inadequate because the risk estimates were higher than generally considered acceptable under NRC guidance. Despite PRA’s important role in the decision, our consultants found that NRC did not follow its own guidance for ensuring that the estimated risk was within levels acceptable to the agency. NRC required the nuclear power industry to develop a baseline estimate of how frequently a core damage accident could occur at every nuclear power plant in the United States. This baseline estimate is used as a basis for deciding whether changes at a plant that affect the core damage frequency are acceptable.
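The consultants’ point about unquantified uncertainty can be illustrated with a small Monte Carlo sketch that propagates assumed lognormal uncertainties through the two-factor product. The error factors below are entirely hypothetical, since the report quotes no uncertainty bounds for either input, which is precisely the deficiency the consultants identified:

```python
# Illustrative only: propagate hypothetical lognormal uncertainties through
# the two-factor PRA product discussed in the text.
import math
import random

random.seed(1)

def lognormal_samples(median, error_factor, n):
    """Draw n lognormal samples; the error factor is the ratio of the 95th
    percentile to the median, so sigma = ln(EF) / 1.645."""
    sigma = math.log(error_factor) / 1.645
    return [median * math.exp(random.gauss(0.0, sigma)) for _ in range(n)]

N = 100_000
loca_freq = lognormal_samples(2e-2, 10, N)    # per year (hypothetical EF of 10)
p_cd = lognormal_samples(2.7e-3, 10, N)       # conditional probability (ditto)

delta_cdf = sorted(f * p for f, p in zip(loca_freq, p_cd))
p5, p50, p95 = (delta_cdf[int(N * q)] for q in (0.05, 0.50, 0.95))
print(f"90% interval: {p5:.1e} to {p95:.1e} (median {p50:.1e})")
```

With an error factor of 10 on each input, the 90 percent interval spans more than two orders of magnitude around the 5.4×10⁻⁵ point estimate, which is the sense in which an estimate without uncertainty bounds can be “of questionable value.”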
The baseline core damage frequency estimate for the Davis-Besse plant was between 4×10⁻⁵ and 6.6×10⁻⁵ per year (between 1 chance in 25,000 and about 1 chance in 15,150 per year). NRC guidance for reviewing and approving license amendment requests indicates that any plant-specific change resulting in an increase in the frequency of core damage of 1×10⁻⁵ per year (1 chance in 100,000 per year) or more would fall within the highest risk zone; in this case, NRC would generally not approve the change because the risk criterion would not be met. If a license change would result in a core damage frequency change of between 1×10⁻⁵ and 1×10⁻⁶ per year (between 1 chance in 100,000 and 1 chance in 1 million per year), the risk criterion would be considered marginally met and NRC would consider approving the change but would require additional analysis. Finally, if a license change would result in a core damage frequency change of 1×10⁻⁶ per year (1 chance in 1 million per year) or less, the risk would fall within the lowest risk zone; NRC would consider the risk criterion to be met and would generally consider approving the change without requiring additional analysis. (See fig. 6.) However, NRC’s PRA estimate for Davis-Besse—an increase in the frequency of core damage of 5.4×10⁻⁵ per year, or 1 chance in about 18,500 per year—was higher than the acceptable level. While an NRC official who helped develop the risk estimate said that additional NRC and industry guidance was used to evaluate whether the PRA estimate was acceptable, this guidance also suggests that NRC’s estimate was too high. NRC’s estimate of the increase in the frequency of core damage of 5.4×10⁻⁵ per year equates to an increase in the probability of core damage of 5×10⁻⁶, or 1 chance in 200,000, for the 7-week period December 31, 2001, to February 16, 2002.
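The three-zone screening just described reduces to a simple threshold check, sketched below. The function name and zone labels are ours, not NRC terminology, and NRC’s actual guidance involves more than a bare comparison:

```python
def risk_zone(delta_cdf_per_year: float) -> str:
    """Classify an increase in core damage frequency (per year) against the
    three zones in the license-amendment guidance described in the text."""
    if delta_cdf_per_year >= 1e-5:
        return "high: change generally not approved"
    if delta_cdf_per_year > 1e-6:
        return "marginal: additional analysis required"
    return "low: generally approved without additional analysis"

# NRC's Davis-Besse estimate fell squarely in the highest risk zone.
print(risk_zone(5.4e-5))   # high: change generally not approved
```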
NRC’s guidance for evaluating requests to relax NRC technical specifications suggests that a probability increase higher than 5×10⁻⁷, or 1 chance in 2 million, is considered unacceptable for relaxing the specifications. Thus, NRC’s estimate would not be considered acceptable under this guidance. NRC’s estimate would also not be considered acceptable under Electric Power Research Institute or Nuclear Energy Institute guidance unless further action were taken to evaluate or manage risk. According to NRC officials, NRC viewed its PRA estimate as being within acceptable bounds because it was a temporary situation—7 weeks—and NRC had, at other times, allowed much higher levels of risk at other plants. However, at the time that NRC made its decision, it did not document the basis for accepting this risk estimate, even though NRC’s guidance explicitly states that the decision on whether PRA results are acceptable must be based on a full understanding of the contributors to the PRA results and the reasoning must be well documented. In defense of its decision, NRC officials said that the process they used to arrive at the decision is used to make about 1,500 licensing decisions such as this each year. Lastly, NRC’s analysis was inadequate because the agency does not have clear guidance for how PRA estimates are to be used in the decision-making process. Our consultants concluded that NRC’s process for risk-informed decision making is ill-defined, lacks guidelines for how it is supposed to work, and is not uniformly transparent within NRC. According to NRC officials involved in the Davis-Besse decision, NRC’s guidance is not clear on the use of PRA in the decision-making process. For example, while NRC has extensive guidance, this guidance does not outline to what extent or how the resultant PRA risk number and uncertainty should be weighed with respect to the ultimate decision.
One factor complicating this issue is the lack of a predetermined methodology to weigh risks expressed in PRA numbers against traditional deterministic results and other factors. Absent this guidance, the value assigned to the PRA analysis is largely at the discretion of the decision maker. The process, which NRC stated is robust, can result in a decision in which PRA played no role, a partial role, or one in which it was the sole deciding factor. According to our consultants, this situation is made worse by the lack of guidelines for how, or by whom, decisions in general are made at NRC. It is not clear how NRC staff used the PRA risk estimate in the Davis-Besse decision-making process. For example, according to one NRC official who was familiar with some of the data on nozzle cracking, these data were not sufficient for making a good probabilistic decision. He stated that he favored issuing an order requiring that Davis-Besse be shut down by the end of December 2001 because he believed the available data were not sufficient to assure a low enough probability for a nozzle to be ejected. Other officials indicated that they accepted FirstEnergy’s proposed February 16, 2002, shutdown date based largely on NRC’s PRA estimate for a nozzle to crack and be ejected. According to one of these officials, allowing the additional 7 weeks of operating time was not sufficiently risk significant under NRC’s guidance. He stated that safety margins at the plant were preserved and the PRA number was within an acceptable range. Still another official said he discounted the PRA estimate and did not use it at all when recommending that NRC accept FirstEnergy’s compromise proposal. This official also stated that it was likely that many of the staff did base their conclusions on the PRA estimate. 
According to our consultants, although the extent to which the PRA risk analysis influenced the decision making will probably never be known, it apparently played an important role in the decision to allow the shutdown delay. NRC Has Made Progress in Implementing Recommended Changes, but Is Not Addressing Important Systemic Issues NRC has made significant progress in implementing the actions recommended by the Davis-Besse lessons-learned task force. While NRC had implemented slightly less than half of the recommendations—21 of 51—as of March 2004, it is scheduled to have more than 70 percent of them implemented by the end of 2004. For example, NRC has already taken actions to improve staff training and inspections that would appear to help address the concern that NRC inspectors viewed FirstEnergy as a good performer and thus did not subject Davis-Besse to the level of scrutiny or questioning that they should have. It is not certain when actions to implement the remaining recommendations will occur, in part because of resource constraints. NRC also faces challenges in fully implementing the recommendations, in part because of constraints both on the staff needed to develop specific corrective actions and on the additional staff responsibilities and duties needed to carry them out. Further, while NRC is making progress, the agency is not addressing three systemic issues highlighted by the Davis-Besse experience: (1) an inability to detect weakness or deterioration in FirstEnergy’s safety culture, (2) deficiencies in NRC’s process for deciding on a shutdown, and (3) a lack of management controls to track, on a longer-term basis, the effectiveness of actions implemented in response to incidents such as Davis-Besse, so that similar incidents do not occur at another power plant.
NRC Does Not Expect to Complete Its Actions until 2006, in Part Because of Resource Constraints NRC’s lessons-learned task force for Davis-Besse developed 51 recommendations to address the weaknesses that contributed to the Davis-Besse incident. Of these 51 recommendations, NRC rejected 2 because it concluded that agency processes or procedures already provided for the recommendations’ intent to be effectively carried out. To address the remaining 49 recommendations, NRC developed a plan in March 2003 that included, for each recommendation, the actions to be taken, the responsible NRC office, and the schedule for completing the actions. When developing its schedule, NRC placed the highest priority on implementing recommendations that were most directly related to the underlying causes of the Davis-Besse incident as well as those recommendations responding to vessel head corrosion. NRC assigned a lower priority to the remaining recommendations, which were to be integrated into the planning activities of those NRC offices assigned responsibility for taking action on the recommendations. In assigning these differing priorities, NRC officials stated they recognized that the agency has many other pressing matters to address that are not related to the Davis-Besse incident, such as renewing operating licenses, and they did not want to divert resources away from these activities. (App. III contains a complete list of the task force’s recommendations, NRC actions, and the status of the recommendations as of March 2004.) To better track the status of the agency’s actions to implement the recommendations, we split two of the 49 recommendations that NRC accepted into 4; therefore, our analysis reflects NRC’s response to 51 recommendations. As shown in table 1, as of March 2004, NRC had made progress in implementing the recommendations, although some completion dates have slipped.
As the table shows, as of March 2004, NRC had implemented 21 recommendations and scheduled another 17 for completion by December 2004. However, some slippage has already occurred in this schedule—primarily because of resource constraints—and NRC has rescheduled completion of some recommendations. NRC’s time frames for completing the recommendations depend on several factors—the recommendations’ priority, the amount of work required to develop and implement actions, and the need to first complete actions on other related recommendations. Of the 21 implemented recommendations, 10 called upon NRC to revise or enhance its inspection guidance or training. For example, NRC revised the guidance it uses to assess the implementation of licensees’ programs to identify and resolve problems before they affect operations. It took this action because the task force had concluded that FirstEnergy’s weak corrective action program implementation was a major contributor to the Davis-Besse incident. NRC has also developed Web-based training modules to improve NRC inspectors’ knowledge of boric acid corrosion and nozzle cracking. The other 11 completed recommendations concerned actions such as collecting and analyzing foreign and domestic information on alloy 600; fully implementing and revising guidance to better assure that licensees carry out their commitments to make operational changes; and establishing measurements for resident inspector staffing levels and requirements. By the end of 2004, NRC expects to complete another 17 recommendations, 12 of which generally address broad oversight or programmatic issues, and 5 of which provide for additional inspection guidance and training. On the broader issues, for example, NRC is scheduled to complete a review of the effectiveness of its response to past NRC lessons-learned task force reports by April 2004.
By December 2004, NRC expects to have a framework established for moving forward with implementing recommended improvements to its agencywide operating experience program. In 2005, 4 of the 6 recommendations scheduled for completion concern leakage from the reactor coolant system. For example, NRC is to (1) develop guidance and criteria for assessing licensees’ responses to increasing leakage levels and (2) determine whether licensees should install enhanced systems to detect leakage from the reactor coolant system. The fifth recommendation calls for NRC to inspect the adequacy of licensees’ programs for controlling boric acid corrosion, and the final recommendation calls on NRC to assess the basis for canceling a series of inspection procedures in 2001. NRC did not assign completion dates to 7 recommendations, in part because their completion depends on completing other recommendations and in part because of limited resources. Even though it has not assigned completion dates for these recommendations, NRC has begun to work on 5 of the 7: Two recommendations will be addressed when requirements for vessel head inspections are revised. To date, NRC has taken some related, but temporary, actions. For example, since February 2003, it has required licensees to more extensively examine their reactor vessel heads. NRC has also issued a series of temporary instructions for NRC inspectors to oversee the enhanced examinations. NRC expects to replace these temporary steps with revised requirements for vessel head inspections. Two recommendations call upon NRC to revise requirements for detecting leaks in the reactor coolant pressure boundary. In response, NRC has, for example, begun to review its barrier integrity requirements and has contracted for research on enhanced detection capabilities. One recommendation is directed at improving follow-up of licensee actions taken in response to NRC generic communications.
NRC is currently developing a temporary inspection procedure to assess the effectiveness of licensee actions taken in response to generic communications. Additionally, as a long-term change in the operating experience program, the agency plans to improve the verification of how effective its generic communications are. The remaining two recommendations address NRC’s need to (1) evaluate the adequacy of methods for analyzing the risks posed by passive components, such as reactor vessels, and integrate these methods and risks into NRC’s decision-making process and (2) review a sample of plant assessments conducted between 1998 and 2000 to determine if any identified plant safety issues have not been adequately assessed. NRC has not yet taken action on these recommendations. Some recommendations will require substantial resources to develop and implement. As a result, some implementation dates have slipped and some plans in response to the recommendations have changed in scope. For example, owing to resource constraints, NRC has postponed indefinitely the evaluation of methods to analyze the risk associated with passive reactor components such as the vessel head. Also, in part due to resource constraints, NRC has reconceptualized its plan to review licensee actions in response to previous generic communications, such as bulletins and letters. Staff resources will be strained because implementing the recommendations adds additional responsibilities or duties—that is, more inspections, training, and reviews of licensee reports. For example, NRC’s revised inspection guidance for more thorough examinations of reactor vessel heads and nozzles, as well as new requirements for NRC oversight of licensees’ corrective action programs, will require at least an additional 200 hours of inspection per reactor per year. As of February 2004, NRC was also revising other inspection requirements that are likely to place additional demands on inspectors’ time. 
Thus, to respond to these increased demands, NRC will either need to add inspectors or reduce oversight of other licensee activities. To its credit, in its 2004 budget plan, NRC increased the level of resources for some inspection activities. However, it is not certain that these increases will be maintained; the number of inspection hours fell by more than one-third between 1995 and 2001. In addition, NRC is aware that resident inspector vacancies are filled with staff having varying levels of experience—from the basic level that would be expected from a newly qualified inspector to the advanced level that is achieved after several years’ experience. According to the latest available data, as of May 2003, about 12 percent of sites had only one resident inspector; the remaining 88 percent had two inspectors of varying levels of experience. Because of this situation, NRC augments these inspection resources with regional inspectors and contractors to ensure that, at a minimum, its baseline inspection program can be implemented throughout the year. Because of surges in the demand for inspections, NRC in 2003 increased its use of contractors and temporarily pulled qualified inspectors from other jobs to help complete the baseline inspection program for every plant. According to NRC, it did not expect to require such measures in 2004. Similarly, NRC may require additional staff to identify and evaluate plants’ operating experiences and communicate the results to licensees, as the task force recommended. NRC has currently budgeted an increase of three full-time staff in fiscal year 2006 to implement a centralized system, or clearinghouse, for managing the operating experience program. However, according to an NRC official, questions remain about the level of resources needed to fully implement the task force recommendations.
NRC’s operating experience office, before it was disbanded in 1999, had about 33 staff whose primary responsibilities were to collect, evaluate, and communicate information on safety performance trends, as reflected in licensees’ operating experiences, and to participate in developing rulemakings. However, it is too early to know the effectiveness of this clearinghouse approach and the adequacy of the resources available in the other offices for collecting and analyzing operating experience information. Neither the operating experience office, before it was disbanded, nor the other offices flagged boric acid corrosion, cracking, or leakage as problems warranting significantly greater oversight by NRC, licensees, or the nuclear power industry. NRC Has Not Proposed Any Specific Actions to Correct Systemic Weaknesses in Oversight and Decision-Making Processes NRC’s Davis-Besse task force did not make any recommendations to address two systemic problems: evaluating licensees’ commitment to safety and improving the agency’s process for deciding on a shutdown. NRC’s Task Force Recommendations Did Not Address Licensee Safety Culture NRC’s task force identified numerous problems at Davis-Besse that indicated human performance and management failures and concluded that FirstEnergy did not foster an environment that was fully conducive to ensuring that plant safety issues received appropriate attention.
Although the task force report did not use the term safety culture, as evidence of FirstEnergy’s safety culture problems, the task force pointed to an imbalance between production and safety, as evidenced by FirstEnergy’s efforts to address symptoms (such as regular cleanup of boric acid deposits) rather than causes (finding the source of the leaks during refueling outages); a lack of management involvement in or oversight of work at Davis-Besse that was important for maintaining safety; a lack of a questioning attitude by senior FirstEnergy managers with regard to vessel head inspections and cleaning activities; ineffective and untimely corrective action; a long-standing acceptance of degraded equipment; and inadequate engineering rigor. The task force concluded that NRC’s implementation of guidance for inspecting and assessing a safety-conscious work environment and employee concerns programs failed to identify significant safety problems. Although the task force did not make any specific recommendations that NRC develop a means to assess licensees’ safety culture, it did recommend changes to focus more effort on assessing programs to promote a safety-conscious work environment. NRC has taken little direct action in response to this task force recommendation. However, to help enhance NRC’s capability to assess licensee safety culture by indirect means, NRC modified the wording in, and revised its inspection procedure for, assessing licensees’ ability to identify and resolve problems, such as malfunctioning plant equipment. These revisions included requiring inspectors to review all licensee reports on plant conditions, analyze trends in plant conditions to determine the existence of potentially significant safety issues, and expand the scope of their reviews to the prior 5 years in order to identify recurring issues.
This problem identification and resolution inspection procedure is intended to assess the end results of management’s safety commitment rather than the commitment itself. However, because the procedure measures only end results, early signs of a deteriorating safety culture and declining management performance may not be readily visible and may be hard to interpret until clear violations of NRC’s regulations occur. Furthermore, because NRC directs its inspections at problems that it recognizes as being more important to safety, NRC may overlook other problems until they develop into significant and immediate safety problems. Conditions at a plant can quickly degrade to the extent that they compromise public health and safety. The International Atomic Energy Agency and its member nations have developed guidance and procedures for assessing safety culture at nuclear power plants, and today several countries, such as Brazil, Canada, Finland, Sweden, and the United Kingdom, assess plant safety culture or licensees’ own assessments of their safety culture. In assessing safety culture, an advisory group to the agency suggests that regulatory agencies examine whether, for example, (1) employee workloads are not excessive, (2) staff training is sufficient, (3) responsibility for safety has been clearly assigned within the organization, (4) the corporation has clearly communicated its safety policy, and (5) managers sufficiently emphasize safety during plant meetings. One reason for assessing safety culture, according to the Canadian Nuclear Safety Commission, is that management and human performance failings are among the leading causes of unplanned events at licensed nuclear facilities, particularly in light of pressures such as deregulation of the electricity market. Finland specifically requires that nuclear power plants maintain an advanced safety culture, and its inspections assess how thoroughly safety has been embedded in the factors affecting plant operations, including management.
NRC had begun considering methods for assessing organizational factors, including safety culture, but in 1998, NRC's commissioners decided that the agency should have a performance-based inspection program of overall plant performance and should infer licensee management performance and competency from the results of that program. They chose this approach instead of one of four other options: (1) conduct performance-based inspections in all areas of facility operation and design, but not infer or articulate conclusions regarding the performance of licensee management; (2) assess the performance of licensee management through targeted operations-based inspections using specific inspection procedures, trained staff, and contractors—a task that would require the development of inspection procedures and significant training—and document inspection results; (3) assess the performance of licensee management as part of the routine inspection program by specifically evaluating and documenting management performance attributes—a larger effort that would require the development of assessment tools to evaluate safety culture as well as additional resources; or (4) assess the competency of licensee management by evaluating management competency attributes—an even larger effort that would require that implementation options and their impacts be assessed. When adopting the proposal to infer licensee management performance from the results of its performance-based inspection program, NRC eliminated any resource expenditures specifically directed at developing a systematic method of inferring management performance and competency. NRC stated that it currently has a number of means to assess safety culture that provide indirect insights into licensee safety culture.
These means include, for example, (1) insights from augmented inspection teams, (2) lessons-learned reviews, and (3) information obtained in the course of conducting inspections under the Reactor Oversight Process. However, insights from augmented inspection teams and lessons-learned reviews are reactive and do not prevent problems such as those that occurred at Davis-Besse. Further, before the Davis-Besse incident, NRC assumed its oversight process would adequately identify problems with licensees' safety culture. However, NRC has no formalized process for collectively assessing information obtained in the course of its problem identification and resolution inspections to ensure that individual inspection results would identify poor management performance. NRC stated that its licensee assessments consider inputs such as inspection results and insights, correspondence to licensees related to inspection observations, input from resident inspectors, and the results of any special investigations. However, this information may not be sufficient to alert NRC to problems at a plant before they become safety significant. In part because of Davis-Besse, NRC's Advisory Committee on Reactor Safeguards recommended that NRC again pursue the development of a methodology for assessing safety culture. It also asked NRC to consider expanding research to identify leading indicators of degradation in human performance and to develop a consistent, comprehensive methodology for quantifying human performance. During an October 2003 public meeting of the advisory committee's Human Performance Subcommittee, the subcommittee's members reiterated the need for NRC to assess safety culture.
Specifically, the members recognized that certain aspects of safety culture, such as beliefs, perceptions, and management philosophies, are ultimately the nuclear power industry’s responsibility but stated that NRC should deal with patterns of behavior and human performance, as well as organizational structures and processes. At this meeting, NRC officials discussed potential safety culture indicators that NRC could use, including, among other things, how many times a problem recurs at a plant, timeliness in correcting problems, number of temporary modifications, and individual program and process error rates. Committee members recommended that NRC test various safety culture indicators to determine whether (1) such indicators should ultimately be incorporated into the Reactor Oversight Process and (2) a significance determination process could be developed for safety culture. As of March 2004, NRC had yet to respond to the advisory committee’s recommendation. Despite the lack of action to address safety culture issues, NRC’s concern over FirstEnergy’s safety culture at Davis-Besse was one of the last issues resolved before the agency approved Davis-Besse’s restart. NRC undertook a series of inspections to examine Davis-Besse’s safety culture and determine whether FirstEnergy had (1) correctly identified the underlying causes associated with its declining safety culture, (2) implemented appropriate actions to correct safety culture problems, and (3) developed a process for monitoring to ensure that actions taken were effective for resolving safety culture problems. In December 2003, NRC noted significant improvements in the safety culture at Davis-Besse, but expressed concern with the sustainability of Davis-Besse’s performance in this area. 
For example, a survey of FirstEnergy and contract employees conducted by FirstEnergy in November 2003 indicated that about 17 percent of employees believed that management cared more about cost and schedule than about resolving safety and quality issues—again, production over safety.

NRC's Task Force Recommendations Did Not Address NRC's Decision-Making Process

NRC's task force also did not analyze NRC's process for deciding not to order a shutdown of the Davis-Besse plant. It noted that NRC's written rationale for accepting FirstEnergy's justification for continued plant operation had not yet been prepared and recommended that NRC change guidance to require that such decisions be adequately documented. It also made a recommendation to strengthen guidance for verifying information provided by licensees. According to an NRC official on the task force, the task force did not assess the decision-making process in detail because it was charged with determining why the degradation at Davis-Besse was not prevented and because NRC had coordinated with NRC's Office of the Inspector General, which was reviewing NRC's decision making.

NRC's Failure to Track the Resolution of Identified Problems May Allow the Problems to Recur

The NRC task force conducted a preliminary review of prior lessons-learned task force reports to determine whether they suggested any recurring or similar problems. As a result of this preliminary review, the task force recommended that a more detailed review be conducted to determine if actions that NRC took as a result of those reviews were effective. These previous task force reports covered: Indian Point 2 in Buchanan, New York, in February 2000; Millstone in Waterford, Connecticut, in October 1993; and South Texas Project in Wadsworth, Texas, from 1988 to 1994. NRC's more detailed review, as of May 2004, was still under way.
We also reviewed these reports to determine whether they suggested any recurring problems and found that they highlighted broad areas of continuing programmatic weakness, as seen in the following examples:

Inspector training and information sharing. All three of the other task forces also identified inspector training issues and problems with information collection and sharing. The Indian Point task force called upon NRC to develop a process for promptly disseminating technical information to NRC inspectors so that they can review and apply the information in their inspection program.

Oversight of licensee corrective action programs. Two of the three task forces also identified inadequate oversight of licensee corrective action programs. The South Texas task force recommended improving assessments of licensees' corrective action programs to ensure that NRC identifies broader licensee problems.

Better identification of problems. Two of the three task force reports also noted the need for NRC to develop a better process for identifying problem plants, and one report noted the need for NRC inspectors to more aggressively question licensees' activities.

Over the past two decades, we have also reported on underlying causes similar to those that contributed, in part, to the incident at Davis-Besse. (See Related GAO Products.) For example, with respect to the safety culture at nuclear power plants, in 1986, 1995, and 1997, we reported on issues relevant to NRC assessing plant management so that significant problems could be detected and corrected before they led to incidents such as the one that later occurred at Davis-Besse. Despite our 1997 recommendation that NRC make the assessment of management's competency and performance a mandatory component of its inspection process, NRC subsequently withdrew funding to accomplish this.
In terms of inspections, in 1995 we reported that NRC itself had concluded that the agency was not effectively integrating information on previously identified and long-standing issues to determine if the issues indicated systemic weaknesses in plant operations. This report further noted that NRC was not using such information to focus future inspection activities. In 1997 and 2001, we reported on weaknesses in NRC's inspections of licensees' corrective action programs. Finally, with respect to learning from plants' operating experiences, in 1984 we noted that NRC needed to improve its methods for consolidating information so that it could evaluate safety trends and ensure that generic issues are resolved at individual plants. These recurring issues indicate that NRC's actions in response to individual plant incidents and recommendations to improve oversight are not always institutionalized. NRC guidance requires that resolutions to action plans be described and documented, and while NRC is monitoring the status of actions taken in response to Davis-Besse task force recommendations and preparing quarterly and semiannual reports on their status, the Davis-Besse action plan does not specify how long NRC will monitor them. It also does not specify how long NRC will prepare quarterly and semiannual status reports, even though, according to NRC officials, these semiannual status reports will continue until all items are completed and the agency issues a required final summary report. The plan also does not specify what criteria the agency will use to determine when the actions in response to specific task force recommendations are complete. Furthermore, NRC's action plan does not require NRC to assess the long-term effectiveness of recommended actions, even though, according to NRC officials, some activities already include an effectiveness review.
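The elements the action plan lacks—assigned accountability, completion criteria, a defined monitoring duration, and an effectiveness check—suggest the shape of a simple tracking record. Below is a hypothetical sketch; every field name and value is our own illustration, not part of any NRC system:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RecommendationRecord:
    """Hypothetical entry in an agencywide tracking system of the kind
    the report finds missing from NRC's action plan."""
    recommendation: str
    accountable_office: str                       # who owns implementation
    status: str = "open"                          # open / in progress / completed
    completion_criteria: str = ""                 # how completion will be judged
    effectiveness_review: Optional[str] = None    # long-term effectiveness check
    status_reports: list = field(default_factory=list)  # quarterly/semiannual

# Illustrative use; the office abbreviation is invented for the example.
rec = RecommendationRecord(
    recommendation="Develop guidance to periodically inspect licensees' "
                   "boric acid corrosion control programs",
    accountable_office="Office of Nuclear Reactor Regulation (illustrative)",
    completion_criteria="Revised inspection procedure issued",
)
rec.status_reports.append("2003-Q4: temporary guidance issued")
```

The design point is that "completed" and "effective" are tracked as separate fields, so closing out an action does not silently end the question of whether it worked.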
As in the past and in response to prior lessons-learned task force reports and recommendations, NRC has no management control in place for assessing the long-term effectiveness of efforts resulting from the recommendations. NRC officials acknowledged the need for a management control, such as an agencywide tracking system, to ensure that actions taken in response to task force recommendations effectively resolve the underlying issues over the long term, but the officials have no plans to establish such a system.

Conclusions

It is unlikely, given the actions that NRC has taken to date, that extensive reactor vessel corrosion will occur any time soon at another domestic nuclear power plant. However, we do not yet have adequate assurance from NRC that many of the factors that contributed to the incident at Davis-Besse will be fully addressed. These factors include NRC's failure to keep abreast of safety-significant issues by collecting information on operating experiences at plants, assessing their relative safety significance, and effectively communicating information within the agency to ensure that oversight is fully informed. The underlying causes of the Davis-Besse incident underscore the potential for another incident, unrelated to boric acid corrosion or cracked control rod drive mechanism nozzles, to occur. This potential is reinforced by the fact that both prior NRC lessons-learned task forces and we have found similar weaknesses in many of the same NRC programs that led to the Davis-Besse incident. NRC has not followed up on prior task force recommendations to assess whether the lessons learned were institutionalized. To be fully effective, NRC's actions to implement the Davis-Besse lessons-learned task force recommendations will require an extensive effort on NRC's part to ensure that they are effectively incorporated into the agency's processes.
However, NRC has not estimated the amount of resources necessary to carry out these recommendations, and we are concerned that resource limitations could constrain their effectiveness. For this reason, it is important for NRC not only to monitor the implementation of Davis-Besse task force recommendations, but also to determine their long-term effectiveness and the impact that resource constraints may have on them. These actions are even more important because the nation's fleet of nuclear power plants is aging. Because the Davis-Besse task force did not address NRC's unwillingness to directly assess licensee safety culture, we are concerned that NRC's oversight will continue to be reactive rather than proactive. NRC's oversight can lead it to judge a licensee's performance good one day, only to discover the next day that the performance poses an unacceptable risk to public health and safety. Such a situation does not occur overnight: long-standing action or inaction on the part of the licensee causes unacceptably risky and degraded conditions. NRC needs better information to preclude such conditions. Given the complexity of nuclear power plants, the number of physical structures, systems, and components, and the manner in which NRC inspectors must sample to assess whether licensees are complying with NRC requirements and license specifications, it is possible that NRC will not identify licensees that value production over safety. While we recognize the difficulty in assessing licensee safety culture, we believe it is sufficiently important to develop a means to do so. Given the limited information NRC had at the time and the fact that an accident did not occur during the delay in Davis-Besse's shutdown, we do not necessarily question the decision the agency made. However, we are concerned about NRC's process for making that decision.
NRC used guidance intended for another purpose, did not rigorously apply that guidance, established an unrealistically high standard of evidence for issuing a shutdown order, relied on incomplete and faulty PRA analyses and licensee evidence, and did not document key decisions and data. It is extremely unusual for NRC to order a nuclear power plant to shut down. Given this fact, it is all the more imperative that NRC have guidance to use when technical specifications or requirements may be met, yet questions arise over whether sufficient safety is being maintained. This guidance does not need to be a risk-based approach, but rather a more structured risk-informed approach that is sufficiently flexible to be applicable under different circumstances. This is important because NRC annually makes about 1,500 licensing decisions relating to operating commercial nuclear power plants. While we recognize the challenges NRC will face in developing such guidance, the large number and wide variety of decisions strongly highlight the need for NRC to ensure that its decision-making process and decisions are sound and defensible.

Recommendations for Executive Action

To ensure that NRC aggressively and comprehensively addresses the weaknesses that contributed to the Davis-Besse incident and could contribute to problems at nuclear power plants in the future, we are recommending that the NRC commissioners take the following five actions:

Determine the resource implications of the task force's recommendations and reallocate the agency's resources, as appropriate, to better ensure that NRC effectively implements the recommendations.

Develop a management control approach to track, on a long-term basis, implementation of the recommendations made by the Davis-Besse lessons-learned task force and future task forces.
This approach, at a minimum, should assign accountability for implementing each recommendation and include information on the status of major actions, how each recommendation will be judged as completed, and how its effectiveness will be assessed. The approach should also provide for regular—quarterly or semiannual—reports to the NRC commissioners on the status of and obstacles to full implementation of the recommendations.

Develop a methodology to assess licensees' safety culture that includes indicators of and inspection information on patterns of licensee performance, as well as on licensees' organization and processes. NRC should collect and analyze these data during the agency's routine inspection program, during separate targeted assessments, or during both, to provide an early warning of deteriorating or declining performance and future safety problems.

Develop specific guidance and a well-defined process for deciding when to shut down a nuclear power plant. The guidance should clearly set out the process to be used, the safety-related factors to be considered, the weight that should be assigned to each factor, and the standards for judging the quality of the evidence considered.

Improve NRC's use of probabilistic risk assessment estimates in decision making by (1) ensuring that the risk estimates, uncertainties, and assumptions made in developing the estimates are fully defined, documented, and communicated to NRC decision makers; and (2) providing guidance to decision makers on how to consider the relative importance, validity, and reliability of quantitative risk estimates in conjunction with other qualitative safety-related factors.

Agency Comments and Our Evaluation

We provided a draft of this report to NRC for review and comment. We received written comments from the agency's Executive Director for Operations.
In its written comments, NRC generally addressed only those findings and recommendations with which it disagreed. Although commenting that it agreed with many of the report's findings, NRC expressed an overall concern that the report does not appropriately characterize or provide a balanced perspective on NRC's actions surrounding the discovery of the Davis-Besse reactor vessel head condition or NRC's actions to incorporate the lessons learned from that experience into its processes. Specifically, NRC stated that the report does not acknowledge that NRC must rely heavily on its licensees to provide it with complete and accurate information, as required by its regulations. NRC also expressed concern about the report's characterization of its use of risk estimates—specifically, the report's statement that NRC's estimate of risk exceeded the risk levels generally accepted by the agency. In addition, NRC disagreed with two of our recommendations: (1) to develop specific guidance and a well-defined process for deciding when to shut down a plant and (2) to develop a methodology to assess licensees' safety culture. With respect to NRC's overall concern, we believe that the report accurately captures NRC's performance. Our draft report, in discussing NRC's regulatory and oversight role and responsibilities, stated that, according to NRC, the completeness and accuracy of the information provided by licensees is an important aspect of the agency's oversight. To respond further to NRC's concern, we added a statement to the effect that licensees are required under NRC's regulations to provide the agency with complete and accurate information.
While we do not want to diminish the importance of this responsibility on the part of the licensees, we believe that NRC also has a responsibility, in designing its oversight program, to implement management controls, including inspection and enforcement, to ensure that it has accurate information on, and is sufficiently aware of, plant conditions. In this respect, it was NRC's decision to rely on the premise that the information provided by FirstEnergy was complete and accurate. As we point out in the report, the degradation of the vessel head at Davis-Besse occurred over several years. NRC knew about several indications that problems were occurring at the plant, and the agency could have requested and obtained additional information about the vessel head condition. We also believe that the report's characterization of NRC's use of risk estimates is accurate. The NRC risk estimate that we and our consultants found for the period leading up to the December 2001 decision on Davis-Besse's shutdown, including the risk estimate used by the staff during key briefings of NRC management, indicated that the estimate for core damage frequency was 5.4 × 10⁻⁵ per year, as used in the report. The 5 × 10⁻⁶ referenced in NRC's December 2002 safety evaluation is a core damage probability, which equates to a core damage frequency of approximately 5 × 10⁻⁵ per year—a level in excess of the level generally accepted by the agency. Our consultants' impression is that some confusion about the difference between these terms may exist among NRC staff. Concerning NRC's disagreement with our recommendation to develop specific guidance for making plant shutdown decisions, NRC stated that its regulations, guidance, and processes are robust and provide sufficient guidance in the vast majority of situations.
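The distinction between the two quantities is simple arithmetic: a core damage probability (CDP) is a core damage frequency (CDF, per year) accumulated over an exposure time. A short check of the equivalence the figures above imply (the exposure window is our inference from those figures, not a number stated in the report):

```python
# CDP ≈ CDF * T, where T is the exposure time in years.
cdp = 5e-6           # probability cited in NRC's December 2002 safety evaluation
cdf = 5e-5           # per-year frequency the report says this equates to
T_years = cdp / cdf  # implied exposure window
print(T_years)       # → 0.1 (i.e., about 5 weeks)
```

This is why comparing a probability over a short window directly against a per-year frequency benchmark understates the risk by roughly a factor of ten.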
The agency added that, from time to time, a unique situation may present itself in which sufficient information may not exist or the information available may not be sufficiently clear to apply existing rules and regulations definitively. According to NRC, in these unique instances, the agency's most senior managers, after consultation with staff experts and given all of the information available at the time, decide whether to require a plant shutdown. While we agree that NRC has an array of guidance for making decisions, we continue to believe that NRC needs specific guidance and a well-defined process for deciding when to shut down a plant. As discussed in our report, the agency used its guidance for approving license change requests to decide when to shut down Davis-Besse. Although NRC's array of guidance provides flexibility, we do not believe that it provides the structure, direction, and accountability needed for decisions as important as the one on Davis-Besse's shutdown. In disagreeing with our recommendation concerning the need for a methodology to assess licensees' safety culture, NRC said that the Commission, to date, has specifically decided not to conduct direct evaluations or inspections of safety culture as a routine part of assessing licensee performance because of the subjective nature of such evaluations. According to NRC, as regulators, agency officials are not charged with managing licensees' facilities, and direct involvement with organizational structure and processes crosses over into a management function. We understand NRC's position that it is not charged with managing licensees' facilities, and we are not suggesting that NRC should prescribe or regulate the licensees' organizational structure or processes. Our recommendation is aimed at NRC monitoring trends in licensees' safety culture as an early warning of declining performance and safety problems.
Such early warnings can help preclude NRC from assessing a licensee as being a good performer one day, and the next day being faced with a situation that it considers a potentially significant safety risk. As discussed in the report, considerable guidance is available on safety culture assessment, and other countries have established safety culture programs. NRC’s written response also contained technical comments, which we have incorporated into the report, as appropriate. (NRC’s comments and our responses are presented in app. IV.) As arranged with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we plan to provide copies of this report to the appropriate congressional committees; the Chairman, NRC; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix V. Time Line Relating Significant Events of Interest Analysis of the Nuclear Regulatory Commission’s Probabilistic Risk Assessment for Davis-Besse Davis-Besse Task Force Recommendations to NRC and Their Status, as of March 2004 Either fully implement or revise guidance to manage licensee commitments. Determine whether the periodic report on commitment changes submitted by licensees should continue. Revised instructions for these submittals and reviews to ensure that these tasks are accomplished. Completed in May 2003. Determine if stress corrosion cracking models are appropriate for predicting susceptibility of vessel head penetration nozzles to pressurized water stress corrosion cracking. 
Determine if additional analysis and testing is needed to reduce modeling uncertainties for their continued applicability in regulatory decision making. Evaluated existing stress corrosion cracking models for their continuing use in determining susceptibility. Completed in July 2003. Revise the problem identification and resolution approach so that safety problems noted in daily licensee reports are reviewed and assessed. Enhance guidance to prescribe the format of information that is screened when deciding which problems to review. Revised inspection procedure for determining licensee ability to promptly identify and resolve conditions adverse to quality or safety. Completed in September 2003. Provide enhanced inspection guidance to pursue issues and problems identified during reviews of plant operations. Revised inspection procedure for determining licensee capability to promptly identify and resolve conditions adverse to quality or safety. Completed in September 2003. Revise inspection guidance to provide for longer-term follow-up of previously identified issues that have not progressed to an inspection finding. Revised inspection procedure for determining licensee capability to promptly identify and resolve conditions adverse to quality or safety. Completed in September 2003. Revise inspection guidance to assess (1) the safety implications of long-standing unresolved licensee equipment problems, (2) the impact of phased in corrective actions, and (3) the implications of deferred plant modifications. Revised inspection procedure for determining licensee capability to identify and resolve conditions adverse to quality or safety. Completed in September 2003. Revise inspection guidance to allow for establishing reactor oversight panels even when a significant performance problem, as defined under NRC's Reactor Oversight Process, does not exist. Revised inspection guidance for establishing reactor oversight panels. Completed in October 2003. 
Assess the scope and adequacy of requirements for licensees to review operating experience. Included in NRC’s recommendation to develop a program for collecting, analyzing, and disseminating information on experiences at operating reactors. Completed in November 2003. Ensure inspector training includes (1) boric acid corrosion effects and control, and (2) pressurized water stress corrosion cracking of nickel-based alloy nozzles. Developed and implemented Web-based training and a means for ensuring training is completed. Completed in December 2003. Provide training and reinforce expectations to managers and staff to (1) maintain a questioning attitude during inspection activities, (2) develop inspection insights from Davis-Besse on symptoms of reactor coolant leakage, (3) communicate expectations to follow up recurring and unresolved problems, and (4) maintain an awareness of surroundings while conducting inspections. Establish mechanisms to perpetuate this training. Developed Web-based inspector training and a means for ensuring that training has been completed. NRC headquarters provided an overview of the training to NRC regional offices. (Training modules will be added and updated as needed.) Completed in December 2003. Reinforce expectations that regional management should make every effort to visit each reactor at least once every 2 years. Discussed at regional counterparts meeting. Completed in December 2003. Develop guidance to address impacts of regional oversight panels on regional resource allocations and organizational alignment. Evaluated past and present oversight panels. Developed enhanced inspection approaches for oversight panels and issued revised procedures. Completed in December 2003. 
Evaluate (1) the capacity to retain operating experience information and perform long-term operating experience reviews; (2) thresholds, criteria, and guidance for initiating generic communications; (3) opportunities for more gains in effectiveness and efficiency by realigning the organization (i.e., feasibility of a centralized operating experience "clearinghouse"); (4) effectiveness of the generic Issues program; and (5) effectiveness of internal dissemination of operating experience information to end users. Developed program objectives and attributes and obtained management endorsement of a plan to implement the recommendation. Developed specific recommendations to improve program. Evaluation completed in November 2003. (Implementation of recommendations resulting from this evaluation expected to be completed in December 2004.) Ensure that generic requirements or guidance are not inappropriately affected when making unrelated changes to other programs, processes, guidance, etc. Revised inspection guidance. Completed in February 2004. Develop inspection guidance to assess scheduler influences on amount of work performed during refueling outages. Revised the appropriate inspection procedure. Completed in February 2004. Establish guidance to ensure that NRC decisions allowing licensees to deviate from guidelines and recommendations issued in generic communications are adequately documented. Update guidance to address documentation. Develop training and distribute to NRC offices and regions to emphasize compliance with the updated guidance. Follow up to assess the effectiveness of the training. Completed follow-up in February 2004. Develop or revise inspection guidance to ensure that NRC reviews vessel head penetration nozzles and the reactor vessel head during licensee inspection activities. Develop or revise inspection guidance to ensure that nozzles and the vessel head are reviewed during licensee inspection. 
Issued interim guidance in August 2003 and a temporary inspection procedure in September 2003. Additional guidance expected in March 2004. Develop inspection guidance to assess (1) repetitive or multiple technical specification actions in NRC inspection or licensee reports, and (2) radiation dose implications for conducting repetitive tasks. Revise the appropriate inspection procedure to reflect this need. Completion expected in March 2004. Develop guidance to periodically inspect licensees’ boric acid corrosion control programs. Issued temporary guidance in November 2003. Completion of further inspection guidance changes expected in March 2004. Reinforce expectations for managers responsible for overseeing operations at nuclear power plants regarding site visits, coordination with resident inspectors, and assignment duration. Reinforce expectations to question information about operating conditions and strengthen guidance for reviewing license amendments to emphasize consideration of current system conditions, reliability, and performance data in safety evaluation reports. Strengthen guidance for verifying licensee-provided information. Update project manager handbook that provides guidance on activities to be conducted during site visits and interactions with NRC regional staff. Also, revise guidance for considering plant conditions during licensing action and amendment reviews. Completion expected in March 2004. Assemble and analyze foreign and domestic information on Alloy 600 nozzle cracking. If additional regulatory action is warranted, propose a course of action and implement a schedule to address the results. Assemble and analyze alloy 600 cracking data. Completion expected in March 2004. Recommendations due to be completed between April and December 2004 Conduct an effectiveness review of actions taken in response to past NRC lessons-learned reviews. Review past lessons-learned actions. Completion expected in April 2004. 
Recommendation: Provide inspection and oversight refresher training to managers and staff.
NRC actions and status: Develop a training module. Completion expected in June 2004.

Recommendation: Establish guidance for accepting owners group and industry recommended resolutions for generic communications and generic issues, including guidance for verifying that actions are taken.
NRC actions and status: Revise office instructions to provide recommended guidance. Completion expected in June 2004.

Recommendation: Review inspection guidance to determine the inspection level that is sufficient during refueling outages, including inspecting reactor areas inaccessible during normal operations and passive components.
NRC actions and status: Revised an inspection procedure to reflect these changes. Some inspection procedure changes were completed in November 2003, and additional changes are expected in August 2004.

Recommendation: Evaluate, and revise as necessary, guidance for proposing candidate generic issues.

Recommendation: Assemble and analyze foreign and domestic information on boric acid corrosion of carbon steel. If additional regulatory action is warranted, propose a course of action and implement a schedule to address the results.
NRC actions and status: Review the Argonne National Laboratory study on boric acid corrosion. Analyze data to revise inspection requirements. Completion expected in October 2004.

Recommendation: Conduct a follow-on verification of licensee actions to implement a sample of significant generic communications, with emphasis on those that are programmatic in nature.
NRC actions and status: Screen candidate generic communications to identify those most appropriate for follow-up using management-approved criteria. Develop and approve a verification plan. Completion expected in November 2004.

Recommendation: Strengthen inspection guidance for periodically reviewing licensee operating experience.
NRC actions and status: Incorporated into the recommendation pertaining to NRC's capacity to retain operating experience information. Completion expected in December 2004.
Recommendation: Enhance the effectiveness of processes for collecting, reviewing, assessing, storing, retrieving, and disseminating foreign operating experience.
NRC actions and status: Incorporated into the recommendation pertaining to NRC's capacity to retain operating experience information. Completion expected in December 2004.

Recommendation: Update operating experience guidance to reflect the changes implemented in response to recommendations for operating experience.
NRC actions and status: Incorporated into the recommendation pertaining to NRC's capacity to retain operating experience information. Completion expected in December 2004.

Recommendation: Review a sample of NRC evaluations of licensee actions made in response to owners groups' commitments to identify whether intended actions were effectively implemented.
NRC actions and status: Conduct the recommended review. Completion expected in December 2004.

Recommendation: Develop general inspection guidance to periodically verify that licensees implement owners groups' commitments.
NRC actions and status: Develop an inspection procedure to provide a mechanism for regions to support project managers' ability to verify that licensees implement commitments. Completion expected in December 2004.

Recommendation: Conduct follow-on verification of licensee actions pertaining to a sample of resolved generic issues.
NRC actions and status: No specific actions have been identified. Completion expected in December 2004.

Recommendation: Review the range of baseline inspections and plant assessment processes to determine their sufficiency to identify and dispose of problems like those at Davis-Besse.
NRC actions and status: No specific actions have been identified. Completion expected in December 2004.

Recommendation: Identify alternative mechanisms to independently assess licensee plant performance for self-assessing NRC oversight processes and determine the feasibility of such mechanisms.
NRC actions and status: No specific actions have been identified. Completion expected in December 2004.

Recommendation: Establish measurements for resident inspector staffing levels and requirements, including standards for satisfying minimum staffing levels.
NRC actions and status: Develop standardized staffing measures and implementation details. Metrics were developed in December 2003. Completion expected in December 2004.

Recommendation: Structure and focus inspections to assess licensee employee concerns and a "safety conscious work environment."
NRC actions and status: No specific actions have been identified. Completion expected in December 2004.

Recommendations due to be completed in calendar year 2005

Recommendation: Develop inspection guidance and criteria for addressing licensee response to increasing leakage levels and/or adverse trends in unidentified reactor coolant system leakage.
NRC actions and status: Develop recommendations for guidance with action levels to trigger greater NRC interaction with licensees in response to increased leakage. Completion expected in January 2005.

Recommendation: Reassess the basis for the cancellation, in 2001, of certain inspection procedures (i.e., boric acid control programs and operational experience feedback) to assess whether these procedures are still applicable.
NRC actions and status: Review revised procedures and reactivate as necessary. Completion expected in March 2005.

Recommendation: Assess requirements for licensee procedures to respond to plant alarms for leakage to determine whether the requirements are sufficient to identify reactor coolant pressure boundary leakage.
NRC actions and status: Review and assess the adequacy of requirements and develop recommendations to (1) improve procedures to identify leakage from the boundary, (2) establish consistent technical specifications for leakage, and (3) use enhanced leakage detection systems. Completion expected in March 2005.

Recommendation: Determine whether licensees should install enhanced systems to detect leakage from the reactor coolant system.

Recommendation: Inspect the adequacy of licensees' programs to control boric acid corrosion, including the effectiveness of implementation.
NRC actions and status: Develop guidance to assess the adequacy of corrosion control programs, including implementation and effectiveness, and evaluate the status of this effort after the first year of inspections. Guidance expected to be developed by March 2004. Follow-up scheduled for completion in March 2005.
Recommendation: Continue ongoing efforts to review and improve the usefulness of barrier integrity performance indicators and evaluate the use of primary system leakage that licensees have identified but not yet corrected as a potential indicator.
NRC actions and status: Develop and implement improved performance indicators based on current requirements and measurements. Explore the use of additional performance indicators to track the number, duration, and rate of system leakage. Determine the feasibility of establishing a risk-informed performance indicator for barrier integrity. Completion expected in December 2005.

Recommendations whose completion dates have yet to be determined

Recommendation: Encourage the American Society of Mechanical Engineers to revise inspection requirements for nickel-based alloy nozzles. Encourage changes to requirements for nonvisual, nondestructive inspections of vessel head penetration nozzles. Alternatively, revise NRC regulations to address the nature and scope of these inspections.
NRC actions and status: Monitor and provide input to industry efforts to develop revised inspection requirements. Participate in American Society of Mechanical Engineers meetings and communicate with appropriate stakeholders. Decide whether to endorse the revised American Society of Mechanical Engineers code requirements. These actions parallel a larger NRC rulemaking effort. Completion date yet to be determined.

Recommendation: Revise processes to require short- and long-term verification of licensee actions to respond to significant NRC generic communications before closing out issues.
NRC actions and status: Target date to be set upon completion of a review of NRC's generic communications program. Completion date yet to be determined.

Recommendation: Determine whether licensee reactor vessel head inspection summary reports should be submitted to NRC and, if so, revise submission requirements and report disposition guidance, as appropriate.
NRC actions and status: Will be included as part of revised American Society of Mechanical Engineers requirements for inspection of reactor vessel heads and vessel head penetration nozzles. Completion date yet to be determined.

Recommendation: Evaluate the adequacy of methods for analyzing the risk of passive component degradation and integrate these methods and risks into NRC's decision-making processes.
NRC actions and status: No specific actions have been identified. Completion date yet to be determined.

Recommendation: Review pressurized water reactor technical specifications to identify plants that have nonstandard reactor coolant pressure boundary leakage requirements and change specifications to make them consistent among all plants.
NRC actions and status: Assessed plants for nonstandard technical specifications; completed in July 2003. Change leakage detection specifications in coordination with other changes in leakage detection requirements. Completion date yet to be determined.

Recommendation: Improve requirements for unidentified leakage in the reactor coolant system to ensure they are sufficient to (1) discriminate between unidentified leaks from the coolant system and leaks from the reactor coolant pressure boundary and (2) ensure that plants do not operate with pressure boundary leakage.
NRC actions and status: Issue regulations implementing the improved requirements when these requirements are determined. Completion date yet to be determined.

Recommendation: NRC should review a sample of plant assessments conducted between 1998 and 2000 to determine if any identified plant safety issues have not been adequately assessed.
NRC actions and status: No specific actions have been identified. Completion expected in March 2004.

Recommendations rejected by NRC management

Recommendation: Review industry approaches licensees use to consider economic factors for inspection and repair and consider this information in formulating future positions on the performance of nonvisual inspections of vessel head penetration nozzles.
NRC actions and status: Recommendation rejected by NRC management. No completion date.
Recommendation: Revise the criteria for review of industry topical reports to allow for NRC staff review of safety-significant reports that have generic implications but have not been formally submitted for NRC review in accordance with the existing criteria.
NRC actions and status: Recommendation rejected by NRC management. No completion date.

Comments from the Nuclear Regulatory Commission

The following are GAO's comments on the Nuclear Regulatory Commission's letter dated May 5, 2004.

GAO Comments

1. We agree with NRC that 10 C.F.R. § 50.9 requires that information provided to NRC by a licensee be complete and accurate in all material respects, and we have added this information to the report. NRC also states that in carrying out its oversight responsibilities, NRC must "rely heavily" on licensees providing accurate information. However, we believe that NRC's oversight program should not place undue reliance on applicants providing complete and accurate information. NRC also recognizes that it cannot rely solely on information from licensees, as evidenced by its inspection program and its process for determining the significance of licensee violations. Under this process, NRC considers whether there are any willful aspects associated with the violation—including the deliberate intent to violate a license requirement or regulation or to falsify information. We believe that NRC should implement management controls, including inspection and enforcement, to verify that licensee-submitted information considered important for ensuring safety is complete and accurate, as the regulation requires.
In this regard, as stated in NRC’s enforcement policy guidance, NRC is authorized to conduct inspections and investigations (Atomic Energy Act § 161); revoke licenses for, among other things, a licensee’s making material false statements or failing to build or operate a facility in accordance with the terms of the license (Atomic Energy Act § 186); and impose civil penalties for a licensee’s knowing failure to provide certain safety information to NRC (Energy Reorganization Act § 206). With regard to the draft report conveying the expectation that NRC should have known about the thick layer of boron on the reactor vessel head, we note in the draft report that since at least 1998, NRC was aware that (1) FirstEnergy’s boric acid corrosion control program was inadequate, (2) radiation monitors within the containment area were continuously being clogged by boric acid deposits, (3) the containment air cooling system had to be cleaned repeatedly because of boric acid buildup, (4) corrosion was occurring within containment as evidenced by rust particles being found, and (5) the unidentified leakage rate had increased above the level that historically had been found at the plant. NRC was also aware of the repeated but ineffective attempts by FirstEnergy to correct many of these recurring problems—evidence that the licensee’s programs to identify and correct problems were not effective. Given these indications at Davis-Besse, NRC could have taken more aggressive follow-up action to determine the underlying causes. For example, NRC could have taken action during the fuel outage in 1998, the shutdown to repair valves in mid-1999, or the fuel outage in 2000 to ensure that staff with sufficient knowledge appropriately investigated the types of conditions that could cause these indications, or followed up to ensure that FirstEnergy had fully investigated and successfully resolved the cause of the indications. 2. 
With respect to the responsibility of the licensee to provide complete and accurate information, see comment 1. As to the Davis-Besse lessons-learned task force finding, we agree that some information provided by FirstEnergy in response to Bulletin 2001-01 may have been inconsistent with some information subsequently identified by NRC’s lessons-learned task force, and that had some of this information been known in the fall of 2001, the vessel head leakage and degradation may have been identified sooner than March 2002. This information included (1) the boric acid accumulations found on the vessel head by FirstEnergy in 1998 and 2000, (2) FirstEnergy’s limited ability to visually inspect the vessel head, (3) FirstEnergy’s boric acid corrosion control procedures relative to the vessel head, (4) FirstEnergy’s program to address the corrosive effects of small amounts of reactor coolant leakage, (5) previous nozzle inspection results, (6) the bases for FirstEnergy’s conclusion that another source of leakage—control rod drive mechanism flanges—was the source of boric acid deposits on the vessel head that obscured multiple nozzles, and (7) photographs of vessel head penetration nozzles. However, various NRC officials knew some of this information, other information should have been known by NRC, and the remaining information could have been obtained had NRC requested it from FirstEnergy. For example, according to the senior resident inspector, he reviewed every Davis-Besse condition report on a daily basis to determine whether the licensee properly categorized the safety significance of the conditions. Vessel head conditions found by FirstEnergy in 1998 and 2000 were noted in such condition reports or in potential-condition-adverse-to-quality reports. 
According to a FirstEnergy official, photographs of the pressure vessel head nozzles were specifically provided to NRC's resident inspector, who, although he did not specifically recall seeing the photographs, stated that he had no reason to doubt the FirstEnergy official's statement. NRC had been aware, in 1999, of limitations in FirstEnergy's boric acid corrosion control program and, while it cited FirstEnergy for its failure to adequately implement the program, NRC officials did not follow up to determine if the program had improved. Lastly, while NRC questioned the information provided by FirstEnergy in its submissions to NRC in response to Bulletin 2001-01 (regarding vessel head penetration nozzle inspections), NRC staff did not independently review and assess information pertaining to the results of past reactor pressure vessel head inspections and vessel head penetration nozzle inspections. Similarly, NRC did not independently assess the information concerning the extent and nature of the boric acid accumulations found on the vessel head by the licensee during past inspections. "The NRC staff estimated that, giving credit only to the inspection performed in 1996, the probability of a nozzle ejection during the period of operation from December 31, 2001, to February 16, 2002, was in the range of 2E-3 and was an increase in the overall [loss of coolant accident] probability for the plant. The increase in core damage probability and large early release probability were estimated as approximately 5E-6 and 5E-08, respectively." The probability of core damage—5E-6—equates to a frequency of 5x10-5 per year. As we note in the report, according to NRC's regulatory guide 1.174, this frequency would be in the highest risk zone and NRC would generally not approve the requested change. On several occasions, we met with the NRC staff that developed the risk estimate in an attempt to understand how it was calculated.
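The arithmetic behind this dispute, including the "annual average" variant discussed in comment 15, can be sketched with simple calculations. This is an illustrative reconstruction only: the roughly 7-week window (December 31, 2001, to February 16, 2002) is an assumption about the period NRC used, and the region thresholds shown are the published regulatory guide 1.174 acceptance guidelines for a change in core damage frequency.

```python
# Illustrative sketch of the probability-to-frequency arithmetic discussed
# above. The ~7-week window is an assumption; figures are order-of-magnitude.

WINDOW_DAYS = 49                     # assumed: Dec 31, 2001 - Feb 16, 2002
window_years = WINDOW_DAYS / 365.0

p_core_damage = 5e-6                 # probability over the window (NRC estimate)

# Converting a period probability to an annual frequency divides by the
# period length, as GAO's consultants did; this yields on the order of
# 4e-5 to 5e-5 per year.
freq = p_core_damage / window_years

# Regulatory guide 1.174 acceptance guidelines for a change in core damage
# frequency (per year): above 1e-5 falls in the region where a requested
# change would generally not be approved.
def rg_1174_region(delta_cdf_per_year):
    if delta_cdf_per_year < 1e-6:
        return "very small change (generally acceptable)"
    if delta_cdf_per_year < 1e-5:
        return "small change (acceptable if total risk is low)"
    return "change generally not approved (highest risk region)"

print(f"{freq:.1e}/yr -> {rg_1174_region(freq)}")

# NRC's "annual average" instead spread the same probability over a full
# year, assuming zero risk outside the window, which shrinks the figure
# to 5e-6/yr and moves it into an acceptable region.
annual_average = p_core_damage / 1.0
print(f"{annual_average:.1e}/yr -> {rg_1174_region(annual_average)}")
```

The contrast illustrates the consultants' point in comment 15: averaging a short high-risk interval over a long benign period can make almost any rate look acceptable.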
We obtained from NRC staff the risk estimate information provided to senior management in late November 2001, as well as several explanations of how the staff developed its calculations. We were provided with no evidence that NRC estimated the frequency of core damage as being 5x10-6 per year until February 2004, after we and our consultants had challenged NRC's estimate as being in the highest risk zone under NRC's regulatory guide 1.174. Furthermore, several NRC staff involved in deciding whether to issue the order to shut down Davis-Besse, or to allow it to continue operating until February 16, 2002, stated that the risk estimate they used was relatively high. 4. We agree that existing regulations provide a spectrum of conditions under which a plant shutdown could occur and that could be interpreted as covering the vast majority of situations. However, we continue to believe that NRC lacks sufficient guidance for making plant shutdown decisions. We disagree on two grounds: First, the decision-making guidance used by NRC in deciding whether to shut down Davis-Besse was guidance for approving license change requests. This guidance provides general direction on how to make risk-informed decisions when licensees request license changes. It does not address important aspects of decision-making involved in deciding whether to shut down a plant. It also does not provide direction on how NRC should weigh deterministic factors in relation to probabilistic factors in making shutdown decisions. Second, while NRC views the flexibility afforded by its existing array of guidance as a strength, we are concerned that, even on the basis of the same information or circumstances, staff can arrive at very different decisions. Without more specific guidance, NRC will continue to lack accountability and the degree of credibility needed to convince the industry and the public that its shutdown decisions are sufficiently sound and reasoned for protecting public health and safety. 5.
We are aware that the commissioners have specifically decided not to conduct direct evaluations or inspections of safety culture. We agree that as regulators, NRC is not charged with managing licensees' facilities, but disagree that any direct NRC involvement with safety culture crosses over to a management function. Safety culture is an embodiment of corporate beliefs and perceptions that affect management strategies, goals, and philosophies. These, in turn, impact licensee programs and processes and employee behaviors that have safety outcomes. We believe that NRC should not assess corporate beliefs and perceptions or management strategies, goals, or philosophies. Rather, we believe that NRC has a responsibility to assess licensee programs and processes, as well as employee behaviors. We cite several aspects of safety culture in the report as examples of areas that NRC can assess without performing "management functions." The International Atomic Energy Agency has extensive guidance on assessing additional aspects of licensee performance and indicators of safety culture. Such assessments can provide early indications of declining safety culture before negative safety outcomes occur, such as those at Davis-Besse. We also agree that NRC has indirect means by which it attempts to assess safety culture. For example, NRC's problem identification and resolution inspection procedure's stated objective is to provide an early warning of potential performance issues and insight into whether licensees have established safety-conscious work environments. However, we do not believe that the implementation of the inspection procedure has been demonstrated to be effective in meeting its stated objectives. The inspection procedure directs inspectors to screen and analyze trends in all reported power plant issues.
In doing so, the procedure directs that inspectors annually review 3 to 6 issues out of potentially thousands of issues that can arise and that are related to various structures, systems, and components necessary for the safe operation of the plant. This requires that inspectors judgmentally sample 3 to 6 issues on which they will focus their inspection resources. While we do not necessarily question inspector judgment when sampling for these 3 to 6 issues, NRC inspectors stated that due to the large number of issues that they can sample from, they try to focus on those issues that they believe have the most relevance for safety. Thus, if an issue is not yet perceived as being important to safety, it is less likely to be selected for follow-up. Further, even if an issue were selected for follow-up and this indicated that the licensee did not properly identify and resolve underlying problems that contributed to the issue, according to NRC officials, it is highly unlikely that this one issue would rise to a high enough level of significance for it to be noted under NRC's Reactor Oversight Process. Additionally, the procedure depends on the inspector being aware of, and having the capability to identify, issues or trends in the area of safety culture. According to NRC officials, inspectors are not trained in what to look for when assessing licensee safety culture because they are, by and large, nuclear engineers. While they may have an intuition that something is wrong, they may not know how to assess it in terms of safety culture. Additional specific examples NRC cites for indirectly assessing a selected number of safety culture aspects have the following limitations: NRC's inspection procedure for assessing licensees' employee concerns program is not frequently used.
According to NRC Region III officials, approval to conduct such an inspection must be given by the regional administrator, and the justification for the inspection has to be based on a very high level of evidence that a problem exists. Because of this, these officials said that the inspection procedure has only been implemented twice in Region III. NRC's allegation program provides a way for individuals working at NRC-regulated plants and the public to provide safety and regulatory concerns directly to NRC. It is a reactive program by nature because it is dependent upon licensees' employees feeling free and able to come forward to NRC with information about potential licensee misconduct. While NRC follows up on those plants that have a much higher number of allegations than other plants to determine what actions licensees are taking to address any trends in the nature of the allegations, the number of allegations may not always provide an indication of a poor safety culture and, in fact, may indicate the reverse. For example, the number of allegations at Davis-Besse prior to the discovery of the cavity in the reactor head in March 2002 was relatively small. Between 1997 and 2001, NRC received 10 allegations from individuals at the plant. In contrast, NRC received an average of 31 allegations per plant over the same 5-year period from individuals at other plants. NRC's lessons-learned reviews, such as the one conducted for Davis-Besse, are generally conducted when an incident having potentially serious safety consequences has already occurred. With respect to NRC's enforcement of employee protection regulations, NRC, under its current enforcement policy, would normally only take enforcement action when violations are of very significant or significant regulatory concern. This regulatory concern pertains to NRC's primary responsibility for ensuring safety and safeguards and protecting the environment.
Examples of such violations would include a system designed to prevent a serious safety incident failing to work when needed, a licensed operator being inebriated while at the controls of a nuclear reactor, and the failure to obtain prior NRC approval for a license change that has implications for safety. If violations of employee protection regulations do not pose very significant or significant safety, safeguards, or environmental concerns, NRC may consider such violations minor. In such cases, NRC would not normally document such violations in inspection reports or records and would not take enforcement action. NRC's Reactor Oversight Process, instituted in April 2000, focuses on seven specific "cornerstones" that support the safety of plant operations to ensure reactor safety, radiation safety, and security. These cornerstones are: (1) the occurrence of operations and events that could lead to a possible accident if safety systems did not work, (2) the ability of safety systems to function as intended, (3) the integrity of the three safety barriers, (4) the effectiveness of emergency preparedness, (5) the effectiveness of occupational radiation safety, (6) the ability to protect the public from radioactive releases, and (7) the ability to physically protect the plant. NRC's process also includes three elements that cut across these seven cornerstones: (1) human performance, (2) a licensee's safety-conscious work environment, and (3) problem identification and resolution. NRC assumes that problems in any of these three crosscutting areas will be evidenced in one or more of the seven cornerstones in advance of any serious compromise in the safety of a plant. However, as evidenced by the Davis-Besse incident, this assumption has not proved to be true.
NRC also cites lessons-learned task force recommendations to improve its ability to detect problems in licensees' safety culture as a means of achieving our recommendation to directly assess licensee safety culture. These lessons-learned task force recommendations include (1) developing inspection guidance to assess the effect that a licensee's fuel outage shutdown schedule has on the scope of work conducted during a shutdown; (2) revising inspection guidance to provide for assessing the safety implications of long-standing, unresolved problems; corrective actions being phased in over the course of several years or refueling outages; and deferred plant modifications; (3) revising the problem identification and resolution inspection approach and guidance; and (4) reviewing the range of NRC's inspections and assessment processes and other NRC programs to determine whether they are sufficient to identify and dispose of the types of problems experienced at Davis-Besse. While we commend these recommendations, we do not believe that revising such guidance will necessarily alert NRC inspectors to early declines in licensee safety culture before they result in negative safety outcomes. Further, because of the nature of NRC's process for determining the relative safety significance of violations under NRC's new Reactor Oversight Process, we do not believe that any indications of such declines will result in a cited violation. 6. We have revised the report to reflect that boron in the form of boric acid crystals is dissolved in the cooling water. (See p. 13.) 7. On page 41 of the report, we recognize that NRC also relied on information provided by FirstEnergy regarding the condition of the vessel head. For example, in developing its risk estimate, NRC credited FirstEnergy with a vessel head inspection conducted in 1996.
However, NRC decided that the information provided by FirstEnergy documenting vessel head inspections in 1998 and 2000 was of such poor quality that it did not credit FirstEnergy with having conducted them. As a result, NRC's risk estimate was higher than it would have been had these inspections been given credit. 8. The statement made by the NRC regional branch chief was taken directly from NRC's Office of the Inspector General report on NRC's oversight of Davis-Besse during the April 2000 refueling outage. 9. We agree that up until the Davis-Besse event, NRC had not concluded that boric acid corrosion was a high priority issue. We clarified the text of the report to reflect this comment. (See p. 25.) 10. We agree that plant operators in France decided to replace their vessel heads in lieu of performing the extensive inspections instituted by the French regulatory authority. The report has been revised to add these details. (See p. 31.) 11. We agree that caked-on boron, in combination with leakage, could accelerate corrosion rates under certain conditions. However, even without caked-on boron, corrosion rates could be quite high. Westinghouse's 1987 report on the corrosive effects of boric acid leakage concluded that the general corrosion rate of carbon steel can be unacceptably high under conditions that can prevail when primary coolant leaks onto surfaces and concentrates at the temperatures that are found on reactor surfaces. In one series of tests that it performed, boric acid solutions corroded carbon steel at a rate of about 0.4 inches per month, or about 4.8 inches a year. This was irrespective of any caked-on boron.
In 1987, as a result of that report and extensive boric acid corrosion found at two other nuclear reactors that year—Salem unit 2 and San Onofre unit 2—NRC concluded that a review of existing inspection programs may be warranted to ensure that adequate monitoring procedures are in place to detect boric acid leakage and corrosion before it can result in significant degradation of the reactor coolant pressure boundary. However, NRC did not take any additional action. 12. We agree that NRC has requirements and processes that provide a number of circumstances in which a plant shutdown would or could be required. We also recognize that there were no legal objections to the draft enforcement order to shut down the plant, and that the basis for not issuing the order was NRC's belief that the plant did not pose an unacceptable risk to public health and safety. The statement in our report that NRC is referring to is discussing one of these circumstances—the licensee's failure to meet NRC's technical specification—and whether NRC believed that it had enough proof that the technical specification was not being met. The statement is not discussing the basis for NRC issuing an enforcement order. We revised the report to clarify this point. (See p. 34.) 13. The basis for our statement that NRC staff concluded that the first safety principle was probably not met was its November 29, 2001, briefing to NRC's Executive Director's Office and its November 30, 2001, briefing to the NRC commissioners' technical assistants. These briefings, the bases for which are documented in briefing slides, took place shortly before NRC formally notified FirstEnergy on December 4, 2001, that it would accept its compromise shutdown date. 14. We are referring to the same document that NRC is referring to—NRC's December 3, 2002, response to FirstEnergy (NRC's ADAMS accession number ML023300539). The response consists of a 2-page transmittal letter and a 7.3-page enclosure.
The 7.3-page enclosure is 3 pages of background and 4.3 pages of the agency's assessment. The assessment includes statements that the safety principles were met but does not provide an explanation of how NRC considered or weighed deterministic and probabilistic information in concluding that each of the safety principles was met. For example, NRC concluded that the likelihood of a loss-of-coolant accident was acceptably small because of the (1) staff's preliminary technical assessment for control rod drive mechanism cracking, (2) evidence of cracking found at other plants similar to Davis-Besse, (3) analytical work performed by NRC's research staff in support of the effort, and (4) information provided by FirstEnergy regarding past inspections at Davis-Besse. However, the assessment does not explain how these four pieces of information demonstrated if and how each of the safety principles was met. The assessment also states that NRC examined the five safety principles, the fifth of which is the ability to monitor the effects of a risk-informed decision. The assessment is silent on whether this principle was met. However, in NRC's November 29, 2001, briefing to NRC's Executive Director's Office and in its November 30, 2001, briefing to the NRC commissioners' technical assistants, NRC concluded that this safety principle was not met. As noted above, NRC formally notified FirstEnergy on December 4, 2001, that it would accept FirstEnergy's February 16, 2002, shutdown date. 15. See comment 3. We do not agree that the report statements mischaracterize the facts. Rather, we are concerned that NRC is misusing basic quantitative mathematics.
In addition, with regard to NRC’s concept of an annual average change in the frequency of core damage, NRC stated that the agency averaged the frequency of core damage that would exist for the 7-week period between December 31, 2001, and February 16, 2002, over the entire 1-year period, using the assumption that the frequency of core damage would be zero for the remainder of the year—February 17, 2002, to December 31, 2002. According to our consultants, this calculation artificially reduced NRC’s risk estimate to a level that is acceptable under NRC’s guidance. By this logic, our consultants stated, risks can always be reduced by spreading them over time; by assuming another 10 years of plant operation (or even longer), NRC could find that its calculated “risks” are completely negligible. They further stated that NRC’s approach is akin to arguing that an individual, who drives 100 miles per hour 10 percent of the time, with his car otherwise garaged, should not be cited because his time-average speed is only 10 miles per hour. Further, our consultants concluded that the “annual-average” core damage frequency approach was also clearly unnecessary, since one need only convert a core damage frequency to a core damage probability to handle part-year cases like the Davis-Besse case. Lastly, we find no basis for the calculation in any NRC guidance. According to our consultants, this new interpretation of NRC’s guidance is at best unusual and certainly is inconsistent with NRC’s guidelines regarding the use of an incremental core damage frequency. This interpretation also reinforces our consultants’ impression that there was in November 2001, and possibly still is today, some confusion among the NRC staff regarding basic quantitative metrics that should be considered in evaluating regulatory and safety issues. As noted in comment 3, we found no evidence of this calculation prior to February 2004. 16. 
While we agree that vessel head corrosion as extensive as that later found at Davis-Besse was not anticipated, NRC had known that leakage of the primary coolant from a through-wall crack could cause boric acid corrosion of the vessel head, as evidenced by the Westinghouse work cited above. Regardless of information provided to NRC by individual licensees, such as FirstEnergy, NRC’s model should account for known risks, including the potential for corrosion. 17. We agree that NRC was aware of control rod drive mechanism nozzle cracking at French nuclear power plants. NRC provided us additional information consisting of a December 15, 1994, internal memo, in which NRC concluded that primary coolant leakage from a through-wall crack could cause boric acid corrosion of the vessel head. However, because some analyses indicated that it would take at least 6 to 9 years before any corrosion would challenge the structural integrity of the head, NRC concluded that cracking was not a short-term safety issue. We revised the report to include this additional information. (See p. 40.) 18. See comment 15. 19. We agree that, while not directly relevant to the Davis-Besse situation, NRC uses regulatory guide 1.177 to make decisions on whether certain equipment can be inoperable while a nuclear reactor is operating, which can pose very high instantaneous risks for very short periods of time. However, we include the reference to this particular guidance in the report because it was cited by an NRC official involved in the Davis-Besse decision-making process as another piece of guidance used in judging whether the risk that Davis-Besse posed was acceptable. 20. 
While regulatory guide 1.174 comprises 25 pages of guidance on how to use risk in making decisions on whether to allow license changes, it does not lay out how NRC staff are to use quantitative estimates of risk or probabilistic factors, or how robust these estimates must be in order to be considered along with more deterministic factors. The regulatory guide, which was first issued in mid-1998, had been in effect for only about 3.5 years when NRC staff were tasked with making their decision on Davis-Besse. According to the Deputy Executive Director of Nuclear Reactor Programs at the time the decision was being made, the agency was trying to bring the staff through the risk-informed decision-making process because Davis-Besse was a learning tool. He further stated that it was really the first time the agency had used the risk-informed decision-making process on operational decisions as opposed to programmatic decisions for licensing. At the time the decision was made, and currently, NRC has no guidance or criteria for use in assessing the quality of risk estimates or clear guidance or criteria for how risk estimates are to be weighed against other risk factors. 21. The December 3, 2002, safety assessment or evaluation did state that the estimated increase in core damage frequency was consistent with NRC’s regulatory guidelines. However, as noted in comment 3, we disagree with this conclusion. In addition, while we agree that NRC has staff with risk assessment disciplines, we found no reference to these staff in NRC’s safety evaluation. We also found nothing in the evaluation to support NRC’s statement that these staff gave more weight to deterministic factors in arriving at the agency’s decision. 
While we endorse NRC’s consideration of deterministic as well as probabilistic factors and the use of a risk-informed decision-making process, we continue to maintain that NRC needs clear guidance and criteria for the quality of risk estimates, standards of evidence, and how to apply deterministic as well as probabilistic factors in plant shutdown decisions. As the agency continues to incorporate a risk-informed process into much of its regulatory guidance and programs, such criteria will be increasingly important when making shutdown as well as other types of decisions regarding nuclear power plants. 22. The information that NRC provided us indicates that completion dates for 2 of the 22 high-priority recommendations have slipped. First, the completion date for encouraging the American Society of Mechanical Engineers to revise vessel head penetration nozzle inspection requirements or, alternatively, for revising NRC’s regulations for vessel head inspections has slipped from June 2004 to June 2006. Second, the completion date for assessing NRC’s requirements that licensees have procedures for responding to plant leakage alarms to determine whether the requirements are sufficient for identifying reactor coolant pressure boundary leakage has slipped from March 2004 to March 2005. 23. We agree with this comment and have revised the report to reflect this clarification. (See p. 49.) 24. Our estimate of at least an additional 200 hours of inspection per reactor per year is based on the following:

- NRC’s new requirement that its resident inspectors review all licensee corrective action items on a daily basis (approximately 30 minutes per day). Given that reactors are intended to operate continuously throughout the year, this results in about 3.5 hours per week for reviewing corrective action items, or about 182 hours per year. In addition, resident inspectors are now required to determine, on a semi-annual basis, whether such corrective action items reflect any trends in licensee performance (16 to 24 hours per year). The total increase for these new requirements is about 198 to 206 hours per reactor per year.
- A new NRC requirement that its resident inspectors validate that licensees comply with additional inspection commitments made in response to NRC’s 2002 generic bulletin regarding reactor pressure vessel head and vessel head penetration nozzles. This requirement results in an additional 15 to 50 hours per reactor per fuel outage.

25. Our draft report included a discussion that NRC management’s failure to recognize the scope or breadth of actions and resources necessary to fully implement task force recommendations could adversely affect how effective the actions may be. We made this statement based on NRC’s initial response to the Office of the Inspector General’s October 2003 report on Davis-Besse. That report concluded that ineffective communication within NRC’s Region III and between Region III and NRC headquarters contributed to the Davis-Besse incident. NRC, in its January 2004 response to the report, stated that, among other things, it had developed training on boric acid corrosion and revised its inspection program to require semi-annual trend reviews. In February 2004, the Office of the Inspector General criticized NRC for limiting the agency’s efforts in responding to its findings. Specifically, it stated that NRC did not address underlying and generic communication failures identified in the Office’s report. In response to the criticism, on April 19, 2004 (while our draft report was with NRC for review and comment), NRC provided the Office of the Inspector General with additional information to demonstrate that its actions to improve communication within the agency were broader than indicated in the agency’s January 2004 response. 
Based on NRC’s April 19, 2004, response and the Office’s agreement that NRC’s actions appropriately address its concerns about communication within the agency, we deleted this discussion from the report. 26. We recognize that the lessons-learned task force did not make a recommendation for improving the agency’s decision-making process because the task force coordinated with the Office of the Inspector General regarding the scope of their respective review activities and because the task force was primarily charged with determining why the vessel head degradation was not prevented. (See p. 55.) 27. We agree that NRC’s December 3, 2002, documentation of its decision was prepared in response to a finding by the Davis-Besse lessons-learned task force. We revised our report to incorporate this fact. (See p. 55.) 28. We agree that NRC’s lessons-learned task force conducted a preliminary review of reports from previous lessons-learned task forces and, as a result of that review, made a recommendation that the agency perform a more detailed effectiveness review of the actions taken in response to those reviews. We revised the report to reflect that NRC’s detailed review is currently underway. (See p. 55.)

GAO Contacts and Staff Acknowledgments

Staff Acknowledgments

In addition, Heather L. Barker, David L. Brack, William F. Fenzel, Michael L. Krafve, William J. Lanouette, Marcia Brouns McWreath, Judy K. Pagano, Keith A. Rhodes, and Carol Hernstadt Shulman made key contributions to this report.

Related GAO Products

Management Weaknesses Affect Nuclear Regulatory Commission Efforts to Address Safety Issues Common to Nuclear Power Plants. GAO-RCED-84-149. Washington, D.C.: September 19, 1984.
Probabilistic Risk Assessment: An Emerging Aid to Nuclear Power Plant Safety Regulation. GAO-RCED-85-11. Washington, D.C.: June 19, 1985.
The Nuclear Regulatory Commission Should Report on Progress in Implementing Lessons Learned from the Three Mile Island Accident. 
GAO-RCED-85-72. Washington, D.C.: July 19, 1985.
Nuclear Regulation: Oversight of Quality Assurance at Nuclear Power Plants Needs Improvement. GAO-RCED-86-41. Washington, D.C.: January 23, 1986.
Nuclear Regulation: Efforts to Ensure Nuclear Power Plant Safety Can Be Strengthened. GAO-RCED-87-141. Washington, D.C.: August 13, 1987.
Nuclear Regulation: NRC’s Restart Actions Appear Reasonable—but Criteria Needed. GAO-RCED-89-95. Washington, D.C.: May 4, 1989.
Nuclear Regulation: NRC’s Efforts to Ensure Effective Plant Maintenance Are Incomplete. GAO-RCED-91-36. Washington, D.C.: December 17, 1990.
Nuclear Regulation: NRC’s Relationship with the Institute of Nuclear Power Operations. GAO-RCED-91-122. Washington, D.C.: May 16, 1991.
Nuclear Regulation: Weaknesses in NRC’s Inspection Program at a South Texas Nuclear Power Plant. GAO-RCED-96-10. Washington, D.C.: October 3, 1995.
Nuclear Regulation: Preventing Problem Plants Requires More Effective NRC Action. GAO-RCED-97-145. Washington, D.C.: May 30, 1997.
Nuclear Regulatory Commission: Preventing Problem Plants Requires More Effective Action by NRC. GAO-T-RCED-98-252. Washington, D.C.: July 30, 1998.
Nuclear Regulatory Commission: Strategy Needed to Develop a Risk-Informed Safety Approach. GAO-T-RCED-99-71. Washington, D.C.: February 4, 1999.
Nuclear Regulation: Strategy Needed to Regulate Safety Using Information on Risk. GAO-RCED-99-95. Washington, D.C.: March 19, 1999.
Nuclear Regulation: Regulatory and Cultural Changes Challenge NRC. GAO/T-RCED-00-115. Washington, D.C.: March 9, 2000.
Major Management Challenges and Performance Risks at the Nuclear Regulatory Commission. GAO-01-259. Washington, D.C.: January 2001.
Nuclear Regulation: Progress Made in Emergency Preparedness at Indian Point 2, but Additional Improvements Needed. GAO-01-605. Washington, D.C.: July 30, 2001.
Nuclear Regulation: Challenges Confronting NRC in a Changing Regulatory Environment. GAO-01-707T. Washington, D.C.: May 8, 2001. 
Nuclear Regulatory Commission: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-760. Washington, D.C.: June 29, 2001.
Managing for Results: Efforts to Strengthen the Link between Resources and Results at the Nuclear Regulatory Commission. GAO-03-258. Washington, D.C.: December 10, 2002.
Nuclear Regulatory Commission: Oversight of Security at Commercial Nuclear Power Plants Needs to Be Strengthened. GAO-03-752. Washington, D.C.: September 4, 2003. 
In March 2002, the most serious safety issue confronting the nation's commercial nuclear power industry since Three Mile Island in 1979 was identified at the Davis-Besse plant in Ohio. After the Nuclear Regulatory Commission (NRC) allowed Davis-Besse to delay shutting down to inspect its reactor vessel for cracked tubing, the plant found that leakage from these tubes had caused extensive corrosion on the vessel head--a vital barrier preventing a radioactive release. GAO determined (1) why NRC did not identify and prevent the corrosion, (2) whether the process NRC used in deciding to delay the shutdown was credible, and (3) whether NRC is taking sufficient action in the wake of the incident to prevent similar problems from developing at other plants. NRC should have but did not identify or prevent the corrosion at Davis-Besse because its oversight did not generate accurate information on plant conditions. NRC inspectors were aware of indications of leaking tubes and corrosion; however, the inspectors did not recognize the indications' importance and did not fully communicate information about them. NRC also considered FirstEnergy--Davis-Besse's owner--a good performer, which resulted in fewer NRC inspections and questions about plant conditions. NRC was aware of the potential for cracked tubes and corrosion at plants like Davis-Besse but did not view them as an immediate concern. Thus, NRC did not modify its inspections to identify these conditions. NRC's process for deciding to allow Davis-Besse to delay its shutdown lacks credibility. Because NRC had no guidance specifically for making a decision on whether a plant should shut down, it used guidance for deciding whether a plant should be allowed to modify its operating license. NRC did not always follow this guidance and generally did not document how it applied the guidance. 
The risk estimate NRC used to help decide whether the plant should shut down was also flawed and underestimated the amount of risk that Davis-Besse posed. Further, even though underestimated, the estimate still exceeded risk levels generally accepted by the agency. NRC has taken several significant actions to help prevent reactor vessel corrosion from recurring at nuclear power plants. NRC has required more extensive vessel examinations and augmented inspector training. However, NRC has not yet completed all of its planned actions and, more importantly, has no plans to address three systemic weaknesses underscored by the incident. Specifically, NRC has proposed no actions to help it better (1) identify early indications of deteriorating safety conditions at plants, (2) decide whether to shut down a plant, or (3) monitor actions taken in response to incidents at plants. Both NRC and GAO had previously identified problems in NRC programs that contributed to the Davis-Besse incident, yet these problems persist.
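The dilution effect that GAO's consultants describe in comment 15 comes down to simple arithmetic: averaging a short-window core damage frequency over a longer horizon makes the number look smaller without changing the underlying risk. The following is a minimal illustrative sketch of that arithmetic; all numerical values are hypothetical, since the report does not give NRC's actual estimates.

```python
# Illustrative sketch of the "annual average" critique in comment 15.
# All numbers are hypothetical, not NRC's actual risk estimates.

WEEKS_PER_YEAR = 52

# Suppose the core damage frequency (CDF) while the plant operates in a
# degraded condition is 1e-3 events per year (hypothetical value).
cdf_while_degraded = 1e-3
window_years = 7 / WEEKS_PER_YEAR  # the 7-week operating window as a fraction of a year

# The part-year conversion the consultants say suffices: incremental core
# damage probability = frequency x exposure time.
icdp = cdf_while_degraded * window_years  # roughly 1.3e-4

# The "annual average" construction: assume zero risk for the rest of the
# year and average the degraded-condition frequency over the full year.
# The result reads as a per-year frequency far below the one that actually
# applied during the window.
annual_average_cdf = (cdf_while_degraded * window_years) / 1.0

# Spreading the same 7 weeks over 10 assumed years of operation shrinks
# the "average" by another factor of 10, with no change in actual risk.
ten_year_average_cdf = (cdf_while_degraded * window_years) / 10.0

# The consultants' speeding analogy: 100 mph for 10 percent of the time,
# car garaged otherwise, yields a "time-average speed" of only 10 mph.
time_average_speed = 100.0 * 0.10

assert ten_year_average_cdf < annual_average_cdf < cdf_while_degraded
```

Averaging over ever-longer horizons drives the apparent frequency toward zero, which is why the consultants argue that part-year cases should be handled by converting frequency to probability (the `icdp` line) rather than by averaging.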
Background

The FCS concept is designed to be part of the Army’s Future Force, which is intended to transform the Army into a more rapidly deployable and responsive force that differs substantially from the large division-centric structure of the past. The Army is reorganizing its current forces into modular brigade combat teams, each of which is expected to be highly survivable and the most lethal brigade-sized unit the Army has ever fielded. The Army expects FCS-equipped brigade combat teams to provide significant warfighting capabilities to DOD’s overall joint military operations. The Army is implementing its transformation plans at a time when current U.S. ground forces continue to play a critical role in the ongoing conflicts in Iraq and Afghanistan. The Army has instituted plans to spin out selected FCS technologies and systems to current Army forces throughout the program’s system development and demonstration phase. As we were preparing this report, the Army made a number of adjustments to its plans for the FCS program. The revised program will no longer include all 18 systems as originally planned. The FCS family of weapons is now expected to include 14 manned and unmanned ground vehicles, air vehicles, sensors, and munitions that will be linked by an advanced information network. The systems include eight new types of manned ground vehicles to replace current tanks, infantry carriers, and self-propelled howitzers; two classes of unmanned aerial vehicles; several unmanned ground vehicles; and an attack missile. Fundamentally, the FCS concept is to replace mass with superior information—allowing soldiers to see and hit the enemy first rather than to rely on heavy armor to withstand a hit. 
This solution attempts to address a mismatch that has posed a dilemma to the Army for decades: the Army’s heavy forces had the firepower needed to win but required extensive support and too much time to deploy, while its light forces could deploy rapidly but lacked firepower. If the Future Force becomes a reality, the Army would be better organized, staffed, equipped, and trained for prompt and sustained land combat, qualities intended to ensure that it would dominate over evolving, sophisticated threats. The Future Force is to be offensively oriented and will employ revolutionary concepts of operations, enabled by new technology. The Army envisions a new way of fighting that depends on networking the force, which involves linking people, platforms, weapons, and sensors seamlessly together in a system-of-systems. If successful, the FCS system-of-systems concept will integrate individual capabilities of weapons and platforms, thus facilitating interoperability and open system designs. This would represent significant improvement over the traditional approach of building superior individual weapons that must be retrofitted and netted together after the fact. This transformation, in terms of both operations and equipment, is under way with the full cooperation of the Army warfighter community. In fact, the development and acquisition of FCS is being accomplished using a uniquely collaborative relationship among the Army’s developers, the participating contractors, and the warfighter community. The Army has employed a management approach for FCS that centers on a lead systems integrator to provide significant management services to help the Army define and develop FCS and reach across traditional Army mission areas. Because of its partner-like relationship with the Army, the lead systems integrator’s responsibilities include requirements development, design, and selection of major system and subsystem subcontractors. 
The team of Boeing and Science Applications International Corporation is the lead systems integrator for the FCS system development and demonstration phase of acquisition, which is expected to extend until 2017. The FCS lead systems integrator acts on behalf of the Army to optimize the FCS capability, maximize competition, ensure interoperability, and maintain commonality in order to reduce life-cycle costs. Boeing also acts as an FCS supplier in that it is responsible for developing two important software subsystems. The Army advised us that it did not believe it had the resources or flexibility to use its traditional acquisition process to field a program as complex as FCS under the aggressive timeline established by the then-Army Chief of Staff. The Army will maintain oversight and final approval of the lead systems integrator’s subcontracting and competition plans. The FCS lead systems integrator originally operated under a contractual instrument called an “other transaction agreement.” In 2006, the Army completed the conversion of that instrument to a more typical contract based on the Federal Acquisition Regulation. As required by section 115 of the John Warner National Defense Authorization Act for Fiscal Year 2007, we are reviewing the contractual relationship between the Army and the lead systems integrator and will be reporting on that work separately.

Elements of a Business Case

We have frequently reported on the wisdom of using a solid, executable business case before committing resources to a new product development effort. In the case of DOD, a business case should be based on DOD acquisition policy and lessons learned from leading commercial firms and successful DOD programs. 
The business case in its simplest form is demonstrated evidence that (1) the warfighter’s needs are valid and that they can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources—that is, proven technologies, design knowledge, adequate funding, adequate time, and management capacity to deliver the product when it is needed. A program should not go forward into product development unless a sound business case can be made. If the business case measures up, the organization commits to the product development, including making the financial investment. At the heart of a business case is a knowledge-based approach to product development that is both a best practice among leading commercial firms and the approach preferred by DOD in its acquisition policies. For a program to deliver a successful product within available resources, managers should demonstrate high levels of knowledge before significant commitments are made. In essence, knowledge supplants risk over time. This building of knowledge can be described as three levels or points that should be attained over the course of a program. First, at program start, the customer’s needs should match the developer’s available resources—mature technologies, time, funding, and management capacity. An indication of this match is the demonstrated maturity of the technologies needed to meet customer needs. The ability of the government acquisition workforce to properly manage the effort should also be an important consideration at program start. Second, about midway through development, the product’s design should be stable and demonstrate that it is capable of meeting performance requirements. The critical design review is the vehicle for making this determination and generally signifies the point at which the program is ready to start building production-representative prototypes. 
Third, by the time of the production decision, the product must be shown able to be manufactured within cost, schedule, and quality targets and have demonstrated its reliability. It is also the point at which the design must demonstrate that it performs as expected through realistic system-level testing. A delay in attaining any one of these levels delays the points that follow. If the technologies needed to meet requirements are not mature, design and production maturity will be delayed. In successful commercial and defense programs that we have reviewed, managers were careful to develop technology separately from and ahead of the development of the product. For this reason, the first knowledge level is the most important for improving the chances of developing a weapon system within cost and schedule estimates. DOD’s acquisition policy has adopted the knowledge-based approach to acquisitions. DOD policy requires program managers to demonstrate knowledge about key aspects of a system at key points in the acquisition process. Program managers are also required to reduce integration risk and demonstrate product design prior to the design readiness review and to reduce manufacturing risk and demonstrate producibility prior to full-rate production. The FCS program is about one-third of the way into its scheduled product development. At this stage, the program should have attained knowledge point one, with a strategy for attaining knowledge points two and three. Accordingly, we analyzed the FCS business case first as it pertains to firming requirements and maturing technologies, which indicate progress against the first knowledge point. We then analyzed FCS’s strategy for attaining design and production maturity. Finally, we analyzed the costs and funding estimates made to execute the FCS business case. 
Agency and Congressional Actions Since Our Last Report

In our previous report on the FCS program, released in March 2006, we reported that the program entered the development phase in 2003 without reaching the level of knowledge it should have attained in the pre-development phase. The elements of a sound business case were not reasonably present, and we noted that the Army would continue building basic knowledge in areas such as requirements and technologies for several more years. We concluded that in order for the FCS program to be successful, an improved business case was needed. The Defense Acquisition Board met in May 2006 to review the FCS program. That review approved the Army’s approach to spin out certain FCS technologies to current Army forces in 2008 and directed the Army to continue with yearly in-process reviews and a Defense Acquisition Board meeting in the late 2008 timeframe. Performance expectations were also established for the review. During the meeting, it was noted that significant cost and schedule risk remains for the program and that reductions in scope and more flexibility in schedule are needed to stay within current funding constraints. Also in 2006, Congress mandated that the Secretary of Defense conduct a milestone review for the FCS program, following the preliminary design review scheduled for early 2009. Congress stated that the review should include an assessment of whether (1) the needs are valid and can be best met with the FCS concept, (2) the FCS program can be developed and produced within existing resources, and (3) the program should continue as currently structured, be restructured, or be terminated. 
The Congress required the Secretary of Defense to review specific aspects of the program, including the maturity of critical technologies, program risks, demonstrations of the FCS concept and software, and a cost estimate and affordability assessment, and to submit a report of the findings and conclusions of the review to Congress. Additionally, Congress has required the Secretary of Defense to provide an independent cost estimate that will encompass costs related to the FCS program and a report on the estimate. The Institute for Defense Analyses is expected to deliver this analysis to Congress by April 2007. Finally, in response to concerns over funding shortfalls and other resource issues for fiscal years 2008 to 2013, the Army has recently made a number of changes to its plans for the FCS program. Although complete details are not yet available, the Army plans to reduce the number of individual systems from 18 to 14, including eliminating 2 unmanned aerial vehicles; slow the rate of FCS production from 1.5 to 1 brigade combat team per year; change the total quantities to be bought for several systems; and reduce the number of planned spin-outs from four to three. Full details of the Army’s plans were not available at the time of this report. Based on what is known, program officials expect that the production period for the 15 brigade combat teams would be extended from 2025 to 2030. The initial operating capability date would also be delayed by 5 months to the third quarter of fiscal year 2015.

Despite Progress, FCS Requirements Must Still Prove Technically Feasible and Affordable

The Army has made considerable progress in defining system-of-systems level requirements and allocating those requirements to the individual FCS systems. This progress has necessitated making significant trade-offs to reconcile requirements with technical feasibility. 
A key example of this has been to allow a significant increase in manned ground vehicle weight to meet survivability requirements, which in turn has forced trade-offs in transportability requirements. The feasibility of FCS requirements still depends on a number of key assumptions about immature technologies, costs, and other performance characteristics like the reliability of the network and other systems. As current assumptions in these areas become known, more trade-offs are likely. At this point, the Army has identified about 70 high technical risks that need to be resolved to assure the technical feasibility of requirements.

Army Has Made Progress in Defining System-Level Requirements

The Army has defined 552 warfighter requirements for the FCS brigade combat team that are tied to seven key performance parameters: network-ready, networked battle command, networked lethality, transportability, sustainability/reliability, training, and survivability. Collectively, the Army has stated that the FCS-equipped brigade combat teams must be as good as or better than current Army forces in terms of lethality, responsiveness, sustainability, and survivability. In August 2005, the Army and the lead systems integrator translated the warfighter requirements into 11,500 more specific system-of-systems level requirements, established the functional baseline for the program, and allocated requirements to individual FCS systems. Since then, the contractors have clarified their design concepts and provided feedback on the technical feasibility and affordability of the requirements. In an August 2006 review, the Army and its lead systems integrator reduced the number of warfighter requirements to 544, but increased the system-of-systems requirements to 11,697. Of the system-of-systems requirements, 289 have “to be determined” items and 819 have open issues to be resolved. 
At this review, the FCS requirements were translated further down to the individual system level, totaling about 90,000. The system-level requirements provide the specificity needed for the contractors to fully develop detailed designs for their individual systems. While the stages of translating requirements for FCS are typical for weapon systems, the enormous volume suggests the complex challenge that a networked system-of-systems like FCS presents. Figure 2 illustrates how the FCS requirements are translated from the warfighter to the individual systems. Leading up to the review, the lead systems integrator and the subcontractors identified over 10,000 “to-be-determined” items and issues to be resolved related to the flow-down of the system-of-systems requirements to the FCS system-level requirements. The “to-be-determined” items generally involve the need for the user community and the developers to come to an understanding on a way to better specify or quantify the requirement. A common issue to be resolved involves the need for compromise between the users and developers when the design solution may not be able to fully meet the initially allocated requirement. The Army and lead systems integrator plan to resolve the “to-be-determined” items and issues prior to the preliminary design review in early 2009. The Army and lead systems integrator are also developing a network requirements document that is intended to provide end-to-end network requirements in an understandable format to inform the system-level requirements. The number of network requirements in this document has not yet been determined. However, the Army and lead systems integrator have identified about 2,000 “to-be-determined” items and issues to be resolved in this area that need to be addressed and clarified. The Army and lead systems integrator expect to complete this work by the time of the preliminary design review.
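The scale of this flow-down can be illustrated with the counts reported at each level. A minimal sketch in Python (the counts come from this report; the fan-out factors are simply derived from them and are not program artifacts):

```python
# Counts of FCS requirements at each level of the flow-down, as reported
# after the August 2006 review (warfighter -> system-of-systems -> system).
levels = {
    "warfighter": 544,
    "system_of_systems": 11_697,
    "system_level": 90_000,  # "about 90,000" per the report
}

# Each warfighter requirement fans out into roughly 21 system-of-systems
# requirements, and each of those into roughly 8 system-level requirements.
fanout_sos = levels["system_of_systems"] / levels["warfighter"]
fanout_system = levels["system_level"] / levels["system_of_systems"]

print(f"system-of-systems per warfighter requirement: ~{fanout_sos:.1f}")
print(f"system-level per system-of-systems requirement: ~{fanout_system:.1f}")
```

The arithmetic quantifies why a change to any one warfighter requirement can ripple across thousands of lower-level requirements.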
Some Key Requirements and Design Trade-offs Have Been Made The Army and its subcontractors have already made some trade-offs as they continue to refine their system design concepts and the FCS system-level requirements. One key trade-off came in the area of the projected weight of the manned ground vehicles and their transportability by aircraft. Originally, the manned ground vehicles were to weigh less than 20 tons so they could be carried on the C-130 aircraft. These vehicles were to be lightly armored at 19 tons, with add-on armor bringing the total vehicle weight up to about 24 tons. However, the Army and its contractor team found that this design did not provide sufficient ballistic protection. Currently, the vehicle designs with improved ballistic protection are estimated to weigh between 27 and 29 tons. At this weight, it is practically impossible to transport the vehicles on the C-130s, and they are now being designed to be transported by the larger C-17 aircraft. Illustrative of the FCS design challenges, the added weight of the vehicles could have ripple effects on the designs of the engine, suspension, band track, and other subsystems. The Army still wants vehicles to be transportable by the C-130 when stripped of armor and other equipment, so C-130 cargo size and weight limits will still serve to constrain the design of the manned ground vehicles. Because these are primarily paper and simulated designs, the potential for future trade-offs is high. Another example involves the requirement that the manned ground vehicles be able to operate for several hours on battery power and without the engine running. Based on the analyses to date, it has been determined that current battery technologies would permit less than one hour of this “silent watch” capability.
The Army, lead systems integrator, and the FCS subcontractors are continuing their assessments, as is the user community, which is re-evaluating which internal manned ground vehicle subsystems may need to operate in these situations. With less demand for power, the batteries are expected to last somewhat longer. As that work concludes, the Army will be able to determine the specific level of silent watch capability it can expect for the manned ground vehicles and how best to change the operational requirements document. The Army plans to finalize this and other requirement changes and numerous clarifications by the time of the preliminary design review in early 2009. Technical Feasibility of System-Level Requirements Based on Numerous Assumptions The Army and lead systems integrator believe that most of the FCS system-level requirements are technically feasible and have decided that design work should proceed. However, as the design concepts and technologies mature, their actual performance does not necessarily match expectations, and trade-offs have to be made. To date, the Army has had to make a number of requirements and design changes that recognize the physical constraints of the designs and the limits of technology. Ideally, these trade-offs are made before a program begins. Because many technologies are not yet fully mature, significant trade-offs have been made and will continue to be necessary. The technical feasibility of FCS requirements still depends on a number of key assumptions about the performance of immature technologies, thus more trade-offs are likely as knowledge replaces assumptions. 
The challenge in making additional changes to requirements is at least two-fold: the first is assessing the potential ripple effect of changing a requirement for one system on the thousands of other system requirements; the second is assessing the cumulative effect of numerous system-level requirements changes on the overall characteristics of survivability, lethality, responsiveness, and supportability. Technical Feasibility Dependent on Addressing Some High Level Risks The Army has identified numerous known technical risks, about 70 of which are considered to be at a medium or high level. These involve the information network, characteristics like weight and reliability that cut across air and ground vehicles, and several system-specific risks. The Army is focusing management attention on these risks and has risk reduction plans in place. Nonetheless, the results of these technology development efforts will have continuing implications for design and requirements trade-offs. FCS survivability depends on the brigade-wide availability of network-based situational awareness plus the inherent survivability of the FCS platforms. There is hardly any aspect of FCS functionality that is not predicated on the network, and for many key functions, the network is essential. However, the FCS program manager has stated that the Army still has a lot yet to learn about how to successfully build such an advanced information network. Some of the medium- and high-level network risks include: End-to-end quality of service on mobile ad hoc networks. The probability is high that the FCS network will not be able to ensure that the information with the highest value is delivered to the recipients. Failure to support the warfighter in defining and implementing command intent for information management will result in substantially reduced force effectiveness, in a force that trades information for armor. Wideband waveform availability.
The current Joint Tactical Radio System Ground Mobile Radio program continues to pose risks because its schedule is not yet synchronized with the schedule for the core FCS program or FCS spin-outs. Any schedule slip in this area would mean that integrators will not have Joint Tactical Radio System hardware in sufficient quantities, capability, and function to support the FCS schedule. In addition to delaying the schedule, this could also jeopardize the network spin-outs, experiments, and the integration of the core program requirements. Soldier radio waveform availability. The soldier radio waveform provides functional capabilities that are needed to support many FCS systems but may not be completed in time to support FCS development. These functional capabilities facilitate interoperability and gateway functions between the FCS family of systems. These systems are critical to FCS performance, and delays in these functional capabilities will negatively affect the FCS schedule. Spectrum availability and usage. There is a high likelihood that more frequency spectrum will be required for all of the communications needs than will be available given current design assumptions. A lack of spectrum may force a choice to operate without critical data due to reduced data throughput, reducing mission effectiveness and leading to possible failure. Unmanned vehicle network latency. Unmanned ground and air vehicles are completely dependent on the FCS network for command and control interaction with their soldier/operators. Inadequate response time for unmanned payload tele-operation and target designation will result in degraded payload performance and targeting when these modes are required. Net-ready critical performance parameter verification and testability. The Army recognizes the risk that FCS will not be able to adequately verify and test compliance with this parameter as it relates to the Global Information Grid.
FCS is expected to have extensive connectivity with other services and agencies via the Grid. The risk is due to, among other things, the many yet-to-be-defined critical or enterprise interfaces, which are being delivered in parallel. Failure to meet the net-ready testability requirements could result in, among other things, fielding delays and cost and schedule overruns. All of the unmanned and manned ground vehicles and several other FCS systems are expected to have difficulty meeting their assigned weight targets. According to program officials, about 950 weight reduction initiatives were being considered just for the manned ground vehicles. The Army expects the FCS program to make substantial progress toward meeting these goals by the time of the preliminary design review. It is not yet clear what, if any, additional trade-offs of requirements and designs may be needed to meet the FCS weight goals. High levels of reliability will be needed for the FCS brigade combat teams to meet their requirements for logistics footprint and supportability. Current projections indicate that many FCS systems—including the Class IV unmanned aerial vehicle, communications subsystems, and sensors—may not meet the Army’s high expectations for reliability. The Army plans to address these issues and improve reliability levels by the time of the preliminary design review in 2009. The Army and lead systems integrator have also identified other medium to high risk issues that could affect the requirements and design concepts for individual FCS systems. These include: Class I unmanned aerial vehicle heavy fuel engine. The Class I vehicle requires a heavy fuel engine that is small in size, lightweight, and operates with high power efficiency. Such an engine does not currently exist, and no single candidate system will meet all FCS requirements without additional development.
An engine design that cannot balance size and power will critically affect compliance with several key requirements. Lightweight track component maturation. Current band track designs do not meet mine blast requirements and may not meet the FCS durability requirement or the critical performance parameter requirements for reducing the logistics footprint and the demand for maintenance and supply. Without enhanced mine blast resistance, vehicle mobility will be diminished, which could reduce survivability. Vehicular motion effects. There is a likelihood that the system design may not preclude vehicle-induced motion sickness capable of degrading the crews’ ability to execute their mission. These effects may reduce the ability of the crew to perform cognitive tasks while in motion, thereby reducing operational effectiveness. Safe unmanned ground vehicle operations. If the necessary operational experience and technology maturity are not achieved, the brigade combat teams may not be able to use these vehicles as planned. Also, if a high level of soldier confidence in the reliability and accuracy of fire control of weapons on moving unmanned ground vehicles is not achieved, the rules of engagement for these systems may be severely restricted. Cost Could Force Additional Requirements Trade-offs Unit cost reduction goals have been established at the FCS brigade combat team level and have been allocated down to the individual FCS systems and major subsystems. Many FCS systems are above their assigned average cost levels, and stringent reduction goals have been assigned. In particular, the manned ground vehicles have a significant challenge ahead to meet their unit cost goals. In order to meet these goals, requirements and design trade-offs will have to be considered. The Army faces considerable uncertainty about how much investment money it will have in the future for FCS.
The Army has capped the total amount of development funding available for FCS, and the contract contains a provision to identify trade-offs to keep costs within that cap. Hence, if costs rise, trade-offs in requirements and design will be made to keep within the cap. Recent events provide a good example of this situation. In 2006, the Army conducted a study to determine the number and type of unmanned aerial vehicles it can and should maintain in its inventory. All four of the FCS unmanned aerial vehicles were included in that study, and a decision has recently been made to remove the Class II and III vehicles from the core program. While this will free up money for other needs, the Army will have to reallocate the requirements from those unmanned aerial vehicles to other FCS systems. Considerations for the 2009 FCS Milestone Review As it proceeds to the preliminary design review and the subsequent go/no-go milestone, the Army faces considerable challenges in completing the definition of technically achievable and affordable system-level requirements, an essential element of a sound business case.
Those challenges include completing the definition of all system-level requirements for all FCS systems and the information network (including addressing the “to-be-determined” items and issues to be resolved); completing the preliminary designs for all FCS systems and clearly demonstrating that FCS key performance parameters are achievable; obtaining a declaration from the Army user community that the likely outcomes of the FCS program will meet its projected needs; demonstrating that the FCS program will provide capabilities that are clearly as good as or better than those available with current Army forces, a key tenet set out by the Army as it started the FCS development program in 2003; mitigating FCS technical risks to significantly lower levels; and making demonstrable progress towards meeting key FCS goals, including weight reduction, reliability improvement, and average unit production cost reduction. Army Reports Significant Progress, but Major Technological Challenges Remain The Army has made progress in the areas of critical technologies, complementary programs, and software development. In particular, FCS program officials report that the number of critical technologies they consider as mature has doubled in the past year. While this is good progress by any measure, FCS technologies are far less mature at this point in the program than called for by best practices and DOD policy, and they still have a long way to go to reach full maturity. The Army has made some difficult decisions to improve the acquisition strategies for some key complementary programs, such as the Joint Tactical Radio System and Warfighter Information Network-Tactical, but these programs still face significant technological and funding hurdles. Other complementary programs had been unfunded, but Army officials told us that these issues have been addressed.
Finally, the Army and the lead systems integrator are utilizing many software development best practices and have delivered the initial increments of software on schedule. On the other hand, most of the software development effort lies ahead, and the amount of software code to be written—already an unprecedented undertaking—continues to grow as the demands of the FCS design become better understood. The Army and lead systems integrator have recognized several high-risk aspects of that effort, and mitigation efforts are underway. FCS Critical Technologies Are Maturing Faster Than Predicted Last Year Last year, we reported that an independent review team assessment revealed that 18 of the program’s 49 critical technologies had reached Technology Readiness Level (TRL) 6—a representative prototype system in a relevant environment. The independent team projected that by 2006, 22 of FCS’s 49 critical technologies would reach TRL 6. The FCS program office currently assesses that 35 of 46 technologies are at or above TRL 6—a significantly faster maturation pace than predicted last year. Figure 3 compares the readiness levels of FCS technologies over a 3-year period. Several of these technologies jumped from a TRL 4 (low-fidelity breadboard design in a laboratory environment) to a TRL 6, including cross-domain guarding solutions and the ducted fan for the Class I unmanned aerial vehicle. The program’s technology officials maintain that such a leap can be made, even though it was not anticipated by the independent assessment. They cited the ducted fan technology for small unmanned aerial vehicles as an example. This technology was largely considered immature until a single demonstration showcased the system’s capabilities in demanding conditions, which convinced Army leadership that the ducted fan technology was at a TRL 6. Appendix IV lists all critical technologies, their current TRL status, and the projected date for reaching TRL 6.
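The maturity counts reported across the assessments can be compared directly. A minimal sketch (the counts are those cited in this report; note that the denominator of critical technologies changed from 49 to 46 between assessments):

```python
# Reported counts of FCS critical technologies at or above TRL 6.
# (label, count at TRL 6 or above, total critical technologies)
assessments = [
    ("2005 independent assessment", 18, 49),
    ("2006 independent projection", 22, 49),
    ("2006 program office assessment", 35, 46),
]

for label, at_trl6, total in assessments:
    share = 100 * at_trl6 / total  # percentage of technologies at TRL 6+
    print(f"{label}: {at_trl6} of {total} ({share:.0f}%) at TRL 6 or above")
```

The shift from roughly a third to roughly three-quarters of critical technologies at TRL 6 is what underlies the "significantly faster maturation pace" characterization.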
However, not all of the FCS technologies are truly at a TRL 6. Two of the most important technologies for the success of the manned ground vehicles and the overall FCS concept are lightweight armor and active protection. The Army has previously been more optimistic about the development pace for these technologies. However, during the past year, the Army recognized that the particular solutions it was pursuing for lightweight armor were inadequate and that active protection had only satisfied the conditions for a TRL 5. Active Protection System An active protection system is part of the comprehensive FCS hit avoidance system architecture that will protect the vehicles from incoming rounds, like rocket-propelled grenades and anti-tank missiles. The active protection system would involve detecting an incoming round or rocket-propelled grenade and launching an interceptor round from the vehicle to destroy the incoming weapon. In mid-2006, the lead systems integrator (with Army participation) selected Raytheon from among numerous candidates to develop the architecture to satisfy FCS short-range active protection requirements. A subsequent trade study evaluated several alternative concepts and selected Raytheon’s vertical launch concept for further development. While the FCS program office’s most recent technology readiness assessment indicates that the active protection system is at TRL 6, a 2006 trade study found that the Raytheon concept had only achieved a TRL 5. The active protection system is a vital technology for the FCS concept to be effective, and the survivability of the FCS manned ground vehicles would be questionable without that capability. Not only will the active protection system concept chosen need additional technology development and demonstration, but it also faces system integration challenges and the need for safety verifications.
Indeed, the Army recognizes that it faces a challenge in demonstrating if and how it can safely operate an active protection system when dismounted soldiers are nearby. Lightweight Hull and Vehicle Armor A fundamental FCS concept is to replace mass with superior information—that is, to see and hit the enemy first rather than to rely on heavy armor to withstand a hit. Nonetheless, the Army has recognized that ground vehicles cannot be effective without an adequate level of ballistic protection. As a result, the Army has been developing lightweight hull and vehicle armor as a substitute for traditional, heavier armor. In the past year, the Army concluded that it would need additional ballistic protection, and the Army Research Laboratory is continuing armor technology development to achieve improved protection levels and to reduce weight. The Army now anticipates achieving TRL 6 on the new armor formulation in fiscal year 2008, near the time of the manned ground vehicle preliminary design review. Armor will continue to be a technology as well as an integration risk for the program for the foreseeable future. Technology Maturity Must Be Seen in a Broader Context As noted above, the Army’s progress in FCS technology is notable compared with the progress of previous years. This progress, however, does need to be put in a broader context. The business case for a program following best practices in a knowledge-based approach is to have all of its critical technologies mature to TRL 7 (fully functional prototype in an operational environment) at the start of product development. For FCS, this would mean having had all technologies at TRL 7 by May 2003. By comparison, even with the progress the program has made in the last year, only 35 of FCS’s 46 technologies have attained the lower TRL 6 level of maturity, 3½ years after starting product development. Immature technologies are markers for future cost growth.
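GAO's 2006 assessment of selected major weapon systems found average development cost growth over the first full estimate of 4.8 percent for programs that began with mature technologies versus 34.9 percent for those that began with immature ones. A minimal sketch of what that difference implies for a hypothetical $10 billion first full estimate (the dollar figure is illustrative; only the percentages come from the assessment):

```python
# Average development cost growth over the first full estimate, by the
# maturity of critical technologies at the start of development.
GROWTH_MATURE = 0.048    # programs that started with mature technologies
GROWTH_IMMATURE = 0.349  # programs that started with immature technologies

def grown_cost(first_estimate_billions: float, growth: float) -> float:
    """Cost after applying the observed average growth rate."""
    return first_estimate_billions * (1 + growth)

estimate = 10.0  # hypothetical first full estimate, in $ billions
mature = grown_cost(estimate, GROWTH_MATURE)
immature = grown_cost(estimate, GROWTH_IMMATURE)
print(f"mature start:   ${mature:.2f}B")
print(f"immature start: ${immature:.2f}B (+${immature - mature:.2f}B more)")
```

The roughly sevenfold difference in growth rates is the empirical basis for treating immature technologies as markers of future cost growth.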
In our 2006 assessment of selected major weapon systems, development costs for the programs that started development with mature technologies increased by a modest average of 4.8 percent over the first full estimate, whereas the development costs for the programs that started development with immature technologies increased by a much higher average of 34.9 percent. FCS program officials do not accept these standards. Rather, they maintain they only need to mature technologies to a TRL 6 by the time of the critical design review, which is now scheduled for 2011. According to the Army’s engineers, once a technology achieves TRL 6, they are no longer required to track the technology’s progress. They maintain that anything beyond a TRL 6 is a system integration matter and not necessarily technology development. Integration often involves adapting the technologies to the space, weight, and power demands of their intended environment. To a large extent, this is what it means to achieve a TRL 7. This is work that needs to be accomplished before the critical design reviews and is likely to pose additional trade-offs the Army will have to make to reconcile its requirements with what is possible from a technology and engineering standpoint. Accordingly, the FCS program has singled out several critical technologies that have been assessed at TRL 6 but continue to have moderate or high risk that could have dire consequences for meeting program requirements if they are not successfully dealt with. Examples include: High density packaged power. Current battery technology may not meet the performance levels needed to support the initial production of FCS. Among other things, calendar life, cost, cooling methods, safety, and thermal management have not been demonstrated. The potential impacts of this risk could affect not only vehicle propulsion but also lethality and supportability. High power density engine.
The Army has recognized that there is a risk that engine manufacturers may not have the capability to build a reliable, cost-effective engine that will meet FCS requirements within the FCS program schedule. Engines have been tested that meet the power density required, but not at engine power levels consistent with manned ground vehicle needs. The mitigation strategy includes engine testing to identify and correct potential engine design issues as soon as possible. Hull anti-tank mine blast protection. The Army recognizes that there is a probability, given the weight constraints on FCS platforms and evolving blast mitigation technology, that the FCS hull and crew restraints will not protect the crew from life-threatening injury due to anti-tank blast mines equal to (or greater than) the threshold requirement. The potential consequence is that the mobility and survivability of the brigade combat team will be affected. The FCS program and the Army Research Laboratory are developing an anti-tank mine kit for each manned ground vehicle to meet requirements. Highband networking waveform. FCS needs a high data rate capability to send sensor data and to support the FCS transit network. The Warfighter Information Network-Tactical does not yet meet the performance requirements for size, weight, and power; signature management; and operational environments. There may be significant schedule and cost risk involved in getting that radio to meet the requirements. Without the high data rate capability, sensor data may not be presented in an adequate or timely fashion to perform targeting or provide detailed intelligence data to the warfighter. Cross-domain guarding solution. FCS needs this technology to ensure the security of information transmitted on the FCS information network. The Army recognizes that it will be difficult to obtain certification and accreditation as well as to meet the space, weight, and power and interface requirements of FCS.
Failure to address these concerns in a timely manner will result in delays in fielding FCS-equipped units and additional costs. The FCS program will continue to face major technological challenges for the foreseeable future. The independent technology assessment planned to coincide with the preliminary design review in early 2009 should provide objective insights regarding the Army’s progress on technology maturity and system integration issues. Army Reassessing Complementary Programs The FCS program may have to interoperate or be integrated with as many as 170 other programs, some of which are in development and some of which are currently fielded programs. These programs are not being developed exclusively for FCS and are outside of its direct control. Because of the complementary programs’ importance to FCS—52 had been considered essential to meeting FCS key performance parameters—the Army closely monitors how well those efforts will synchronize with the FCS program. However, many of these programs have funding or technical problems and generally have uncertain futures. We reported last year that the Army is reassessing the list of essential complementary programs given the multiple issues surrounding them and the budgetary constraints the Army is facing. In addressing the constrained budget situation in the 2008 to 2013 program objective memorandum, program officials said the Army is considering reducing the set of systems. When the set of complementary programs is finalized, the Army will have to determine how to replace any capabilities eliminated from the list. Two complementary programs that make the FCS network possible, the Joint Tactical Radio System (JTRS) and the Warfighter Information Network-Tactical (WIN-T), were restructured and reduced in scope. A challenge in making changes to these programs is assessing their individual and cumulative effects on FCS performance.
JTRS is a family of software-based radios that is to provide the high capacity, high-speed information link to vehicles, weapons, aircraft, sensors, and soldiers. The JTRS program to develop radios for ground vehicles and helicopters—now referred to as Ground Mobile Radio—began product development in June 2002, and the Army has not yet been able to mature the technologies needed to generate sufficient power as well as meet platform size and weight constraints. A second JTRS program to develop variants of small radios that will be carried by soldiers and embedded in several FCS core systems—now referred to as Handheld, Manpack, and Small Form Factor radios—entered product development with immature technologies and a lack of well-defined requirements. In 2005, DOD directed the JTRS Joint Program Executive Office to develop options for restructuring the program to better synchronize it with FCS and to reduce schedule, technology, requirements, and funding risks. The restructuring plan was approved in March 2006 and is responsive to many of the issues we raised in our June 2005 report. However, the program still has to finalize details of the restructure, including formal acquisition strategies, independent cost estimates, and test and evaluation plans. Further, there are still cost, schedule, and technical risks associated with the planned delivery of initial capabilities, and therefore it is unclear whether the capabilities will be available in time for the first spin-out of FCS capabilities to current forces in 2008. Fully developed prototypes of JTRS radios are not expected until 2010 or later. The Army is developing WIN-T to provide an integrated communications network to connect Army units on the move with higher levels of command and provide the Army’s tactical extension to the Global Information Grid.
Although the program has been successful in developing some technologies and demonstrating early capabilities, the status of its critical technologies is uncertain. As a result of an August 2005 study, the WIN-T program is being re-baselined to meet emerging requirements as well as a shift in Army funding priorities. The Army’s proposal for restructuring would extend system development for about 5 years and delay the production decision from 2006 to about 2011, while seeking opportunities to spin out WIN-T technologies both to FCS and to the current force. Despite this improvement, several risks remain for the program, and the restructuring does have consequences. Coupled with new FCS requirements, the restructure will increase development costs by over $500 million. Critical technologies that support WIN-T’s mobile ad hoc networking must still be matured and demonstrated, while the new FCS requirements will necessitate further technology development. Also, some WIN-T requirements are unfunded, and the Office of the Secretary of Defense recently non-concurred with part of the program’s Technology Readiness Assessment. In order to obtain concurrence, the WIN-T program manager is updating the body of evidence material to reaffirm the technology maturity estimates. Army Is Devoting Considerable Attention to Software Development, but Major Risks Need to Be Addressed The FCS software development program is the largest in DOD history, and the importance of software to FCS performance is unprecedented. The Army is attempting to incorporate a number of best practices into its software development, and some initial increments of software have been delivered on time. However, since the program started, the projected amount of software needed for FCS has almost doubled, to 63.8 million lines of code. Further, the Army must address a number of high-risk issues that could impact delivery schedules, operational capabilities, and overall FCS performance.
Disciplined Approach Needed to Manage Unprecedented Amount of Software Several numbers help illustrate the magnitude of the FCS software development effort: 95 percent of FCS’s functionality is controlled by software; 63 million lines of code are currently projected to be needed for FCS, more than 3 times the amount being developed for the Joint Strike Fighter; FCS will have its own operating system, like Microsoft Windows, called the System-of-Systems Common Operating Environment; and over 100 interfaces or software connections to systems outside FCS will have to be developed. Of primary importance to the success of FCS is the System-of-Systems Common Operating Environment software. This software is expected to act as the infrastructure for other FCS software. It is to standardize component-to-component communications within computers, vehicles, the virtual private networks, and the Global Information Grid, enabling interoperability with legacy Army, joint, coalition, government, and non-government organizations. Finally, it is to provide the integration framework for the FCS family of systems and enable integrated system-of-systems functionality and performance. We have previously reported that software-intensive weapon programs are more likely to reach successful outcomes if they use a manageable, evolutionary development environment and a disciplined process and are managed by metrics. The Army is attempting to follow such an approach to meet the software challenges on FCS. Specifically, FCS software will be developed in four discrete stages, or blocks. Each block adds incremental functionality in eight functional areas (command and control, simulation, logistics, training, manned ground vehicles, unmanned aerial vehicles, unmanned ground vehicles, and warfighting systems). The Army and lead systems integrator are also partitioning software into at least 100 smaller, more manageable subsystems.
The FCS program is also implementing scheduled and gated reviews to discipline software development and has developed a set of metrics to measure technical performance in terms of growth, stability, quality, staffing, and process.

Considerable Risks Remain with Software Development

Apart from the sheer difficulty of writing and testing such a large volume of complex code, a number of risks face the FCS software development effort. As requirements have become better understood, the number of lines of code has grown since the program began in 2003. Specifically, in 2003, the Army estimated that FCS would need 33.7 million lines of code, compared to today's estimate of 63.8 million. As the Army and its contractors learn more about the limits of technology and its design concepts, the amount and functionality to be delivered by software may change. FCS's 63 million lines of software code can be broken down further into code that is new, reused, or commercial-off-the-shelf, as seen in figure 4. The Army maintains that new software code presents the greatest challenge because it has to be written from scratch. Reused code is code already written for other military systems that is being adapted to FCS. Similarly, commercial-off-the-shelf software is code already written for commercial systems that is being adapted to FCS. A program official told us that in DOD programs, estimates of how much software code will be reused are often overstated and the difficulty of adapting commercial software is often understated. This optimism translates into more time and effort to develop software than planned. An independent assessment concluded that these adaptation efforts have been understated for the FCS program, which will translate into higher cost and schedule slippage. If the independent estimate proves correct, more software development could be pushed beyond the production decision.
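The effect of overstated reuse can be made concrete with an equivalent-SLOC calculation of the kind used in parametric software cost models. The sketch below uses the report's 63.8 million line total, but the new/reused/commercial split and the adaptation factors are illustrative assumptions, not FCS program data:

```python
# Illustrative equivalent-SLOC (source lines of code) calculation of the
# kind used in parametric cost models. The 63.8M total is from the report;
# the split and the adaptation factors below are assumptions for
# illustration only, not FCS program data.
new_sloc = 30.0e6       # assumed portion written from scratch
reused_sloc = 20.0e6    # assumed portion adapted from other military systems
cots_sloc = 13.8e6      # assumed commercial-off-the-shelf portion

def equivalent_sloc(reuse_factor, cots_factor):
    """Effort basis: each adapted line counts as a fraction of a new line."""
    return new_sloc + reused_sloc * reuse_factor + cots_sloc * cots_factor

planned = equivalent_sloc(reuse_factor=0.20, cots_factor=0.10)  # optimistic plan
revised = equivalent_sloc(reuse_factor=0.40, cots_factor=0.30)  # if adaptation proves harder
growth = (revised - planned) / planned
print(f"Planned effort basis: {planned / 1e6:.2f}M equivalent SLOC")
print(f"Revised effort basis: {revised / 1e6:.2f}M equivalent SLOC ({growth:.0%} more)")
```

Even with the new-code portion unchanged, doubling or tripling the assumed adaptation factors raises the effort basis by roughly a fifth, which is the mechanism by which overstated reuse turns into cost growth and schedule slippage.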
A foundational block of software (Build 0) has already been completed, and an interim package of the System-of-Systems Common Operating Environment software was recently tested and delivered. However, as can be seen in table 1, even if FCS stays on schedule, a portion (10 percent) of FCS software is planned to be delivered and tested after the early 2013 production decision, which will limit the knowledge available to decision makers at that point. Currently, the Army estimates that 45 percent of the total 63 million source lines of code will have been written and tested by the early 2009 preliminary design review and 75 percent will be done by the 2011 critical design review. Although there has been no significant schedule slippage to date on the initial increments of software, both of these estimates may prove to be ambitious. Additionally, according to program officials, the most difficult part of software development is the last 10 percent. Although the Army is attempting to implement several software best practices, a number of factors may complicate those efforts. One of the leading problems in software development is the lack of adequately defined requirements. Without adequate definition and validation of requirements and design, software engineers could be coding to an incorrect design, resulting in missing functionality and errors. As we discussed earlier, the ultimate system-level requirements may not be complete until the preliminary design review in 2009. The Army acknowledges that FCS's lack of adequate requirements and incomplete system architecture could result in software that does not provide the desired functionality or performance. This lack of top-level requirements and architecture definition also affects the accuracy of projected lines of code. Program risk charts suggest that software estimates could be understated by as much as 70 percent, which could impact the overall schedule and performance.
The Army has identified specific aspects of FCS software development as high risk and is developing plans to mitigate the risks:

- System-of-Systems Common Operating Environment availability and maturity. There is a recognized risk that the software may not reach the technical maturity level required to meet program milestones.
- FCS software integration performance and development. Due to the complexity, functional scope, net-centric focus, and real-time requirements for the command and control software, software integration may not yield fully functional software that performs as desired.
- Block 1 incompatible software components during integration. There are a large number of diverse groups working on software components that need to be integrated into full units. A lack of early integration processes and collaboration among the suppliers poses a substantial risk of rework during integration and subsequent schedule impact.
- Software estimating accuracy. To date, estimating accuracy has been hampered by changing requirements, immature architecture, and insufficient time to thoroughly analyze software subsystem sizing. The difficulties associated with accurate software estimating are an indication that complexity increases as the design is better understood, and this serves to increase the level of effort.
- Software supplier integration. The unprecedented nature, volatility, and close coupling of FCS suppliers' software will frequently require various combinations of suppliers to share information and rapidly negotiate changes in their products, interfaces, and schedules. Because these suppliers are traditionally wary competitors accustomed to performing to fixed specifications, there are significant risks of slow and inflexible adaptation to critical FCS sources of change. Failure to adapt will translate directly into missed delivery schedules, significantly reduced operational capabilities, and less dependable system performance.
Considerations for the 2009 FCS Milestone Review

As it approaches the preliminary design review and the subsequent go/no-go milestone review, the Army should have made additional progress in developing technologies and software as well as in aligning the development of complementary programs with the FCS program. The challenges that will have to be overcome include:

- demonstrating that all critical technologies are mature to at least the TRL 6 level, an assessment that should be reviewed and validated by an independent review team;
- mitigating the recognized technical risks for the FCS critical technologies, including their successful integration with other FCS subsystems and systems;
- clearly demonstrating that the risks inherent in the active protection system and the lightweight hull and vehicle armor have been reduced to low levels;
- synchronizing the JTRS and WIN-T development schedules with FCS system integration and demonstration needs for both the spin-outs and the core program;
- mitigating the cost, schedule, and performance risks in software development to acceptably low levels; and
- establishing the set of complementary programs that are essential for FCS's success, ensuring that they are fully funded, and aligning their schedules with the overall FCS program schedule.

Concurrent Acquisition Strategy Will Provide for Late Demonstration of FCS Capabilities

The FCS acquisition strategy and testing schedule have become more complex as plans have been made to spin out capabilities to current Army forces. The strategy acquires knowledge later than called for by best practices and DOD policy. In addition, knowledge deficits for requirements and technologies have created enormous challenges for devising an acquisition strategy that can demonstrate the maturity of design and production processes. Even if requirements setting and technology maturation proceed without incident, FCS design and production maturity is not likely to be demonstrated until after the production decision is made.
The critical design review will be held much later on FCS than on other programs, and the Army will not be building production-representative prototypes, with all of their intended components, to test before production. Much of the testing up to the 2013 production decision will involve simulations, technology demonstrations, experiments, and single-system testing. Only after that point will substantial testing of the complete brigade combat team and the FCS concept of operations occur. Production, however, is the most expensive phase in which to resolve design or other problems found during testing. Spin-outs, which are intended to accelerate delivery of FCS capabilities to the current force, also complicate the acquisition strategy by absorbing considerable testing resources and some tests.

Acquisition Strategy Will Demonstrate Design Maturity after Production Begins

The Army's acquisition strategy for FCS does not reflect a knowledge-based approach. Figure 5 shows how the Army's strategy for acquiring FCS involves concurrent development, design reviews that occur late in the program, and other issues that are out of alignment with the knowledge-based approach that characterizes best practices and is supported in DOD policy. Ideally, the preliminary design review occurs at or near the start of product development. Activities leading up to the preliminary design review include, among others, translating system requirements into design specifics. Doing so can help reveal key technical and engineering challenges and can help determine whether a mismatch exists between what the customer wants and what the product developer can deliver. Scheduling the preliminary design review early in product development is intended to help stabilize cost, schedule, and performance expectations. The critical design review ideally occurs midway into the product development phase.
The critical design review should confirm that the system design performs as expected and is stable enough to build production-representative prototypes for testing. The building of production-representative prototypes helps decision makers confirm that the system can be produced and manufactured within cost, schedule, and quality targets. According to the knowledge-based approach, a high percentage of design drawings should be completed and released to manufacturing at the critical design review. The period leading up to the critical design review is referred to as system integration, when individual components of a system are brought together, and the period after the review is called system demonstration, when the system as a whole demonstrates its reliability as well as its ability to work in the intended environment. The Army has scheduled the preliminary design review for early 2009, about 6 years after the start of product development. The critical design review is scheduled for fiscal year 2011, just 2 years after the scheduled preliminary design review and 2 years before the initial FCS production decision in fiscal year 2013. This will leave little time for product demonstration and for correction of any issues identified at that time. This is not to suggest that the two design reviews for FCS could have been conducted earlier but rather that commitments to build and test prototypes and begin low-rate production are scheduled too soon afterward. The timing of the design reviews is indicative of how late knowledge will be attained in the program, even if all goes according to plan. With requirements definition not complete until at least the final preliminary design review in early 2009, and technology maturation not complete until after that, additional challenges will have to be addressed within the system integration phase. System integration will already be a challenging phase due to known integration issues and numerous technical risks.
The best practice measure for the completion of the system integration phase is the release of at least 90 percent of engineering drawings by the time of the critical design review. The Army is planning to have developmental prototypes of all FCS systems available for testing prior to low-rate initial production. For example, most of the manned ground vehicle prototypes are expected to be available in 2011 for developmental and qualification testing. However, these prototypes are not expected to be production-representative and will have some surrogate components. Whereas the testing of fully integrated, production-representative prototypes demonstrates design maturity and their fabrication can demonstrate production process maturity, neither of these knowledge points will be attained until after the initial production decision is made.

System-Level Testing Compressed into Late Development and Early Production

The FCS test program is unique because it is designed to field a new fighting unit and concept of operations to the Army, not just new equipment. To help do this, the Army has incorporated a new evaluation unit, known as the Evaluation Brigade Combat Team, to help with development and testing of the FCS systems and the tactics, techniques, and procedures necessary for the unit to fight. The test effort will involve four phases during development, which examine how the program is maturing hardware and software. These phases are intended as checkpoints. The first phase has a corresponding spin-out of mature FCS capabilities to current forces. The Army is proceeding with its plans to reduce FCS risks using modeling, simulation, emulation, and system integration laboratories. This approach is a key aspect of the Army's acquisition strategy and is designed to reduce the dependence on late testing to gain valuable insights about many aspects of FCS development, including design progress.
However, on a first-of-a-kind system like FCS, which represents a radical departure from current systems and warfighting concepts, actual testing of all the components integrated together is the final proof that the FCS system-of-systems concept works both as predicted and as expected. FCS program test officials told us that while they understand the limitations involved, the use of emulators, surrogates, and simulations gives the Army a tremendous amount of early information, particularly about the system-of-systems and the network. This early information is expected to make it easier for the Army to deal with the compressed period between 2010 and 2014 and to give the Army the ability to fix things more quickly. As we were preparing this report, it was not clear what impact, if any, the Army's program adjustments would have on its testing and demonstration plans and schedules. Table 2 describes the key test events, as currently scheduled, throughout the FCS program. The majority of testing through 2012 is limited in scope and is more about confidence building than demonstration of key capabilities. Much like the overall acquisition strategy, the FCS testing plan will provide key knowledge late in the system development phase. Early test efforts will focus on experiments and developmental testing of individual systems. Some early systems will be tested as part of the Army's efforts to spin out technologies to current forces, including unmanned ground sensors and the non-line-of-sight launch system. The bulk of the developmental prototypes will not be available for testing and demonstrations until 2010 and later. The first large-scale FCS test that will include a majority of the developmental prototypes and a large operational unit will not take place until 2012, the year before production is now slated to begin. This will mark the start of the Army's testing of the whole FCS, including the overarching network and the FCS concept.
For example, a limited user test in 2010 involves only a platoon and a few unmanned aerial vehicles, while a similar test in 2012 will involve two companies and developmental prototypes for each of the manned ground vehicles as well as other systems being tested at the brigade level. Starting in 2012, several key tests will occur that should give decision makers a clearer understanding of whether the FCS system-of-systems and concept actually work as expected. By the end of 2014, production-representative vehicles are expected to be available and tested in a production limited user test. Another important test is the initial operational test and evaluation in 2016, which provides the first full assessment of the entire program, including all of the FCS systems, the brigade combat team, network operations, and the actual operating concept. This test involves full-spectrum operations in a realistic environment. There are two major risks in the FCS testing approach: schedule compression and testing of the network. The first risk centers on the lack of time available to identify, correct, and retest for problems that come up during early testing, and the second on the lack of capabilities to test an essential element of the FCS concept, the information network. Independent test officials noted that it is unclear what the Army expects from the network. With the network identified as a major risk element of the program, test officials noted that the Army needs to set benchmarks for what will be demonstrated over time. Independent testing officials have also told us that the FCS test schedule is very tight and may not allow adequate time for "test-fix-test" testing. The test and evaluation master plan recognizes this possibility by noting that within each integration phase there is only time to test and fix minor issues. More substantial problems would have to be fixed in a succeeding integration phase.
Overall, testing officials are concerned that the FCS program is driven by its schedule and that the Army may rush prematurely into operational testing and perform poorly when it is too late to make cost-effective corrections. Testing of the network is critical because it must provide secure, reliable access to and distribution of information over extended distances and, sometimes, when operating in complex terrain. Testing the large number of FCS sensors and the network's ability to process the information will not be effective because the test capabilities, methodologies, and expertise needed to test a tactical network of this magnitude are incomplete and insufficient. The first major test of the network and FCS together with a majority of prototypes will not take place until 2012, the year before low-rate production is now expected to begin. The FCS program is thus susceptible to late-cycle churn, that is, the effort required to fix a significant problem that is discovered late in a product's development. In particular, churn refers to the additional, unanticipated time, money, and effort that must be invested to overcome problems discovered through testing. Problems are most serious when they delay product delivery, increase product cost, or escape to the customer. The discovery of problems through testing conducted late in development is a fairly common occurrence on DOD programs, as is the attendant late-cycle churn. Often, tests of a full system, such as launching a missile or flying an aircraft, become the vehicles for discovering problems that could have been found earlier and corrected less expensively. When significant problems are revealed late in a weapon system's development, the reaction, or churn, can take several forms: extending schedules to increase the investment in more prototypes and testing, terminating the program, or redesigning and modifying weapons that have already made it to the field.
While DOD has accepted such problems over the years, FCS offers particular challenges, given the magnitude of its cost in an increasingly competitive environment for investment funds. Problems discovered at the production stage are generally the most expensive to correct.

Spin-Outs Support the Current Force but Place More Demands on FCS Test Resources

When the Army restructured the FCS program in 2004, it revised its acquisition strategy to include a way to field various FCS capabilities (technologies and systems) to current forces while development of the core FCS program is still underway. This restructuring was expected to benefit the current forces as well as provide early demonstrations that would benefit the core FCS program. Under this approach, known as spin-outs, the Army plans to begin limited low-rate production of the systems planned for Spin-Out 1 in 2009 and to field those systems to current Army forces 2 years later. Leading up to the production decision in 2009 will be system development tests and a limited user test. Additional spin-outs are now planned to occur in 2010 and 2012. Using this method, the Army plans to deliver significant capabilities to the current force earlier than previously planned. Over the long term, these capabilities include enhanced battle command capabilities and a variety of manned and unmanned ground and air platforms that are intended to improve current force survivability and operations. Currently, FCS Spin-Out 1 involves the non-line-of-sight launch system and unmanned ground sensors as well as early versions of the System-of-Systems Common Operating Environment and Battle Command software subsystems. Also included are the kits needed to interface with current force vehicles. These capabilities will be tested and validated using the Evaluation Brigade Combat Team, which will provide feedback to help refine the FCS doctrine and other matters.
These systems are expected to be fielded to operational units starting in 2010, although it is not yet clear whether these elements of FCS will provide significant capability to the current forces at a reasonable cost. There are two test-related concerns with spin-outs. One is that spin-outs have complicated the FCS acquisition strategy because they focus early testing and test resources on a few mature systems that will be spun out to current Army forces. FCS program test officials told us that the primary focus of the program's first integration phase will be on events supporting systems in that spin-out. It is unclear whether subsequent integration phases will be similarly configured. If that were to occur, fewer overall FCS systems would be examined and tested in each phase, and testing to evaluate how the FCS system-of-systems and concept of operations perform could come later than originally planned. A program official has noted that the schedule to deliver the needed hardware and software to the Evaluation Brigade Combat Team is ambitious and that the schedule for tests leading up to a production decision for Spin-Out 1 is compressed. Some individual systems began developmental and other testing in 2006, but key user and operational tests will not occur until 2008, just prior to the production decision for systems in Spin-Out 1. Independent test officials have expressed concern not only over whether there will be enough time to test, fix, and test again during these key tests but also over whether there will be enough time to "reset" or refurbish the equipment being used from one test to another. For example, the technical field test, the force development test and evaluation and pilot test, and the limited user tests for Spin-Out 1 are to be conducted back-to-back over a several-month period just before the production decision. In addition, key tests, including a limited user test for the non-line-of-sight launch system, will take place after the Spin-Out 1 production decision.
FCS program test officials have told us, however, that the program does not plan to fix and test again any problems discovered in a particular integration phase until the next integration phase. They also noted that the compressed event schedule allowed them to use the same resources and soldiers in each test.

Considerations for the 2009 FCS Milestone Review

As the Army proceeds to the preliminary design review, the FCS acquisition strategy will likely continue to be aggressive, concurrent, and compressed, developing key knowledge later in the development process than called for by best practices. Few FCS platforms will have been tested by this point. The majority of testing, and the proof of whether the systems can be integrated and work together, is left to occur after prototypes are delivered starting in the next decade. The Army faces a number of key challenges as it proceeds to and beyond the preliminary design review, including:

- completing requirements definition and technology maturation (at least to TRL 6) in order to complete the final preliminary design review;
- clearly demonstrating spin-out capabilities prior to committing to their initial production and fielding;
- completing system integration and releasing at least 90 percent of engineering drawings by the critical design review in 2011;
- allocating sufficient time, as needed, for test, fix, and retest throughout the FCS test program; and
- allocating sufficient time to thoroughly demonstrate each FCS system, the information network, and the FCS concept prior to committing to low-rate initial production in 2013.

Likely Growth of FCS Costs Increases Tension between Program Scope and Available Funds

Last year, we reported that FCS program acquisition costs had increased to $160.7 billion, a 76 percent increase over the Army's original estimate (figures have been adjusted for inflation).
While the Army's current estimate is essentially the same, an independent estimate from the Office of the Secretary of Defense puts the acquisition cost of FCS between $203 billion and $234 billion. The comparatively low level of technology and design knowledge at this point in the program portends future cost increases. Our work on a broad base of DOD weapon system programs shows that most developmental cost increases occur after the critical design review, which will now be in 2011 for FCS. Yet, by that point in time, the Army will have spent about 80 percent of FCS's development funds. Further, the Army has not yet fully estimated the cost of essential complementary programs and the procurement of spin-out items for the current force. The Army is cognizant of these resource tensions and has adopted measures in an attempt to control FCS costs. However, some of these measures involve reducing program scope in the form of lower requirements and capabilities, which will have to be reassessed against the user's demands. Symptomatic of the continuing resource tension, the Army recently announced that it was restructuring several aspects of the FCS program, including the scope of the program and its planned annual production rates, to lower its annual funding demands. This will have an impact on program cost, but full details are not yet available.

New Independent Estimates Indicate Higher FCS Acquisition Costs

The Army's official cost estimate for FCS has changed only slightly from last year's estimate, which reflected a major program restructuring from the original estimate. In inflated dollars, the program office estimates the acquisition cost will be $163.7 billion, up from the original 2003 estimate of $91.4 billion. However, independent cost estimates are significantly higher, as presented in table 3.
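The percentage figures in this section follow from simple growth arithmetic on the dollar amounts cited in the report; a minimal check, using only figures that appear in the report's text:

```python
# Growth arithmetic behind the cost figures cited in this report
# (dollar amounts in billions, taken from the report's text).
original_2003 = 91.4        # original program estimate
adjusted_last_year = 160.7  # last year's inflation-adjusted figure
army_current = 163.7        # current Army estimate, inflated dollars
caig_low, caig_high = 203.0, 234.0  # OSD Cost Analysis Improvement Group range

growth = (adjusted_last_year - original_2003) / original_2003
print(f"Growth over original estimate: {growth:.0%}")  # the report's 76 percent

# How far the independent range sits above the Army's current estimate:
extra_low = (caig_low - army_current) / army_current
extra_high = (caig_high - army_current) / army_current
print(f"Independent range exceeds the Army estimate by {extra_low:.0%} to {extra_high:.0%}")
```

By this arithmetic, the independent range sits roughly a quarter to two-fifths above the Army's own figure, before the effects of the recent restructuring decision are reflected in either estimate.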
Recent independent estimates from the Office of the Secretary of Defense's Cost Analysis Improvement Group indicate that FCS acquisition costs could range from $203 billion to $234 billion in inflated dollars. The independent estimate reflected several additional years and additional staffing beyond the Army's estimate to achieve initial operational capability. The difference in estimates is also attributable to the Cost Analysis Improvement Group's assessment that FCS software development would require more time and effort to complete than the Army had estimated. The independent estimate also provided for additional risks regarding the availability of key systems to support the FCS network, such as the JTRS radios. Neither the Army nor the Defense Acquisition Board has accepted the independent estimate. Program officials believe the independent estimate of research and development costs is too high because it is too conservative regarding risks. The higher estimates of procurement costs reflect additional quantities of individual systems needed to provide full capabilities to the brigade combat team. Neither the Army estimate nor the independent estimate reflects the recent decision to reduce the number of FCS systems and slow the production rate. Prior to that decision, the Army had actually been contemplating expanding the scope of FCS to include additional Class IV unmanned aerial vehicles, additional unattended ground sensors, intelligent munitions systems, and test assets for the Army user community, as well as two new systems, a centralized controller device and a rearming module for the manned ground vehicles. This expansion would have increased the Army's estimate to about $208 billion, but it appears to have been obviated by the recent decision to reduce scope.

Soft Knowledge Base for Cost Estimates Portends Future Cost Growth

Cost estimates for any program are limited by the level of product knowledge available.
All of the FCS estimates are thus limited by the relatively low level of knowledge in the FCS program today. If the FCS program had been following knowledge-based acquisition practices, its 2003 estimate would have been based on mature technologies, and the current estimate would have had the benefit of a complete preliminary design review and a considerable amount of work toward the critical design review. The program's estimate would then be based much more on demonstrated knowledge and actual costs than on assumptions. Instead, the current FCS estimates are built on a knowledge base without mature technologies, with a preliminary design review that is at least 2 years away, and with a critical design review that is 3 to 4 years away. The Army must, therefore, make significant assumptions about how knowledge will develop. As experience with many DOD weapon systems has shown, such assumptions generally prove optimistic and result in underestimated costs. As the program is currently structured, the Army is planning to make substantial financial investments in FCS before key knowledge is gained on requirements, technologies, system designs, and system performance. Table 4 shows the annual and cumulative funding, as reported in the program's current cost estimate, and the level of knowledge to be attained each fiscal year. The impact of the Army's recent program adjustments on the research and development funding stream was not known at the time this report was written. As can be seen in table 4, through fiscal year 2007, the program will have spent about a third of its development budget, over $11 billion. By the time of the preliminary design review and the congressionally mandated go/no-go decision in 2009, the Army will have spent about 60 percent of its FCS development budget, over $18 billion. At that point, the program should have matured most of the critical technologies to TRL 6, and the definition of system-level requirements should be nearing completion.
This is the level of knowledge the program should have achieved in 2003 before being approved for development start, according to best practices and the approach preferred by DOD in its acquisition policies. The FCS critical design review is now scheduled for fiscal year 2011. By that time, the program will have spent about $24.7 billion, or about 81 percent of its expected research and development expenditures. The immature state of FCS technologies and the timing of its critical design review make the FCS cost estimate vulnerable to future increases. In our 2006 assessment of selected major weapon systems, we found that development costs for the programs with mature technologies increased by a modest average of 4.8 percent over the first full estimate, whereas the development costs for the programs with immature technologies increased by a much higher average of 34.9 percent. Similarly, program acquisition unit costs for the programs with mature technologies increased by less than 1 percent, whereas the programs that started development with immature technologies experienced an average program acquisition unit cost increase of nearly 27 percent over the first full estimate. Our work also showed that most development cost growth occurred after the critical design review. Specifically, of the 28.3 percent cost growth that weapon systems average in development, 19.7 percent occurs after the critical design review. The current cost estimates do not fully reflect the total costs to the Army. Excluded are the costs of complementary programs, such as the Joint Tactical Radio System, which are substantial. Also, the costs to procure the FCS spin-out items and needed installation kits—previously estimated to cost about $23 billion—are not included. 
In fact, the procurement of FCS spin-out items was not previously funded; however, as we were preparing this report, Army officials told us that in finalizing its budget plans for fiscal years 2008 to 2013, the Army decided to provide procurement funding for FCS items to be spun out to current forces. Congress recently mandated an independent cost estimate, to be submitted by April 1, 2007, that addresses the full costs of developing, procuring, and fielding the FCS. Army Steps to Control FCS Program Costs The Army has taken steps to manage the growing cost of FCS. Program officials have said that they budgeted for development risk by building $5 billion into the original cost estimates to cover risk. They have also said that they will not exceed the cost ceiling of the development contract, but as a result, they may have to modify, reduce, or delete lower-priority FCS requirements. However, this approach would reduce capabilities, and a lesser set of FCS capabilities may not be adequate to meet the user’s expectations. Also, the Army is focusing on reducing the average unit production cost of the FCS brigade combat teams, which currently exceeds the amount budgeted for each brigade combat team. The Army has established a glide path to reduce the unit costs; however, program officials have said they are struggling in many cases to further reduce the unit costs, particularly as a result of challenges with the manned ground vehicles. Further, any additional savings from such initiatives may not be realized until several years later in the program. The FCS contract allows the program to make what are called “Program Generated Adjustments,” whereby any known cost overrun or increase in scope of work that would require additional funding is offset by identifying work scope that can be deleted with minimal impact to the program. 
Each year, the government and lead systems integrator will identify a prioritized list of candidate capabilities that can be partially or completely deleted and their associated budgets redirected to new work scope or to offset a cost overrun. The Army and lead systems integrator monitor the performance of the FCS program through an earned value management system, which allows program management to monitor the technical, schedule, and cost performance of the program. As the program proceeds, the Army and lead systems integrator can use the information gleaned from the earned value management system to make informed program decisions and correct potential problems early. According to earned value data, the FCS is currently tracking fairly closely with cost and schedule expectations. However, it is too early in the program for the data to be conclusive. Historically, the majority of cost growth on a development program occurs after the critical design review. Further, according to program officials, due to the size and complexity of the program, coupled with an uncertain budget from year to year, detailed planning packages are only planned about 3 to 6 months in advance. While this may be unavoidable for a program as complex as FCS, the near-term status of the program, as reported by the earned value management system, does not fully represent the extent of the challenges the Army still faces with FCS. Funding Constraints Have Forced the Army to Restructure Its FCS Plans FCS will command most of the Army’s investment budget and thus must compete with other major investments and operations. If FCS costs increase, demands outside FCS increase, or expected funding decreases, adjustments to FCS are likely to be necessary. Last year, we reported that the large annual procurement costs for FCS were expected to begin in fiscal year 2012, which was largely beyond the then-current budget planning period (fiscal years 2006 to 2011). 
This situation is called a funding “bow wave”: more funds would be required in the years just beyond those covered by the current defense plan, which are subject to funding limits. As previously structured, the FCS program would require over $12 billion annually in its peak procurement years. If the Army budget remains at its current levels, FCS could represent 60-70 percent of the Army’s procurement budget in those years, at a time when the Army must meet other demands, including force modularity, FCS spin-outs, complementary programs, aviation procurement, missile defense, trucks, ammunition, and other equipment. Recently, this tension between FCS scope, costs, and competing demands has led to another set of changes in the FCS program. The FCS program manager has informed us that, in light of budget issues for the 2008 to 2013 planning period, the Army has reduced annual production rates and plans to forgo two of the originally planned unmanned aerial vehicles, among other adjustments. While this course of action is necessary to accommodate funding realities, it has other consequences, as it would increase the FCS unit costs and extend the time needed to produce and deploy FCS-equipped brigade combat teams. It would also necessitate evaluating the effects of these changes on individual system requirements and on the aggregate characteristics of lethality, survivability, responsiveness, and supportability. Details of the adjustment to the FCS program are not yet finalized; thus, we have not evaluated the full implications of the changes. Considerations for the 2009 FCS Milestone Review By the time of the preliminary design review and the congressionally mandated go/no-go milestone in 2009, the Army should have more of the knowledge needed to build a better cost estimate for the FCS program. 
The Army should also have more clarity about the level of funding that may be available to it within long-term budget projections to fully develop and procure the FCS program of record. Continuing challenges include developing an official Army cost position that narrows the gap between the Army’s estimates and the independent cost estimate planned for that time frame (in that cost position, the Army should clearly establish whether it includes the complete set and quantities of FCS equipment needed to meet established requirements); ensuring that adequate funding exists in its current budget and program objective memorandum to fully fund the FCS program of record; and securing funding for the development of the complementary systems deemed necessary for the FCS as well as for procuring the FCS capabilities planned to be spun out to the current forces. Conclusions The Army has been granted considerable latitude to carry out a large program like FCS this far into development with relatively little demonstrated knowledge. Tangible progress has been made during the year in several areas, including requirements and technology. Such progress warrants recognition, but not confidence. Confidence comes from high levels of demonstrated knowledge, which are yet to come. Following the preliminary design review in 2009, there should be enough knowledge demonstrated to assess FCS’s prospects for success. It is thus important that specific criteria—as quantifiable as possible and consistent with best practices—be established now to evaluate that knowledge. At the same time, decision makers must put this knowledge in context. Specifically, if the FCS is able to demonstrate the level of knowledge that should be expected at a preliminary design review, it will be at about the point when it should be ready to begin system development and demonstration. 
Instead, by that time, FCS will be halfway through that phase, with only 4 years left to demonstrate that the system-of-systems design works before the planned production commitment is made. For that reason, decision makers will have to assess the complete business case for FCS. This will include demonstrative proof not only that requirements can be met with mature technologies and the preliminary design, but also that the remainder of the acquisition strategy adequately provides for demonstration of design maturity, production process maturity, and funding availability before the production decision is made. Clearly, it is in the nation’s interests for the FCS to be the right solution for the future and to be a successful development. FCS has not been an easy solution to pursue and underscores the commitment and vision of Army leadership. Nonetheless, in view of the great technical challenges facing the program, the possibility that FCS may not deliver the right capability must be acknowledged and anticipated. At this point, the only alternative course of action to FCS appears to be current Army weapons, increasingly upgraded with FCS spin-out technologies. It is incumbent upon DOD, then, to identify alternative courses of action to equip future Army forces by the time the go/no-go decision is made on FCS. Otherwise, approval to “go” may have to be given not because FCS is sufficiently developed, but because there is no other viable course of action. Recommendations for Executive Action We recommend that the Secretary of Defense establish criteria now that it will use to evaluate the FCS program as part of its go/no-go decision following its preliminary design review. 
At a minimum, these criteria should include a definition of acceptable technology maturity consistent with DOD policy for a program halfway through system development and demonstration; a determination of which FCS technologies will be scored against that definition; use of an independent assessment to score the FCS technologies; a definition of acceptable software maturity consistent with DOD policy for a program halfway through system development and demonstration; an independent assessment to score FCS software; the likely performance and availability of key complementary systems; an assessment of how likely the FCS system-of-systems—deemed reasonable from the progress in technology, software, and design—is to provide the capabilities the Army will need to perform its roles in joint force operations (such an assessment should include sensitivity analyses in areas of the most uncertainty); a definition of acceptable levels of technology, design, and production maturity to be demonstrated at the critical design review and the production decision; an assessment of how well the FCS acquisition strategy and test plan will be able to demonstrate those levels of maturity; a determination of likely costs to develop, produce, and support the FCS that is informed by an independent cost estimate and supported by an acceptable confidence level; and a determination that the budget levels the Army is likely to receive will be sufficient to develop, produce, and support the FCS at expected levels of cost. We also recommend that the Secretary of Defense analyze alternative courses of action DOD can take to provide the Army with sufficient capabilities, should the FCS be judged as unlikely to deliver needed capabilities in reasonable time frames and within expected funding levels. 
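One criterion above is a cost estimate "supported by an acceptable confidence level," that is, the probability that actual cost will not exceed the estimate, typically read off a distribution of simulated cost outcomes. The following is a minimal sketch of that idea, not any DOD or GAO tool; the cost figures, triangular distributions, and function names are hypothetical, for illustration only.

```python
# Hedged sketch: reading a cost estimate's "confidence level" off a
# distribution of simulated program-cost outcomes. All figures are
# hypothetical; this is not FCS data.
import random

random.seed(0)

def simulate_cost():
    """One simulated total program cost (in $B) from uncertain elements."""
    development = random.triangular(20.0, 35.0, 25.0)   # low, high, mode
    procurement = random.triangular(90.0, 160.0, 120.0)
    return development + procurement

outcomes = sorted(simulate_cost() for _ in range(10_000))

def cost_at_confidence(sorted_outcomes, level):
    """Smallest cost that `level` fraction of simulated outcomes fall below."""
    index = min(int(level * len(sorted_outcomes)), len(sorted_outcomes) - 1)
    return sorted_outcomes[index]

# An estimate "supported by an 80 percent confidence level" is set at least
# as high as the 80th percentile of the simulated outcomes.
p50 = cost_at_confidence(outcomes, 0.50)
p80 = cost_at_confidence(outcomes, 0.80)
```

The point of the sketch is that a budget set at the 50th percentile is exceeded in half the simulated outcomes, which is why an independent estimate at a higher stated confidence level is a more conservative basis for funding decisions.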
Agency Comments and Our Evaluation DOD concurred with our recommendations and stated that the Defense Acquisition Board’s review, aligned with the FCS program’s preliminary design review in 2009, will be informed by a number of critical assessments and analyses. These include a technology readiness assessment, a system engineering assessment, an independent cost estimate, an evaluation of FCS capabilities, an affordability assessment, and ongoing analyses of alternatives that include current force and network alternatives. We believe that these are constructive steps that will contribute to the Defense Acquisition Board review of the FCS following the preliminary design review. We note that it is important that the board’s review be recognized as a decision meeting—albeit not technically a milestone decision—so that a declarative go/no-go decision can be made on FCS. Accordingly, while it is necessary that good information—such as that included in DOD’s response—be presented to the board, it is also necessary that quantitative criteria that reflect best practices be used to evaluate the information. These criteria, some of which were included in our recommendations, should be defined by DOD now. For example, while FCS technologies need to be independently assessed, it is likewise important to establish what level of technology maturity is needed for a program at that stage and to evaluate the FCS technologies against that standard. This is true for software as well. In the area of cost, Army cost estimates should be evaluated against recognized standards, such as confidence levels as well as the independent cost estimate. We had also recommended that criteria be established to serve as a basis for evaluating the FCS acquisition strategy, including what would constitute acceptable levels of technology, design, and production maturity to be demonstrated at the critical design review and the production decision. 
DOD did not respond to these aspects of our recommendations, but a response is important because these criteria bear on the sufficiency of the FCS business case for the remainder of the program. Finally, as DOD evaluates alternatives, there are several things to keep in mind. First, an alternative need not be a rival to the FCS, but rather the next best solution that can be adopted if FCS is not able to deliver the needed capabilities. Second, an alternative need not represent a choice between FCS and the current force, but could include fielding a subset of FCS, such as a class of vehicles, if it performs as needed and provides a militarily worthwhile capability. Third, the broader perspective of the Department of Defense—in addition to that of the Army—will benefit the consideration of alternatives. We also received technical comments from DOD, which have been addressed in the report as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretary of the Army; and the Director, Office of Management and Budget. Copies will also be made available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contributors to this report were Assistant Director William R. Graveline, William C. Allbritton, Noah B. Bleicher, Marcus C. Ferguson, John P. Swain, Robert S. Swierczek, and Carrie R. Wilson. 
Appendix I: Scope and Methodology To develop the information on the Future Combat System program’s progress toward meeting established goals, the contribution of critical technologies and complementary systems, and the estimates of cost and program affordability, we interviewed officials of the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Army G-8; the Office of the Under Secretary of Defense (Comptroller); the Secretary of Defense’s Cost Analysis Improvement Group; the Director of Operational Test and Evaluation; the Assistant Secretary of the Army (Acquisition, Logistics, and Technology); the Army’s Training and Doctrine Command; Surface Deployment and Distribution Command; the Fraunhofer Center at the University of Maryland; the Program Manager for the Future Combat System (Brigade Combat Team); the Future Combat System Lead Systems Integrator; and Lead Systems Integrator One Team contractors. We reviewed, among other documents, the Future Combat System’s Operational Requirements Document, the Acquisition Strategy Report, the Selected Acquisition Report, the Critical Technology Assessment and Technology Risk Mitigation Plans, and the Integrated Master Schedule. We attended the FCS System-of-Systems Functional Review, In-Process Reviews, In-Process Preliminary Design Review, Board of Directors Reviews, and multiple system demonstrations. In our assessment of the FCS, we used the knowledge-based acquisition practices drawn from our large body of past work as well as DOD’s acquisition policy and the experiences of other programs. We discussed the issues presented in this report with officials from the Army and the Secretary of Defense and made several changes as a result. We performed our review from March 2006 to March 2007 in accordance with generally accepted auditing standards. 
Appendix II: Comments from the Department of Defense Appendix III: Technology Readiness Levels Technology Readiness Levels (TRL) are measures pioneered by the National Aeronautics and Space Administration and adopted by DOD to determine whether technologies are sufficiently mature to be incorporated into a weapon system. Our prior work has found TRLs to be a valuable decision-making tool because they can presage the likely consequences of incorporating a technology at a given level of maturity into a product development. The maturity level of a technology can range from paper studies (TRL 1), to prototypes that can be tested in a realistic environment (TRL 7), to an actual system that has proven itself in mission operations (TRL 9). According to DOD acquisition policy, a technology should have been demonstrated in a relevant environment (TRL 6) or, preferably, in an operational environment (TRL 7) to be considered mature enough to use for product development. Best practices of leading commercial firms and successful DOD programs have shown that critical technologies should be mature to at least a TRL 7 before the start of product development. Appendix IV: Technology Readiness Level Ratings [Table omitted: current TRL ratings against last year’s TRL 6 projections for FCS critical technologies, including the Army, Joint, Multinational Interface; Cross Domain Guarding Solution; Mobile Ad Hoc Networking Protocols; Quality of Service Algorithms; Multi-Spectral Sensors and Seekers; and air-to-ground and ground-to-ground communications.] Related GAO Products Defense Acquisitions: Improved Business Case Key for Future Combat System’s Success, GAO-06-564T. Washington, D.C.: April 4, 2006. Defense Acquisitions: Improved Business Case is Needed for Future Combat System’s Successful Outcome, GAO-06-367. Washington, D.C.: March 14, 2006. Defense Acquisitions: Business Case and Business Arrangements Key for Future Combat System’s Success, GAO-06-478T. Washington, D.C.: March 1, 2006. 
DOD Acquisition Outcomes: A Case for Change, GAO-06-257T. Washington, D.C.: November 15, 2005. Force Structure: Actions Needed to Improve Estimates and Oversight of Costs for Transforming Army to a Modular Force, GAO-05-926. Washington, D.C.: September 29, 2005. Defense Acquisitions: Resolving Development Risks in the Army’s Networked Communications Capabilities is Key to Fielding Future Force, GAO-05-669. Washington, D.C.: June 15, 2005. Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success, GAO-05-428T. Washington, D.C.: March 16, 2005. Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success, GAO-05-442T. Washington, D.C.: March 16, 2005. NASA’s Space Vision: Business Case for Prometheus 1 Needed to Ensure Requirements Match Available Resources, GAO-05-242. Washington, D.C.: February 28, 2005. Defense Acquisitions: The Army’s Future Combat Systems’ Features, Risks, and Alternatives, GAO-04-635T. Washington, D.C.: April 1, 2004. Defense Acquisitions: Assessments of Major Weapon Programs, GAO-04-248. Washington, D.C.: March 31, 2004. Issues Facing the Army’s Future Combat Systems Program, GAO-03-1010R. Washington, D.C.: August 13, 2003. Defense Acquisitions: Army Transformation Faces Weapon Systems Challenges, GAO-01-311. Washington, D.C.: May 2001. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes, GAO-01-288. Washington, D.C.: March 8, 2001.
The Future Combat System (FCS) is central to Army transformation efforts, comprising 14 integrated weapon systems and an advanced information network. In previous work, GAO found that the elements of a sound business case—firm requirements, mature technologies, a knowledge-based acquisition strategy, a realistic cost estimate, and sufficient funding—were not present. As a result, FCS is considered high risk and in need of special oversight and review. Congress has mandated that the Department of Defense (DOD) decide in early 2009 whether FCS should continue. GAO is required to review the program annually. In this report, GAO analyzes FCS development, including its requirements definition; status of critical technologies, software development, and complementary programs; soundness of its acquisition strategy related to design, production, and spin-out of capabilities to current forces; and reasonableness of costs and sufficiency of funding. The Army has been granted considerable latitude to carry out a large program like FCS this far into development with relatively little demonstrated knowledge. Tangible progress has been made during the year in several areas, including requirements and technology. Such progress warrants recognition, but confidence that the program can deliver as promised depends on high levels of demonstrated knowledge, which are yet to come. Following the preliminary design review in 2009, there should be enough knowledge to demonstrate the soundness of the FCS business case. If significant doubts remain about the program's executability at that time, DOD will have to consider alternatives to proceeding with the program. Currently, GAO sees the FCS business case as follows. Requirements—Progress has been made in defining requirements and making some difficult trade-offs, but key assumptions about the performance of immature technologies and other technical risks remain to be proven. 
Technology—The Army has made progress in maturing technologies, but it will take several more years to reach full maturity. All key technologies should have been mature in 2003 when the program began. FCS software has doubled in size compared to original estimates and faces significant risks. The Army is attempting a disciplined approach to managing software development. Acquisition Strategy—The FCS acquisition strategy is compressed. Key testing to demonstrate FCS performance will not be completed, and the maturity of the design and production processes will not be demonstrated, until after the production decision. Program Costs—New estimates place FCS costs significantly above the current estimate of $163.7 billion. The Army has recently proposed a plan to buy fewer systems and slow production rates. This recent program adjustment will affect program costs, but details are not yet available.
Background DOD Obligations and Workforce Trends From fiscal years 2001 through 2008, DOD’s reported obligations on contracts for services, when measured in real terms, more than doubled— from roughly $92 billion to slightly over $200 billion. These obligations accounted for over half of the department’s total contract obligations in fiscal year 2008. Over that same time period, DOD’s obligations on professional, administrative, and management support contracts nearly tripled from $14.2 billion to $42 billion. These services represented about 15 percent of DOD’s total obligations on services contracts in 2001 and 21 percent in 2008. As we have reported in the past, this increased use of contractor-provided services has been the result of thousands of individual decisions, not the result of strategic, comprehensive planning for the whole department in which the volume and composition of contracted services have been measured outcomes. We also noted that the absence of well-defined requirements, sound contracting arrangements, or effective management and oversight has contributed to schedule delays, cost overruns, and unmet expectations. Despite substantial increases in spending on both goods and services from fiscal year 2001 through 2008, DOD’s acquisition workforce has declined by 2.6 percent (see table 1). Without an adequate workforce to manage DOD’s billion-dollar acquisitions, there is an increased risk of poor acquisition outcomes and vulnerability to fraud, waste, and abuse. We reported in March 2009 that DOD lacked critical, departmentwide information needed to ensure that its acquisition workforce was sufficient to meet its national security mission. We found, for example, that DOD did not collect or track information on contractor personnel, despite the fact that those personnel providing professional and management support services make up a key segment of the total acquisition workforce. 
Additionally, DOD lacked complete information on the reasons personnel are contracted, thus limiting its ability to determine whether decisions to augment the in-house acquisition workforce with contractors were appropriate. Risks of Contractors Closely Supporting Inherently Governmental Functions Federal agencies acquire a range of services, from basic services, such as custodial and landscaping services, to more complex professional and management support services, which may closely support the performance of inherently governmental functions. Tasks that require discretion in applying government authority or value judgments in making decisions for the government are defined by the FAR as inherently governmental functions; as such, they are required to be performed by government employees, not private contractors. The FAR provides 20 examples of such work functions, including determining agency policy or federal program budget request priorities; directing and controlling federal employees; and awarding, administering, or terminating federal contracts. The FAR also provides examples of functions that, while not inherently governmental, approach that category due to the nature of the function, the manner in which a contractor performs the task, or the methods used by the government to administer performance under a contract (see app. II). Services that closely support inherently governmental functions include professional and management support services, such as those that involve or relate to supporting budget preparation; program planning; acquisition planning; technical evaluation for contract proposals or source selections; and development of statements of work. The decision to turn to contractors can, in some cases, create risks that the government needs to consider and manage. Of key concern is the loss of government control over and accountability for mission-related policy and program decisions when contractors provide services that closely support inherently governmental functions. 
The closer contractor services come to supporting inherently governmental functions, the greater the risk of their influencing the government’s control over and accountability for decisions that may be based, in part, on contractor work. This may result in decisions that are not in the best interest of the government and may increase vulnerability to waste, fraud, and abuse. Given this risk, the FAR and Office of Federal Procurement Policy (OFPP) guidance state that greater scrutiny and an enhanced degree of management oversight are required when contracting for functions that closely support the performance of inherently governmental functions. Additionally, the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 required, among other things, that prior to entering into a contract for performance of acquisition functions that closely support inherently governmental functions, DOD must ensure that its personnel cannot reasonably be made available to perform these activities, that appropriate DOD personnel supervise contractor performance and perform all inherently governmental functions, and that DOD address any potential organizational conflict of interest of the contractor in performing these tasks. Table 2 provides examples of contracted services and their relative risk of influencing government decision making. Our previous work has shown that the use of contractors for services that closely support inherently governmental functions introduces other risks due to a potential loss of government control over program decisions. 
Such concerns include an increased potential for conflicts of interest, both organizational and personal; the potential for improper use of personal services contracts, which the FAR generally prohibits because of the employer-employee relationship they create between the government and contractor personnel; and the potential additional cost to the government of hiring contractors rather than government personnel. DOD Management and Peer Reviews The National Defense Authorization Act for Fiscal Year 2002 required the Secretary of Defense to establish a review process for the approval of DOD services acquisitions. USD(AT&L) issued guidance in May 2002 establishing its management reviews, intending to ensure that DOD’s services acquisitions are based on clear, performance-based requirements with measurable outcomes and that acquisitions are planned and administered to achieve intended results. Under this initial guidance USD(AT&L) was to review all proposed services acquisitions with an estimated value of $2 billion or more, and military department and other defense component officials were to review those below that threshold. The military departments each subsequently developed their own management review processes for acquisitions that contained several of the same elements. Chief among these was the requirement that written acquisition strategies must be reviewed and approved by senior officials before contracts may be awarded. The content of these strategies included, among other things, information on contract requirements, risks, and business arrangements. Once the acquisition strategies are approved, DOD contracting offices may continue the acquisition process, including soliciting bids for proposed work and subsequently awarding contracts. Based on further requirements in the National Defense Authorization Act for Fiscal Year 2006, in October 2006, USD(AT&L) issued a memorandum updating DOD’s acquisition of services policy. 
Under the updated policy, all proposed services acquisitions with a value estimated at more than $1 billion or designated as “special interest” should be referred to USD(AT&L) for review. The dollar threshold for military department reviews was also lowered. While the substance of the management reviews remained largely unchanged from the 2002 policy, the updated policy did incorporate a few additional, specific acquisition strategy requirements concerning the inclusion of any required waivers and a top-level discussion of source selection processes. In 2006, we reported that although DOD had established formal management reviews for the approval of services acquisitions, issues with services contracts at the strategic and transactional levels remained. We reported that DOD’s approach to managing services acquisitions tended to be reactive and that it had not developed a means for gauging whether ongoing and planned efforts were achieving intended results. At the transactional level, DOD focused primarily on elements associated with awarding contracts, with much less attention paid to requirements or the assessment of the actual delivery of contracted services. Moreover, the results of individual acquisitions were generally not used to inform or adjust strategic direction. We recommended that, among other actions, DOD take steps to determine areas of specific risk that were inherent in acquiring services and that should be managed with greater attention. DOD agreed with this recommendation and has identified actions under way to address our concerns. In response to a requirement in Section 808 of the National Defense Authorization Act for Fiscal Year 2008, USD(AT&L) established a multiphased, pre- and post-contract award independent management review, or peer review, process for services acquisitions. 
In December 2008, DOD Instruction 5000.02, Operation of the Defense Acquisition System, was revised and incorporated these peer reviews as well as the management reviews and dollar thresholds established in the October 2006 acquisition of services policy.

DOD Policies Do Not Require an Assessment of Risk of Contractors Closely Supporting Inherently Governmental Functions at Key Acquisition Decision Points

DOD and the military departments are to assess a number of risks when developing an acquisition strategy for services, but DOD policy does not require an assessment of risks associated with contractors closely supporting inherently governmental functions at two key decision points—when approving acquisition strategies or issuing task orders. All 7 of the proposed acquisitions for professional, administrative, and management services and more than 75 percent of the 64 related task orders we reviewed required the contractor to provide services that closely supported inherently governmental functions. A DOD instruction issued after the approval of the acquisition strategies we reviewed requires that consideration be given to using civilian personnel rather than contractors, specifically when the activities to be performed cannot be separated or distinguished from inherently governmental functions. However, once the decision to rely on contractors is made, DOD personnel are not required to identify and document risks posed when contractors are given responsibility for closely supporting inherently governmental functions or take steps to mitigate those risks.

Risks Associated with Contractors Closely Supporting Inherently Governmental Functions Are Not among Those Assessed in Acquisition Strategies Submitted for Management Reviews

DPAP representatives we met with noted that the acquisition strategy is a “big picture” document that examines whether the proposed methods for acquiring services are sound and how the acquisition compares with previous acquisitions.
The strategy is developed, in part, with the assistance of both program office representatives, who identify the requirements, and contracting personnel, who will develop the basic contracts and subsequently issue task orders. Agency personnel are required to document in the acquisition strategy their assessment of current and potential technical, cost, schedule, and performance risks; the level of those risks; and a mitigation plan. As part of the acquisition strategy, this assessment is subsequently reviewed by senior DOD officials during management reviews. Documentation for all seven acquisition strategies we reviewed included a discussion of these risks as well as methods to mitigate their impact, as required in policy. DOD policy, however, does not require the military departments to consider and document in their acquisition strategies the extent to which the proposed services will be used to closely support inherently governmental functions. As a result, none of the acquisition strategies or related risk assessment documentation we analyzed under DOD’s management reviews identified such concerns or any related mitigation strategies. The acquisition strategies and supporting documentation we reviewed included broad descriptions of the services to be provided over the course of the acquisition, which included acquisition, contracting, and policy development support services. Each of these services is identified in the FAR as an example of those that closely support inherently governmental functions. DOD and military department officials we spoke with indicated that, to some degree, DOD’s management review is not well suited to assess the risks of contractors closely supporting inherently governmental functions.
These officials noted that the acquisition strategies generally described requirements in broad terms and that the timing of the review—often months before DOD actually solicits bids from the contractors or awards contracts and issues task orders for specific services—makes it challenging to know what specific risks might be encountered or whether a mitigation strategy is warranted. These officials indicated that identifying such risks would be more appropriate during the planning for the subsequent award of contracts or issuance of task orders, when the program offices have more clearly defined their specific needs and requirements. In most cases, however, the military departments had prior knowledge about their expected use of contractors to provide such services. For example, all of the acquisition strategies and supporting documentation we reviewed justified the need to obtain contractor support to perform these functions due to a lack of government resources needed to meet daily mission requirements. These documents and contracting officials indicated that the offices supported by these acquisitions have long relied, in some cases for over a decade, on contractor support to augment the government workforce and perform tasks that closely support inherently governmental functions. For example, although the Professional Acquisition Support Services strategy that supports the Air Force’s Electronic Systems Center was approved in 2006, the center has contracted for similar acquisition support services continuously since 1984. Likewise, the acquisition management services provided to the Air Force’s Air Combat Command through the Technical Acquisition Management Support 3 acquisition have been obtained through previous acquisitions dating back to 1989. 
Although officials stated that documentation explaining the need to contract for these services in the past was often unavailable, contracting officers and program officials indicated that reductions in government personnel have led to the increased use of contractors to perform activities government personnel would have performed in the past. Program and contracting officials stated that they would now prefer to use government personnel to perform these activities, but they noted that the length of time it takes to hire federal employees and the lack of available personnel funds or positions necessitate the use of contractor support. Program and contract officials also informed us that the decision not to pursue additional federal employees instead of contractors was made by the supported programs before these officials became involved with the acquisition process. Further, they indicated they were not provided with the analyses used to support these decisions. We did not find any analysis or discussion of how these decisions were made in the acquisition strategies or supporting documentation submitted for the management review.

Risks of Contractors Closely Supporting Inherently Governmental Functions Are Not Considered and Documented by DOD Personnel When Awarding Contracts or Issuing Task Orders

According to DOD officials, personnel are not required to consider and document the risks associated with contractors closely supporting inherently governmental functions when awarding contracts or issuing task orders. Forty-nine of the 64 task orders we reviewed included services that, as described in the FAR, are examples of activities that closely support inherently governmental functions, including support for developing statements of work or contract documents and budget preparation.
Program managers and contracting officers we spoke with acknowledged that contractors closely supported inherently governmental functions, but none of the contract files identified them as such or indicated if any steps were taken to address related risks. The associated contract files for each of the task orders we reviewed included provisions specifically prohibiting the contractor from performing inherently governmental functions. Program managers and contracting officers informed us that they were aware of the importance of preventing contractors from performing inherently governmental functions as required by the FAR. These officials acknowledged that without contractor support they could not fulfill mission requirements or continue certain program activities, and some recognized that the close working relationships that develop between government and contractor support personnel increase the risks of contractors performing inherently governmental functions. To prevent contractors from performing such tasks, program and contracting officials indicated that they reviewed task order requirements to ensure that they are within the scope of the acquisition and do not require contractors to perform tasks that should be left only to government employees. Officials further stated that when developing performance work statements they emphasize that the contractors’ role is to provide assistance to the government rather than make program decisions. Program and contract personnel that we interviewed who were responsible for overseeing the work done under the task orders were unaware, however, of the FAR requirement to provide greater scrutiny and an enhanced degree of management oversight and surveillance when contracting for services that closely support inherently governmental functions.
Additionally, federal internal control standards require that agencies conduct an assessment of risks, such as risks that result from heavy reliance on contractors to perform critical agency operations. According to DOD officials, however, DOD has provided no specific guidance that defines how contracting and program officials should conduct such enhanced oversight. DPAP officials noted that additional information on how to oversee contractors that closely support inherently governmental functions would be useful to the military departments, but acknowledged that they have no ongoing efforts to do so.

Recent Guidance Requires DOD to Consider Using Civilian or Military Employees to Perform Activities That Closely Support Inherently Governmental Functions

In May 2009, DOD issued guidance in response to legislation requiring DOD to devise and implement guidelines and procedures to ensure that consideration is given to converting, or in-sourcing, functions currently performed by contractors to DOD civilian personnel. This in-sourcing guidance instructs DOD personnel to prioritize the conversion of any currently contracted services to DOD civilian performance if the functions are valid and enduring mission requirements; are inherently governmental functions; are exempted functions; are unauthorized personal services; have problems with contract administration; or are services that require “special consideration” for in-sourcing. Under the law and the guidance, one of the categories of services that should be given “special consideration” for in-sourcing is services that closely support inherently governmental functions. The May 2009 guidance also states that when making certain in-sourcing decisions, agency personnel should consult workforce management officials, as specified in DOD’s Guidance for Determining Workforce Mix.
This instruction requires DOD personnel to pay particular attention when contracting for activities that closely support inherently governmental functions. If an activity is so closely associated with an inherently governmental function that it cannot be separated or distinguished from it, the instruction requires that the function be identified as inherently governmental and precluded from private sector performance. This safeguard is intended to prevent the transferring of governmental authority, responsibility, or accountability to the private sector. However, neither the May 2009 in-sourcing guidance nor DOD’s Guidance for Determining Workforce Mix requires DOD contracting and program personnel to identify, document, or mitigate risks posed when contractors will be relied on to closely support inherently governmental functions. Further, under the May 2009 in-sourcing guidance, the conversion of services that require “special consideration” to government performance is to be based solely on cost and not risk; that is, these services may be in-sourced only if a cost analysis shows that performance by DOD civilian employees is more cost-effective. Officials from the Office of the Director for Cost Assessment and Program Evaluation stated that guidance for standardizing how these cost analyses should be performed was expected to be issued by December 2009. The May 2009 in-sourcing guidance also requires that when cost analyses indicate that the private sector is the more cost-effective provider of services, a written confirmation should be provided to contracting officers. Further, the guidance states that all documents leading to the decision to contract for such services should be retained in contract files. DPAP and military department officials stated that cost analyses are not required to be submitted in the documentation supporting management reviews.
DOD Faces Challenges In Implementing Performance-Based Practices on Professional and Management Support Task Orders

DOD faces challenges in defining requirements and outcome-based measures when using a performance-based approach to acquire professional and management support services. DOD personnel generally expressed task order requirements in terms of a broad range of services that the contractors might be required to perform and used a mix of objective and subjective measures to determine whether the contractor achieved assigned tasks within expected cost, schedule, and quality parameters. For example, 63 percent of the task orders that assessed contractor cost performance principally used objective performance measures, while two-thirds of the task orders that assessed the quality of contractor performance principally used subjective measures. We found objective measures generally provided more discrete information to assess contractor performance. In contrast, subjective measures, especially those to assess the quality of contractor work, tended to rely on customer feedback, such as the number of complaints lodged against the contractor. In several instances, DOD missed opportunities to include objective performance measures that may have been better suited to assess contract outcomes, in part because DOD personnel used the performance measures established in the base contract rather than attempt to measure the specific services being provided under the task order. DOD officials acknowledged there are challenges in developing measures that assess the outcomes of professional and management support contracts and noted recent actions to improve existing guidance.

FAR and DOD Guidance Established Preference for Performance-Based Services Acquisition

In 2000, Congress established performance-based approaches as the preferred acquisition method for acquiring most services.
Under the FAR, all performance-based acquisitions should include: a performance work statement that describes outcome-oriented requirements in terms of results required rather than the methods of performance of the work; measurable performance standards describing how to measure contractor performance in terms of quality, timeliness, and quantity; and the method of assessing contract performance against performance standards, commonly accomplished through the use of a quality assurance surveillance plan. DOD issued its Guidebook for Performance-Based Services Acquisition in the Department of Defense in December 2000 to educate DOD personnel on, and promote the use of, performance-based practices. The guidebook suggests that personnel develop performance objectives that encompass at a top level all the tasks that must be completed to arrive at the desired outcome. The guidebook states that performance standards should be identified for the performance objectives so personnel will know if the desired outcome was satisfactorily achieved. It further states that determining an appropriate performance standard is a judgment call based on the needs of the mission and available expertise. DOD’s guidance also identifies that surveillance personnel may use various measures to assess the contractor’s performance, such as random or periodic sampling of the contractor’s work as well as customer feedback. The guidebook indicates, however, that customer feedback should be used prudently as it is subjective and does not always relate to the requirements of the contract. Lastly, the guidebook provides examples of performance objectives with corresponding standards and measures for various services and activities, but not specifically for professional and management support services (see table 3). 
Performance Standards and Measures Were Not Always Well Suited to Assess the Outcomes of the Broad Range of Contracted Services

While DOD identified as performance-based all but one of the task orders we reviewed, we found that almost all of the task orders had broadly defined requirements that listed various categories of services and related activities the contractor may be required to perform over the course of the order rather than expected results. The task orders we reviewed were issued from base contracts that identified the categories of support services a contractor may be required to perform. The task orders then identified a broad range of activities that the contractor may be required to perform based on the customer program office’s needs. For example, the base contract for one task order identified four different categories of support services: acquisition, financial management, contracting, and administrative and human resources support. In turn, the task order identified several activities the contractor could perform, such as preparing acquisition-related documents, updating commanders on policies and procedures, tracking and analyzing funds, maintaining contract files, and preparing travel orders. DOD generally grouped contractor performance into a number of different performance objectives, including cost, schedule, and quality, and set standards that the contractor had to meet. These performance objectives required the contractor to, among other things, maintain control of costs by completing work within an acceptable range of projected costs, adhere to the government’s schedule by delivering products on time, and provide the government with high-quality work products.
DOD personnel used a mix of objective and subjective measures to assess the contractors’ performance against the cost, schedule, and quality standards in 54 of the 64 task orders we reviewed; not all task orders established performance measures in all three categories (see table 4). The measures used varied depending on the area of performance assessed. For example, 63 percent of the task orders that assessed the contractor’s cost performance principally used objective performance measures, while two-thirds of the task orders that assessed the quality of the contractor’s performance principally used subjective measures. We found that the objective performance measures yielded better information on how well the contractor met desired cost and schedule contract outcomes than subjective measures. For example, the objective performance standards in one task order required the contractor to remain within projected costs and perform tasks within schedule 97 percent of the time. In this instance, the surveillance personnel maintained a database of the contractor’s cost and periodically contacted customers to identify if the contractor’s work was completed on time. The other task orders that included cost and schedule performance standards were assessed subjectively, based on the number of complaints lodged against the contractor. According to task order documentation and surveillance personnel we spoke with, a customer complaint is generated when a contractor fails to meet performance requirements. Ten of these task orders required the contractor to provide accurate cost forecasts and accomplish tasks with minimum delay to program mission objectives. None of these task orders, however, identified what would constitute a level of performance that would result in a valid customer complaint. 
Contractor performance documentation we reviewed for these task orders indicated that DOD assessed the contractor as having met performance requirements, but provided little to no information on what the contractor accomplished. Both the subjective and objective performance measures used to assess the quality of the contractors’ services provided little insight into the outcomes of the contractors’ work. DOD personnel relied on subjective measures to assess the quality of contractor services provided in 40 of the 64 task orders we reviewed. For example, 30 of these task orders directed surveillance personnel to assess quality based on the number of customer complaints a contractor received. Surveillance personnel we spoke with indicated that they regularly contacted government customers to inquire about their overall satisfaction with the contractor’s performance. Some surveillance personnel explained that it was difficult to determine if the contractor met or exceeded performance standards because guidance on making these determinations was not available. Consequently, they generally documented the contractor’s performance as acceptable if the customer said they were satisfied with the contractor’s performance. We found that when the contractor’s performance was rated as meeting requirements, surveillance personnel documented little detail about the quality of the contractor’s work because such descriptions were often required only when the contractor either exceeded or did not meet expectations.
Another 10 task orders involved the purchase of engineering, technical, business, and acquisition support services, such as helping to identify the program office’s contracting requirements and assisting with developing requests for proposals and statements of work to fill those requirements. For these orders, quality performance standards required the contractor to complete task order activities with little rework and with few minor and no significant problems at least 80 percent of the time. According to the documentation we reviewed, surveillance personnel indicated they periodically sampled the contractor’s work to verify the percentage that was redone and often rated the quality of the work as exceptional because little rework was reported. On four task orders for translation services in Afghanistan, Cuba, and other areas, DOD personnel did not attempt to measure the quality of the contractor’s services. In these cases, the task orders’ requirements included that the contractor deploy translators in response to mission requirements. The corresponding performance standard required the contractor to meet staffing requirements no less than 95 percent of the time, which was measured and documented by surveillance personnel. The task orders, however, did not include a performance objective for obtaining high-quality translation services. Contracting and program officials explained that they did not try to measure the quality of the translations provided because DOD lacks the personnel with translation skills necessary to make such an assessment. These officials stated that ensuring qualified personnel are provided in a timely manner is the best alternative to determining if the translations provided are of high quality. The military departments may have missed opportunities to include and use objective performance measures that were better suited to assess contract outcomes in several of the task orders we reviewed.
In more than 80 percent of the task orders we reviewed, this occurred in part because DOD personnel used the general performance measures established in the base contract rather than developing measures tailored to the specific work required in the task orders. Consequently, in each of the following examples, the quality of the contractor’s performance was measured based on the number of validated complaints submitted by government personnel. Four task orders issued for acquisition, financial management, contracting, and administrative support required contractors to identify, at the end of each contract year, at least two lessons learned, best practices, or improvements made to the government’s processes in areas including acquisition and program management. The requirements did not, however, identify at the outset of the task order the type or extent of the improvements that DOD desired, such as reductions in the time required to complete activities or cost savings. In a $1.8 million task order for information and project management support, the contractor was required to identify and reduce the government’s unliquidated obligations by 25 percent in 6 months and by 50 percent in a year when measured against an identified baseline. Nevertheless, the task order did not include any performance measures that were directly related to whether the contractor met the reduction targets. The official responsible for assessing the contractor’s performance noted, however, that he considered the contractor’s efforts to reduce unliquidated obligations when he assessed the contractor’s performance. In four other task orders for acquisition and information management support, the contractor was required to review technical proposals and validate prices submitted by other contractors and make recommendations on the acceptance or rejection of these proposals as part of the support it provided to the program office’s pre-award activities.
The task orders, however, did not include objective performance measures to assess the contractor’s performance, such as whether the contracting officer returned the contractor’s work to correct deficiencies or whether the reviews resulted in reducing the government’s costs.

DOD Officials Identified Challenges in Developing Objective, Outcome-Oriented Measures

Our previous work at the Department of Homeland Security noted that defining outcome-oriented requirements and measurable performance standards may be challenging for professional and management support services. We found similar concerns expressed by the DOD contracting, program, and surveillance officials we interviewed. These officials acknowledged that they find it difficult to identify and objectively measure the outcomes of professional and management support services contracts due to the broad range of support provided. They stated that these task orders encompassed a range of activities which, while not inherently governmental, would typically be performed by federal employees. Consequently, officials stated the performance work statements needed to be written broadly to provide the flexibility to obtain specific support as needed and that contractors were often viewed as simply augmenting the government’s workforce. Further, these officials noted that it was often not practical to measure the work contractors performed and that subjective measures, such as the number of customer complaints received, are frequently used as an alternative to assess whether the contractor met the government’s requirements. As a result, they generally considered the outcome of these task orders to be obtaining qualified people rather than a specific result the contractor was required to achieve.
To address the challenges of developing performance-based requirements and measures for professional and management support, a Defense Acquisition University (DAU) official noted that DAU was reviewing performance work statements and surveillance plans from professional and management support contracts across DOD to identify good examples of outcome-based performance objectives, standards, and measures. DAU plans to launch a Web site in January 2010 that includes templates derived from these examples for contracting and program officials across DOD to tailor to their own needs. Additionally, this official noted that since January 2009, DAU has offered a 4-day services acquisition workshop tailored to individual acquisitions developed by program offices across DOD. According to the DAU official, the workshop brings together key acquisition personnel, from contracting officers to customers, to support the development of new acquisition strategies before they are reviewed by the military departments. The official added that by the end of the workshop, the requiring activity has a draft performance work statement and a quality assurance surveillance plan that meets performance-based requirements.

DOD Efforts to Designate Trained Surveillance Personnel Show Progress, but Concerns Remain

DOD’s efforts to ensure that trained surveillance personnel are assigned to monitor contractor performance on services contracts have shown progress, though personnel on a number of task orders were not designated or trained in a timely fashion. Surveillance personnel are required to be qualified by training and designated in writing.
In response to our reports and those of the DOD Inspector General on continued shortcomings in DOD’s contract surveillance practices, DOD issued guidance on December 6, 2006, requiring that surveillance personnel be properly trained and designated before contract performance begins and that properly trained surveillance personnel be identified on active contracts for services. At the time of our review, trained surveillance personnel were designated to all 64 task orders. DOD personnel responsible for five of the seven acquisitions we reviewed, however, did not maintain documentation on all surveillance personnel assigned over the task orders’ entire period of performance. DOD officials stated that in some cases additional personnel may have been designated to the task orders, and that some personnel may have received surveillance training on a date earlier than was indicated in the contract files, but were unable to provide documentation. In most cases, DOD was able to provide documentation of the first person designated to conduct surveillance and the person assigned at the time of our review. On the basis of this information, we found that surveillance personnel were designated after contract performance began on 3 of the 37 task orders awarded after the issuance of the December 2006 guidance. In 1 of these 3 cases, the person was designated more than 90 days after performance began on the task order. We also found that 61 of the 64 surveillance personnel designated on the task orders we reviewed had received training. For 1 of the 3 instances where personnel were not trained, a program official explained that the person was recently assigned and had been notified of the training requirements, but had not completed the training. In the 2 other instances, DOD officials were not able to identify a reason for the lack of training. For the 61 task orders with trained personnel assigned, 20 personnel had not received training prior to beginning their assignments.
Furthermore, in 3 of these cases, surveillance personnel did not receive the required training until at least a year after they were assigned to monitor a contractor’s performance. The training that surveillance personnel received varied across and within the military departments, ranging in duration from 2 hours to 1 week, and included, among other things, reviewing training slides, completing an on-line course offered through DAU’s Web site, and completing an in-class course tailored by the program office responsible for the acquisition. There are no DOD-wide requirements for the content of surveillance training, and personnel we spoke with provided mixed feedback on how well their training prepared them to conduct surveillance. Surveillance personnel noted that training provided an adequate basis for conducting their duties, but did not always provide enough instruction on how to effectively oversee contractors, especially for those who had little to no previous experience with assessing contractor performance. Several personnel stated that the most useful information that training provided was the contact information for personnel in the contracting offices that surveillance personnel could speak with if they had questions. DPAP officials acknowledged that the type and content of the training surveillance personnel receive varied and indicated that DOD is considering a certification system for these personnel that may include both training and experience requirements. Surveillance personnel identified a number of challenges that may affect their surveillance duties, such as numerous contractors to oversee in multiple locations and surveillance responsibilities being secondary to primary duties. For example, on one task order for translation services in Afghanistan, surveillance personnel were each responsible for monitoring over 1,000 contractors dispersed throughout the country.
An official who oversaw this task order stated that the ratio of surveillance personnel to contractors was so large that it affected the government’s ability to assess contractor performance. For a task order for translation services in Cuba, the contracting officer’s letter of designation to the contracting officer’s representative stressed the importance of on-site surveillance. Nevertheless, we found that there was an 8-month period during which surveillance personnel were absent from the site of contractor performance. Despite the absence of on-site surveillance personnel, contracting officials determined that the contractor should receive the full award fee, based on performance reports submitted before and after this 8-month period. Several personnel from other commands we visited told us they did not have sufficient time to focus on their surveillance responsibilities in addition to their primary duties. Finally, surveillance personnel at many of the commands we visited stated that they were unaware of requirements to provide enhanced oversight of contractors that closely support inherently governmental functions. Recent Initiative May Improve DOD’s Insight into Issues Affecting Professional and Management Support Contracts DOD is in the process of implementing additional processes to review services contracts both before and after contract award, which may provide additional insight into DOD’s management and oversight of professional and management support contracts. These reviews are intended to assess a number of issues that are not currently addressed by DOD’s management review, including contractors that closely support inherently governmental functions, implementation of performance-based practices, and proper surveillance of contractors. The National Defense Authorization Act for Fiscal Year 2008 required DOD to issue guidance to implement an independent management review of services contracts.
In response, the Director of DPAP issued memorandums in September 2008 outlining pre- and post-award peer reviews and in February 2009 detailing criteria for the peer reviews for services acquisitions with estimated values of $1 billion or more, consistent with the threshold for reviewing proposed services acquisitions. To conduct these reviews, DOD convenes peer review teams that consist of senior contracting leaders from across DOD as well as legal counsel to work closely with the offices responsible for developing acquisition strategies. This policy also required the military departments to establish their own procedures to conduct peer reviews for services acquisitions valued at less than $1 billion. As of October 2009, DPAP had conducted 48 pre-award peer reviews on 31 different proposed supplies and services acquisitions. DPAP had also conducted three post-award reviews on approved and ongoing services acquisitions, which included a review of task order documents. Of the 51 reviews conducted by DPAP, four pre-award reviews and one post-award review were conducted on three different professional and management support services acquisitions. The peer review process differs from DOD’s management reviews in a number of areas that may provide opportunities for the department to address key aspects of managing and overseeing professional and management support contracts. Whereas the premise of the management review is to assess and approve proposed acquisition strategies, the peer reviews are conducted after strategies have been approved and are intended to be advisory in nature. Peer reviews are designed to help ensure that contracting officers across DOD implement policy and regulations in a consistent and appropriate manner, improve the quality of DOD’s contracting processes, and facilitate cross-sharing of best practices and lessons learned.
Currently, a peer review team summarizes the results of its review in a memorandum provided to both the contracting office responsible for the acquisition and DPAP. According to DPAP officials, DOD is still determining how to share best practices and lessons learned from these reviews with the department’s acquisition community. While both the peer reviews and the management reviews contain pre-award components, the multiple phases of the peer reviews provide DOD the opportunity to address additional issues and examine documents not available during the management review (see fig. 1). For example, the pre-award management review of proposed services acquisitions occurs before performance work statements and quality assurance surveillance plans are developed. As a result, the management reviews do not include an assessment of how performance-based practices are implemented and whether proper contractor surveillance is conducted. Further, as previously noted, DOD’s management review guidance does not require department personnel to identify whether the services to be provided closely support inherently governmental functions or how the risks associated with contractors providing such services will be addressed. In contrast, the pre-award peer reviews occur later in the acquisition cycle and include the review of additional documents that may provide reviewers an opportunity to recommend improvements to performance work statements and surveillance plans. Additionally, the peer review process provides for a post-award phase for services that expands upon the management review requirements.
The post-award peer review includes eight metrics that reviewers use to evaluate ongoing acquisitions, including how contractor performance is assessed (such as the use of objective criteria), whether surveillance personnel are appropriately staffed, and the extent of reliance on contractors to perform tasks closely associated with inherently governmental functions. Table 5 shows the various areas of focus of the peer reviews. Our review of the summary memoranda for the five peer reviews conducted on professional and management support contracts found that while the memoranda generally focused on business aspects, some made recommendations related to performance-based approaches and surveillance issues specific to the individual acquisition. For example, a pre-award peer review team recommended that the contracting office work with DAU to develop performance-based statements of work. The peer review team for the one post-award review recommended that the program office identify objective performance measures and present them to the contractor. Another pre-award peer review team recommended that the contracting office appoint surveillance personnel prior to the award of a contract. None of the memoranda we reviewed noted issues with contractors closely supporting inherently governmental functions. DOD officials stated that these issues were discussed but that none appeared to warrant inclusion in the memoranda. While DPAP has completed peer reviews on 34 individual supplies and services acquisitions since September 2008, developing an approach that will achieve the peer review’s objectives enterprisewide may prove challenging given the nature and volume of service contract activity. Nearly 1,900 task orders were issued under the seven professional and management support services acquisitions we reviewed, all of which required a contractor to perform multiple and varying tasks.
Furthermore, we identified thousands of individual task orders associated with the professional and management support services acquisition strategies approved by the military departments from fiscal years 2004 through 2007, all of which require management and oversight by the department. These numbers are even greater when looking at all services acquisition strategies approved by the military departments in that same time period (see table 6). Conclusions DOD’s reliance upon contractor-provided professional, administrative, and management services to support its missions makes effective management and oversight of these contracts critical. Certain activities, such as budget preparation, acquisition planning, and policy development, can create risks that the government needs to consider and effectively manage. Of key concern is the loss of government control over and accountability for policy and program decisions. Nevertheless, DOD’s policies do not require that DOD personnel include an assessment of these risks when their proposals for contractor support are submitted for approval under DOD’s management review process. While recent legislation and DOD’s implementing guidance require that DOD consider whether to continue the use of contractors for critical services, including conducting a cost analysis if the situation warrants, these determinations and analyses are largely disconnected from the acquisition review process. Consequently, senior DOD leadership does not have the benefit of such analyses when making strategic decisions on obtaining long-term, high-dollar-value professional and management support. Similarly, key decisions at the transactional level—such as to award a contract or to issue a task order—are made with the recognition that DOD is dependent on contractors to support its missions and operations.
Despite this dependency, DOD officials generally did not consider whether contractors may be unduly or inappropriately influencing government decision making. Further, while these services were often acquired through performance-based approaches, such efforts were hindered by DOD’s use of broad statements of work and of performance measures established in the base contract rather than measures tailored to the specific services being provided under the task order. Within DOD’s acquisition community, there is widespread recognition that developing outcome-oriented measures is particularly difficult for professional and management support contracts. DOD has efforts under way to help develop better outcome-oriented measures for professional and management support contracts, but it is too soon to know whether these efforts will prove successful. Perhaps the most critical tool in assessing contractor performance is having properly trained personnel in sufficient numbers to effectively monitor contractor performance. While improvements were evident, lapses in designating such personnel, in particular during the initial stages of a contract, continue to expose DOD to an increased risk of poor contractor performance. DOD consideration, at both the strategic and transactional levels, of the risks of using contractors to closely support inherently governmental functions can help improve the context for successful professional and management support outcomes. DOD’s peer review process is beginning to assess this issue just prior to and then after contract award, but with only a handful of reviews performed on such contracts, it is too early to gauge whether the process will be successful in encouraging DOD personnel to address these issues across the range of DOD’s services contracts.
Providing similar information, at an appropriate level of detail, when senior DOD and military department leadership review proposed acquisition strategies would inform decision makers and engender more proactive consideration earlier in the acquisition cycle. As DOD gets closer to awarding contracts or issuing task orders for specific services, risks move from the potential to the tangible. Reducing the possibility that DOD would enter into a contractual arrangement that exposes it to unintentional and undesired consequences requires that DOD personnel consider, based on the facts and circumstances of a particular acquisition, whether such risks are present and, if so, how best to mitigate them. The fact that DOD program and contracting personnel we contacted were generally unaware of the long-standing requirement to provide greater scrutiny and enhanced oversight of services closely supporting inherently governmental functions underscores the need to address these problems at multiple levels and in multiple ways.
Recommendations for Executive Action To better inform acquisition decisions, assist DOD personnel in performing their management oversight responsibilities, and improve DOD’s surveillance of services contracts, we recommend that the Secretary of Defense take the following four actions: revise documentation requirements for DOD’s current management review to include information on the extent to which services to be provided will closely support inherently governmental functions as well as the consideration given to using DOD civilian employees to perform such functions; require, before the award of any contract or issuance of any task order for services closely supporting inherently governmental functions, that program and contracting officials consider and document their assessment of the unique risks of these services and the steps that have been taken to mitigate such risks; develop guidance to identify approaches that DOD should take to enhance management oversight when contractors provide services that closely support inherently governmental functions; and direct the military departments to review their procedures to ensure that properly trained surveillance personnel have been assigned prior to and throughout a contract’s period of performance. Agency Comments and Our Evaluation DOD provided written comments on a draft of this report. DOD concurred with the four recommendations and also identified a number of actions that would be taken to address them. DOD acknowledged the need to continually refine its policies and procedures regarding the management of support contracts. DOD noted that while it intended to decrease funding for contracted support and scale back the use of contractors, it will continue to rely on services contracts to support its mission, making the effective management of professional, administrative, and management support contracts critical. DOD also provided technical comments, which were incorporated as appropriate.
DOD’s comments are reprinted in appendix III. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Administrator of the Office of Federal Procurement Policy; and interested congressional committees. In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology To determine whether the Department of Defense’s (DOD) policies and actions to improve its management of services contracts addressed issues affecting professional and management support contracts, we examined (1) the extent to which DOD considered the risks associated with contractors closely supporting inherently governmental functions at key acquisition decision points; (2) how DOD was implementing performance-based acquisition practices, such as identifying requirements in terms of expected and measurable outcomes; (3) the extent to which DOD designated trained surveillance personnel; and (4) whether recent actions to implement a peer review process may improve DOD’s management and use of such contracts. To assess the extent to which DOD considered the risks associated with contractors closely supporting inherently governmental functions at key acquisition decision points, we reviewed relevant provisions of the National Defense Authorization Acts for fiscal years 2002 through 2009 that pertained to DOD’s acquisition of services.
We also reviewed guidance issued by DOD in May 2002 and October 2006, as well as DOD Instruction 5000.02, Operation of the Defense Acquisition System, reissued in December 2008, which collectively established DOD’s management review processes, to identify the risks that should be considered during these reviews. Further, we reviewed Office of Federal Procurement Policy Letter 93-1, Management and Oversight of Service Contracting, and Federal Acquisition Regulation (FAR) requirements for the management and oversight of contractors that closely support inherently governmental functions. We also reviewed DOD Instruction 1100.22, Guidance for Determining Workforce Mix, and DOD’s May 2009 in-sourcing guidance, In-Sourcing Contracted Services—Implementation Guidance, to determine how personnel should consider and document the risks of contractors performing activities that closely support inherently governmental functions and assess the appropriate mix of DOD civilian, military, and contractor personnel. We interviewed representatives from the Office of Defense Procurement and Acquisition Policy (DPAP) and representatives from each of the military departments responsible for implementing these policies, guidance, and reviews to identify how these risks are accounted for prior to approval of a proposed acquisition strategy. To assess how these risks were addressed under DOD’s management review for specific services acquisitions, we reviewed a DOD-provided list of 102 services acquisition strategies that were reviewed and approved by the Air Force, Army, or Navy from fiscal years 2004 through 2007. We obtained information from the military departments on the contracts that had been awarded after these 102 strategies had been approved.
Using this information and data derived from the Federal Procurement Data System–Next Generation, we determined that the military departments had awarded 361 contracts and issued 13,650 task orders from these 102 acquisitions during fiscal years 2004 through 2007. We then identified the product service codes associated with these contracts and used the Federal Procurement Data System–Next Generation to determine the number of contract actions and obligations for professional and management support services. We found that 32 of these acquisitions, with almost $15 billion in total combined obligations from fiscal years 2004 through 2008, included contracts for professional and management support. From these 32 acquisitions, we selected 7 based on such factors as the percentage of obligations made for professional and management support services, the specific types of services acquired, and the agency awarding the contract. The 7 acquisitions we selected had over $4.3 billion in total combined obligations from fiscal years 2004 through 2008 (see table 7 for more information on these acquisitions). The seven acquisition strategies we reviewed were approved under DOD’s May 2002 acquisition of services policy, which required the Under Secretary of Defense for Acquisition, Technology, and Logistics to review acquisitions with an expected value of over $2 billion and required each of the DOD components, including the military departments, to review acquisitions under that threshold. None of the acquisition strategies we selected were approved after DOD issued its October 2006 acquisition of services policy, which lowered the dollar thresholds for management reviews.
The substance of the management reviews remained largely unchanged, incorporating a few additional, specific acquisition strategy requirements, such as the inclusion of any required waivers and a top-level discussion of source selection processes, that were not significant to the objectives of our review. To assess how such risks were addressed at the contract or task order level, we used data from the Federal Procurement Data System–Next Generation to determine the number of task orders with obligations of $500,000 or more that were issued from fiscal years 2004 through 2007 under each of these seven acquisitions. From that list, we randomly selected 10 task orders for each acquisition, with the exception of the INSCOM Linguistics Part 2 acquisition, for which we selected all four task orders exceeding this threshold that had been issued as of September 2007. Overall, we selected 64 task orders, which ranged from $530,000 to $227 million in obligations, for review. We did not review acquisitions approved after fiscal year 2007 because our analysis indicated that it was often a year or more from the time an acquisition strategy was approved to the time task orders were actually issued. For each of the task orders, we reviewed the acquisition strategy, base contract, task orders, statements of work, and other documentation supporting the need to acquire contract support, as well as any risk assessments prepared. We also interviewed program and contracting officials who managed these acquisitions to obtain information concerning why these services were contracted for, the risks that were considered, and any additional steps taken to enhance oversight of the contractors.
We assessed the reliability of the Federal Procurement Data System–Next Generation to identify acquisitions and to select task orders that were within the scope of our review by verifying (1) the contract and task order identification numbers; (2) the contract award date; (3) that the task orders associated with the acquisitions were for professional and management support services; and (4) that the task orders had obligations exceeding $500,000. On the basis of this assessment, we determined that the data were sufficiently reliable for the purposes of this review. To assess how DOD implemented performance-based acquisition practices on contracts for professional and management support, we reviewed relevant provisions in the FAR and DOD guidance. We interviewed DOD and military department officials responsible for reviewing and approving services acquisitions to identify how these reviews addressed the implementation of performance-based practices. We reviewed performance work statements from each of the 64 task orders to assess whether contract requirements and performance measures were consistent with performance-based guidance, such as whether contract requirements were measurable and outcome based. We also analyzed documentation to determine how contractor performance was measured. We interviewed contracting and program officials associated with these acquisitions to identify how contract requirements and performance measures were developed. Finally, we interviewed DPAP officials and a representative from the Defense Acquisition University to obtain information on efforts to develop additional DOD guidance for implementing performance-based services acquisitions. To assess the extent to which DOD designated trained surveillance personnel, we reviewed the Defense Federal Acquisition Regulation Supplement and DOD policies and procedures to identify the department’s surveillance and training requirements.
We then analyzed surveillance personnel appointment letters and training documentation associated with each of the 64 task orders to determine whether these requirements were met. We also interviewed DOD officials who were designated as surveillance personnel on one or more of the task orders we reviewed to obtain information on their training and responsibilities. To identify how actions to implement additional reviews of services acquisitions may improve DOD’s management and use of services contracts, we reviewed provisions contained in the National Defense Authorization Act for Fiscal Year 2008 that required DOD to establish an independent management review process. We reviewed memoranda issued by DOD in September 2008 and February 2009 that provided guidance on the scope of these reviews, which DOD refers to as peer reviews. To obtain information on how peer reviews differ from the management reviews, we spoke with officials from DPAP and the military departments responsible for these reviews. We also obtained copies of the memoranda summarizing the findings and recommendations of the five peer reviews performed on professional and management support contracts as of September 2009. We conducted this performance audit from July 2008 through November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Examples of Inherently Governmental and Approaching Inherently Governmental Functions Federal Acquisition Regulation section 7.503 provides examples of inherently governmental functions as well as services or actions that are not inherently governmental, but may approach being inherently governmental functions based on the nature of the function, the manner in which the contractor performs the contract, or the manner in which the government administers contractor performance. These examples are listed in tables 8 and 9 below. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments Acknowledgments In addition to the individual named above, key contributors to this report were Timothy DiNapoli, Assistant Director; Gary Guggolz; Justin Jaynes; Christopher Mulkins; Thomas Twambly; Richard Winsor; Arthur James, Jr.; Julia Kennon; Susan Neil; and Noah Bleicher.
In fiscal year 2008, the Department of Defense (DOD) obligated $200 billion on services contracts, including $42 billion for professional and management services. The Government Accountability Office (GAO) previously identified weaknesses in DOD's management and oversight of services contracts, contributing to DOD contract management being on GAO's high-risk list. For selected professional and management support contracts, GAO was asked to examine (1) the extent to which DOD considered the risks of contractors closely supporting inherently governmental functions at key decision points, (2) how DOD implemented performance-based practices, (3) the extent to which DOD designated trained surveillance personnel, and (4) whether a new review process may improve DOD's management of such contracts. GAO reviewed federal regulations, agency policies and guidance, and analyzed seven acquisitions approved from 2004 to 2007 and 64 related task orders for services. DOD policies do not require assessments of the risks associated with contractors closely supporting inherently governmental functions as part of its management reviews of acquisition strategies nor when task orders are issued for professional and management services. Such risks include the potential loss of government control over and accountability for mission-related policy and program decisions. Though all seven acquisitions and more than 75 percent of the task orders GAO reviewed provided for such services, GAO found no evidence that these risks were among those considered in the documentation reviewed. DOD guidance issued after these acquisitions were approved requires that consideration be given to using civilian personnel rather than contractors when the activities closely support inherently governmental functions. This guidance, however, does not require DOD personnel to consider and document risks posed when contractors perform these activities. 
Further, DOD personnel were unaware of the need to provide enhanced oversight when contracting for such services. DOD faces challenges in defining requirements and outcome-based measures when acquiring professional and management services. DOD personnel generally expressed task order requirements in terms of a broad range of activities that contractors may perform, but used standards and measures that were not always well-suited to assess outcomes. DOD made more use of objective measures to assess cost and schedule performance, but generally relied on subjective measures to assess the quality of the contractors' work. For example, DOD often measured quality based on the number of complaints lodged against the contractor, which provided little detail into how desired outcomes were achieved. DOD also missed opportunities to include objective measures that may have been better suited to assess outcomes. DOD officials stated that developing outcome-based, objective measures is challenging, but noted that initiatives are under way to better utilize such approaches. DOD has made progress in ensuring that trained surveillance personnel are assigned to monitor contract performance. Surveillance personnel were assigned to all 64 of the task orders GAO reviewed, and all but 3 had received required training. GAO identified, however, 3 instances of surveillance personnel who were not assigned before the contractor began work on a task order and 20 instances of personnel who did not receive training prior to beginning surveillance duties. In September 2008, DOD implemented a new peer review process that is tasked to address, among other issues, contractors closely supporting inherently governmental functions, the use of performance-based practices, and contractor surveillance. 
As of October 2009, four pre-award reviews and one post-award review of professional and management support contracts had been conducted, and it is too early to tell whether such reviews will encourage DOD personnel to address these issues across the range of DOD's services contracts.
Background USDA manages and administers benefits programs that support farm and ranch production, natural resources and environmental conservation, and rural development. FSA is one of three USDA service center agencies that manage and administer these benefits to farmers and ranchers. FSA has three core program areas: farm programs, farm loan programs, and commodity operations. The largest of the program areas—farm programs—pays billions of dollars annually to approximately 2 million farmers and ranchers. As of November 2008, FSA reported that the following five farm programs accounted for 95 percent of FSA’s budget and transactions:
• Direct and Counter-Cyclical Payments Program: offsets losses from a drop in the market price for a specific crop.
• Marketing Assistance Loan Program: provides interim financing to meet cash flow needs when market prices for commodities are at harvest-time lows.
• Noninsured Crop Disaster Assistance Program: provides aid for uninsured crops that are destroyed by natural disasters.
• Crop Disaster Program: provides benefits for crop production or quality losses during the crop year.
• Conservation Reserve Program: provides incentive payments and cost sharing for projects to reduce erosion, protect streams and rivers, enhance wildlife habitats, and improve air quality.
FSA administers these programs primarily at its approximately 2,300 local offices, using a variety of computing environments and software applications to process farm program data, including
• a central “Web farm,” consisting of an array of interconnected computer servers that exchange data in support of data storage and Web-based applications;
• a central IBM mainframe that hosts non-Web applications and data; and
• a distributed network of IBM Application System 400 computers and a common computing environment of personal and server computers at each local office.
We, FSA, and others have reported challenges with the current systems used to deliver benefits.
Specifically, FSA’s information systems
• date to the 1980s and are obsolete and difficult to maintain. The maintenance contract on a key component—the Application System 400 computer—expires in 2013, and FSA anticipates that the contract will be difficult to renew.
• provide farmers and ranchers with limited access to farm programs through the Internet, so they must primarily visit a local office to conduct transactions.
• are not interoperable. FSA personnel at the local offices must switch between applications hosted on each system. In addition, the Application System 400 computers can only store customer information at a local office. Therefore, customers cannot use different offices to complete their transactions.
• do not satisfy federal directives for internal controls and security.
• are difficult to modify or change, hampering FSA’s ability to promptly implement new benefits programs.
Goals and History of MIDAS In early 2004, FSA began planning the MIDAS program to streamline and automate farm program processes and to replace obsolete hardware and software. FSA identified the following goals for the program:
• Replace aging hardware: Replace the Application System 400 computers with a hosting infrastructure to meet business needs, internal controls, and security requirements.
• Reengineer business processes: Streamline outmoded work processes by employing common functions across farm programs. For example, determining benefits eligibility might be redesigned (using business process reengineering) as a structured series of work steps that would remain consistent regardless of the benefits requested.
• Improve data management: Make data more readily available to FSA personnel and to farmers and ranchers—including self-service capabilities—and increase data accuracy and security.
• Improve interoperability with other USDA and FSA systems: Integrate with other USDA and FSA modernization initiatives, including the Financial Management Modernization Initiative for core financial services that meet federal accounting and systems standards, the Geospatial Information Systems to obtain farm imagery and mapping information, and the Enterprise Data Warehouse to provide enterprise reporting.

FSA drafted initial requirements for MIDAS in January 2004 but halted requirements development when program officials decided that the proposed customized solution would not meet future business needs. In the summer of 2006, FSA changed its approach from customized software to commercial off-the-shelf enterprise resource planning software. In February 2008, FSA analyzed how its farm program functions would map to functions available in an off-the-shelf enterprise resource planning software suite from the vendor SAP, which had been selected for two other USDA modernization initiatives—the Financial Management Modernization Initiative and the Web Based Supply Chain Management program. This analysis concluded that MIDAS processes generally mapped to the SAP software. Based on that analysis and a software alternatives analysis, FSA decided to proceed with SAP Enterprise Resource Planning as the solution for MIDAS. FSA also decided to accelerate the time frame for implementing the solution from the 10 years originally planned to 2 years for its 2008 business case. To accomplish this, FSA would compress the requirements analysis phase from 4 years to 5 months and reduce the analysis and design phase from 3½ years to 9 months. In preparation for issuing a request for quotation and selecting a contractor to define, design, and implement MIDAS with the SAP software suite, FSA staff visited local offices to document farm program business processes and to determine requirements for the new system.
The request for quotation for the MIDAS system integrator contract was released in July 2009; a contract based on this request was awarded to SRA International in December 2009. The contract start was delayed due to a bid protest, which was resolved in February 2010, and SRA International began work in May 2010. By this point, FSA had also awarded six other contracts for services to support additional aspects of this initiative, including software licenses, project management support, and technical support. FSA hired a MIDAS executive program manager in September 2007 and drafted a staffing plan in April 2009 that called for 35 to 40 full-time government employees to oversee the program and its supporting contracts. The program office reports to the FSA Chief Information Officer (CIO) and has three functional areas: requirements and project management, IT solutions, and change management and communications. The USDA CIO is responsible for MIDAS investment guidance and direction. Figure 1 depicts a timeline of key milestones for MIDAS from its inception through the initiation of work by the system integrator. In view of congressional concern about the complexity, scale, and challenges of FSA’s IT modernization, USDA has been required to report to the Committees on Agriculture and Appropriations of the Senate and House of Representatives on key aspects of MIDAS management, including cost, schedule and milestones, oversight and investment management, and integration with other modernization initiatives. In response, USDA has submitted a series of reports to Congress that reflect the department’s approach toward the modernization program and its progress.

Prior GAO Review Found That Program Cost and Schedule Estimates Were Inadequate

In May 2008, at the request of the House and Senate Committees on Appropriations, we reported that MIDAS was in the planning phase and that FSA had begun gathering information and analyzing products to integrate its existing systems.
We determined that the agency had not adequately assessed the program’s cost estimate, in that the estimate had been based on an unrelated USDA IT investment. Moreover, the agency had not adequately assessed its schedule estimate, because business requirements had not been considered when FSA reduced the implementation time frame from 10 years to 2 years. As a result, we said that it was uncertain whether the department could deliver the program within the cost and schedule time frames it had proposed, and we recommended that FSA establish effective and reliable cost estimates using industry leading practices and establish a realistic and reliable implementation schedule based on complete business requirements. The department generally agreed with our recommendations.

Leading Practices for IT Modernization Management

Effective planning and management practices are essential for the success of large, complex IT modernization efforts. Our reviews of these practices and our experience with federal agencies have shown that such practices can significantly increase the likelihood of delivering promised system capabilities on time and within budget. Organizations such as the Software Engineering Institute at Carnegie Mellon University have issued guidance on effective planning and management practices for developing and acquiring software-based systems. These practices include the following:

• Project planning and monitoring: Project planning establishes a framework for managing the project by defining project activities and their estimated cost and schedule, among other things. Project monitoring provides an understanding of the project’s progress, so that appropriate corrective actions can be taken if performance deviates from plans.
Effective planning and monitoring employ a range of resources and tools that promote coordination of and insight into the project’s activities, such as an integrated project schedule, which identifies a project’s dependencies on other projects to facilitate coordination of their tasks and resources.

• Requirements management: Requirements establish what the system is to do, how well it is to do it, and how it is to interact with other systems. Effective management of requirements involves assigning responsibility for them, tracking them, and controlling requirements changes over the course of the project. It also ensures that requirements are validated against user needs and that each requirement traces back to the business need and forward to its design and testing.

• Contract management: Effective contract management ensures that contractor activities are performed in accordance with contractual requirements and that the acquiring organization has sufficient visibility into the contractor’s performance to identify and respond to performance shortfalls. It also ensures that the roles of multiple contractors are clearly defined in a contract management plan, thus avoiding confusion or duplication of effort in managing the tasks.

• Risk management: Risk management is a process for anticipating problems and taking appropriate steps to mitigate risks and minimize their impact on project commitments. It involves identifying and cataloging the risks, categorizing them based on their estimated impact, prioritizing them, developing risk mitigation strategies, and tracking progress in executing the strategies.

For projects such as MIDAS, which involve complex and concurrent activities, it is important that proven practices be implemented early in the life of the project so that potential problems can be identified and addressed before they can significantly impact program commitments.
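The risk management cycle described above—identify and catalog, estimate probability and impact, prioritize, mitigate, and track—can be sketched as a simple risk inventory. The sketch below is illustrative only; the risk entries and the probability-times-impact scoring scheme are assumptions for the example, not items or methods from the MIDAS risk register.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a project risk inventory."""
    title: str
    probability: float  # estimated likelihood of occurrence, 0.0-1.0
    impact: int         # estimated impact if realized, 1 (low) to 5 (high)
    mitigation: str = ""
    status: str = "open"

    @property
    def exposure(self) -> float:
        # A common prioritization score: probability times impact.
        return self.probability * self.impact

@dataclass
class RiskInventory:
    risks: list = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def prioritized(self) -> list:
        # Highest-exposure open risks first, for regular status reporting.
        open_risks = [r for r in self.risks if r.status == "open"]
        return sorted(open_risks, key=lambda r: r.exposure, reverse=True)

# Hypothetical entries, not actual MIDAS risks.
inventory = RiskInventory()
inventory.add(Risk("Requirements baseline slips", 0.6, 4,
                   mitigation="Add analysts; re-plan design reviews"))
inventory.add(Risk("Legacy hardware contract lapses", 0.3, 5,
                   mitigation="Negotiate maintenance extension"))

for r in inventory.prioritized():
    print(f"{r.title}: exposure {r.exposure:.1f}")
```

The point of the sketch is the last step: the inventory is only useful if its statuses and exposure scores are regularly refreshed and reported to managers, which is the practice the report later finds FSA did not sustain.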
Federal guidance, along with our framework for managing IT investments and our prior reviews of federal investments, also points to the importance of having executive-level oversight and governance for the success of large IT investments. Executive attention helps to ensure that such projects follow sound business practices for planning, acquiring, and operating the IT system; meet cost, schedule, and performance goals; and detect and address risks and problems that could impede progress toward those goals. When multiple oversight boards govern an investment, it is critical to define the roles of and coordination among them to avoid duplication of effort and to increase the effectiveness of the oversight. To help institutionalize such oversight, OMB requires capital planning and investment control processes, including a department-level board with the authority to commit resources and make decisions for IT investments. Such boards are to review the investments at key decision points against standard evaluation factors. OMB also requires annual and monthly reporting for such investments. Due to its concern that investment review boards have not always been effective, OMB recently identified additional actions agencies should take to strengthen the boards, including improving the timeliness and accuracy of program data available to them.

MIDAS Is Currently Being Defined; Cost and Schedule Estimates Are Uncertain

FSA plans to modernize all the systems that support its 37 farm programs (listed in app. II) with the MIDAS program. The implementation cost estimate is approximately $305 million, with a life cycle cost of approximately $473 million. However, the implementation cost is uncertain because the estimate has not been updated since 2007 and does not include key cost elements. MIDAS is in its second of four phases—proof of concept and system design.
However, the schedule for the current program phase, which was to be completed in October 2011, is uncertain, and a key milestone, the system requirements review, has been delayed. As a result, the completion date for the second phase, and its impact on subsequent phases, is unknown. FSA officials plan to revisit the cost and schedule estimates after completing requirements definition.

Program Scope Is Generally Defined, but Is Not Reflected in Outdated Cost Estimate

As currently defined, the scope of MIDAS includes modernization of FSA’s systems for all of its 37 farm programs (listed in app. II). The modernization effort is to address all of the goals of MIDAS: replace aging hardware; reengineer business processes across all the farm programs; improve data access, accuracy, and security; and provide interoperability with the financial management, geospatial, and enterprise data initiatives. Figure 2 conceptually depicts the proposed systems, components, and interconnections, in contrast with those currently used to deliver farm program benefits. The program’s estimated life cycle cost is approximately $473 million, with approximately $305 million for program planning, requirements definition, system design and development, deployment, and program support through 2014. FSA considers the implementation cost estimate—which was developed in 2007 and is the most current available—to be preliminary, with a large degree of uncertainty. FSA officials reported that approximately $66 million has been obligated for the program from fiscal year 2009 to June 2011, $61 million of which has been obligated for seven contracts that supported MIDAS during our review. Approximately $36 million has been obligated for the system integrator contract, which is to provide planning, development, design, and deployment.
Approximately $25 million has been obligated for the remaining six contracts, which are to provide project management support, development, independent verification and validation, software licenses, and hosting infrastructure. Table 1 describes these contracts. FSA officials stated that they have not revised the 2007 cost estimate because the scope of MIDAS has not changed. However, FSA’s cost estimate for MIDAS does not reflect costs resulting from program changes identified since 2007, such as

• selection of SAP as the enterprise resource planning software and mechanism for enterprise reporting;
• workshops held with stakeholders in 2010 to identify business requirements; and
• deployment of the financial management initiative and planned integration with the geospatial and enterprise data initiatives.

In addition, estimated costs have not been included for modernizing program processes that cannot be supported with the SAP software, or for implementing any new farm program requirements that may be enacted in the 2012 farm bill. In April 2011, FSA officials stated that they would begin revising the program’s cost estimate in September 2011 and would incorporate new information gained from requirements development. However, they could not provide a date for completing the revised estimate because this information was still being identified.

Design Milestones Have Slipped; Program Schedule Is Uncertain

MIDAS is to be executed in four phases with incremental deployment of system capabilities, as recommended by OMB. FSA calls these four phases planning, proof of concept and system design, initial operating capability, and full operating capability. These phases were to run from fiscal year 2010 through fiscal year 2014, as shown in figure 3. FSA completed the program planning phase in October 2010.
In April 2011, FSA officials reported that the second phase was under way and that the proof of concept demonstration was on schedule, but that key milestones for system design would not be completed as scheduled. FSA officials could not provide new completion dates for the system design milestones or the second phase. They stated that an update to the schedule, due in September 2011, would also not be completed as planned because information needed to revise the schedule is being identified as the second phase progresses. This uncertainty has implications for the remaining phases, as discussed in the following sections. Project planning. This phase began in May 2010 and was completed in October 2010—1 month later than planned due to FSA’s requirement that the system integrator address deficiencies in its planning deliverables. During this phase, the system integrator developed—and FSA approved—planning documents that define and detail the management of processes, products, activities, and milestones for the succeeding phases of MIDAS, including a project plan, concept of operations, SAP implementation road map, technical development approach, organizational change management strategy, and data management plan. FSA also established a federal program office for MIDAS and filled most program office positions, including key management positions for the program director and deputy directors for requirements and project management, IT solutions, and change management/communications. Proof of concept and system design. This phase, begun in November 2010, was scheduled to be completed in October 2011. The proof of concept is to demonstrate several functions of one farm program—the Marketing Assistance Loan farm program—with an interface to geospatial systems. This demonstration is to use SAP software in a stand-alone (i.e., not production) environment and is to validate certain SAP software functions. 
An FSA official stated that the first proof of concept demonstration was conducted in May 2011 and that field demonstrations are to be conducted through August 2011. The system design portion of this phase entails three efforts—defining requirements, allocating requirements to systems, and designing system functions. To define requirements, FSA is analyzing the 37 farm programs to identify the required business processes, including the steps, tasks, and data currently used for these programs. These processes are also being re-engineered or optimized by aligning them with nine common processes where possible. Tasks that do not align with common processes will be identified as program-specific processes. Both common and program-specific business processes are to be captured and baselined as requirements. Technical requirements are to be defined in conjunction with business requirements and will specify computer processing power, data storage, network bandwidth, and computer upgrades to support the processing of MIDAS functions, among other needs. They are also to address modernization goals, including consolidation of farm program processing to two existing computing centers, eliminating the obsolete computers in the local offices; allowing internal and external access to MIDAS through Web portals; and integrating MIDAS with the other USDA and FSA modernization initiatives. Following requirements definition, FSA plans to conduct an allocation analysis to determine which business requirements can be supported by the SAP software. Requirements that cannot be implemented using the SAP software are to be allocated to the Web farm for implementation. A high-level design of the MIDAS solution, to include both SAP and Web farm (non-SAP) system functions, will be based on this requirements allocation. In April 2011, FSA officials stated that two key system design milestones—the system requirements review and the high-level design review—would not be held as scheduled. 
According to the December 2010 program schedule, milestones for these events were originally scheduled for May 2011 and July 2011, respectively. However, FSA officials do not plan to conduct the system requirements review until December 2011, and a new date for the high-level design review has not yet been set because additional information and analysis are needed to plan this milestone. As a result, the completion date for the second phase is uncertain. Initial operating capability. This phase was to be conducted from July 2011 to December 2012—a schedule that has not yet been updated to reflect delays in the second phase. The initial activities of this phase are to run concurrently with the proof of concept and system design phase. Detailed requirements are to be defined for the Marketing Assistance Loan farm program, including required interfaces, computers, data storage, and networks. Plans call for augmenting the high-level system design to reflect these requirements, implementing the design for modernized Marketing Assistance Loan operations, and deploying it to all local offices. Full operating capability. This phase, scheduled from September 2012 to March 2014, is to include detailed requirements definition, design, and deployment for the 36 remaining farm programs and for farmer and rancher access to farm program services from their own computers. The schedule for this phase has also not been updated to reflect delays in the proof of concept and system design phase.

MIDAS Plans Reflect Many Leading Management Practices, but Could Be Strengthened

Delivering large IT modernization programs such as MIDAS on time and within budget presents challenges and risks. Program goals are more likely to be achieved when managers employ leading practices for managing program planning and monitoring, requirements, contracts, and risks. Prior to the proof of concept and system design phase, MIDAS plans were in place and managers were assigned for these practices.
These plans largely incorporated certain leading practices, although each management area had at least one practice that was not fully satisfied.

Program Planning and Monitoring Are Partially Defined; Implementation Is Incomplete

The success of complex IT modernization initiatives such as MIDAS, which involve transforming business processes and integrating with other systems, requires effective program planning and monitoring to ensure that the intended results are achieved. The Software Engineering Institute, our work, and recent OMB guidance have identified leading practices that support effective planning and monitoring, including

• assigning a full-time project manager and committed business sponsor to guide the program;
• planning organizational change and communications management to obtain user acceptance of new ways of doing business;
• establishing integrated project teams with external stakeholders and subject matter experts to facilitate coordination of project activities;
• developing integrated project schedules to identify external dependencies among tasks and resources;
• defining earned value management that is compliant with relevant guidelines to manage contractor and project office development work; and
• tracking and reporting the status of key program milestones—such as through OMB’s IT investment business case (known as the exhibit 300) and program status reports on OMB’s IT investment Web site (known as the IT Dashboard).

Of these six practices, FSA has satisfied three, partially satisfied two, and not satisfied one (see table 2). Specifically, FSA has assigned a program manager and a business sponsor, has planned and initiated organizational change and communications management, and has planned for earned value management.
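Earned value management, one of the practices listed above, compares the budgeted value of work actually completed against both its planned value and its actual cost. A minimal sketch of the standard indicators follows; the dollar figures are hypothetical examples, not MIDAS data.

```python
def evm_metrics(pv: float, ev: float, ac: float) -> dict:
    """Compute basic earned value management indicators.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value  (budgeted cost of work performed)
    ac: actual cost   (actual cost of work performed)
    """
    return {
        "cost_variance": ev - ac,      # negative means over budget
        "schedule_variance": ev - pv,  # negative means behind schedule
        "cpi": ev / ac,                # cost performance index; below 1 is unfavorable
        "spi": ev / pv,                # schedule performance index; below 1 is unfavorable
    }

# Hypothetical monthly snapshot, in millions of dollars:
# $10M of work was scheduled, $8M of scheduled work was completed,
# and completing it actually cost $9M.
snapshot = evm_metrics(pv=10.0, ev=8.0, ac=9.0)
print(snapshot)
```

In this example the program is both over cost (CPI below 1) and behind schedule (SPI of 0.8), which is exactly the kind of deviation that contract provisions for earned value management are meant to surface early.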
However, it has not yet established an integrated project team that formally commits the support of IT programs related to the project, developed an integrated project schedule that specifies related IT program dependencies, or reported clearly on key MIDAS milestones to accurately convey program progress. Without a committed integrated project team and an integrated project schedule that identifies MIDAS dependencies on initiatives outside the program office, the program may not obtain necessary and timely staff participation, expertise, and resources, and may not be able to adequately monitor integration with these initiatives. Without clear milestone reporting, Congress, OMB, department and agency management, and other interested parties will have difficulty tracking the delivery of MIDAS capabilities.

Requirements Management Is Defined, but User Concerns Need to Be Fully Validated

Defining and implementing disciplined processes for developing and managing the requirements for a new system can help improve the likelihood that the system will meet user needs and that it will perform or function as intended. Leading practices for requirements development and management include, among other things,

• establishing a policy for developing and managing requirements;
• assigning and defining the role and responsibilities for a requirements manager;
• eliciting and validating user needs;
• defining a disciplined change control process; and
• ensuring that system requirements are traceable back to business requirements and forward to detailed requirements, design, and test cases.

FSA fully satisfied four of these practices and partially satisfied one (see table 3). MIDAS requirements and change management plans address all of these leading practices. However, one practice—the validation of user needs—was only partially satisfied because user needs (called “pain points”) that had been identified prior to the award of the system integrator contract were not completely validated.
Unless all the concerns previously expressed by field staff as “pain points” are systematically validated with respect to MIDAS requirements and appropriately resolved by the new system or some other means, MIDAS may not meet user expectations and its acceptance by field staff may be jeopardized.

Contract Management Is Defined, but Tasks Could Be Better Delineated among Contractors

Effective project management includes clear definition of authority, duties, and responsibilities among contractors, and between contractors and program management. According to the Software Engineering Institute and our prior work, effective processes to manage and oversee contracts that support IT projects include

• establishing and maintaining a plan for managing and overseeing the contracts;
• assigning responsibility and authority for performing contract management;
• identifying the contract work to be performed and the associated acceptance criteria;
• conducting reviews with contractors to ensure that cost and schedule commitments are being met and risks are being managed; and
• establishing processes for verifying and accepting contract deliverables.

FSA fully satisfied four of these practices and partially satisfied one (see table 4). The system integrator contract and supporting documents indicate that FSA has planned to use these practices and has applied them in managing this contractor. In addition, the project management plan describes the management approach for all the contracts that support MIDAS, specifies responsibility for overseeing the contracts, and defines the process for reviewing contractor performance. The plan also requires that contractor deliverables and acceptance criteria be specified in the contracts. However, the plan does not clarify contractor roles for tasks supported by more than one contractor, and it does not require that those roles be delineated in other program or contractor documents.
Table 4 presents a detailed assessment of how FSA has addressed leading contract management practices. Unless program plans, schedules, and reports clearly delineate the work products and activities of individual contractors, program staff, contractors, and stakeholders may be confused about contractor responsibilities, which may negatively impact program deliverables or make it difficult to hold contractors accountable. By eliminating contracts with the potential for duplicate or confusing efforts, FSA has resolved the ambiguous roles contained in its plans and can now clearly present the unique roles of its contractors in updates to its program plans and other artifacts.

Risk Management Is Defined and an Inventory Established, but Risks Are Not Regularly Tracked

Risk management is critical in complex IT modernization programs such as MIDAS to detect and address risks before they adversely impact project objectives. Leading practices and our prior work recommend

• establishing and documenting risk management processes in a risk management plan from the program’s inception;
• assigning a risk manager with the authority to oversee the plan and its implementation;
• defining a risk inventory, and documenting risks in it along with decisions about their priority, probability of occurrence, and impact; and
• regularly tracking the status of risks and mitigation efforts and providing this input to project managers.

FSA satisfied three of these practices and did not satisfy a fourth (see table 5). Specifically, it has defined its risk management processes in a risk management plan, designated a risk manager, and established a risk inventory. However, it has not maintained the risk inventory to track and report the current status of risks and mitigation efforts to inform MIDAS managers. Identifying risks according to the MIDAS risk management plan has provided FSA managers with an initial understanding of the risks faced by the program.
However, until FSA ensures that its risks have been consistently identified throughout the course of the program and regularly updates the status of its risks, it cannot ensure that it is effectively managing the full set of risks it faces or that progress is being made in mitigating the risks throughout the life cycle of MIDAS.

MIDAS Governance Is Not Clearly Defined and Does Not Follow Department Investment Guidance

Oversight and governance of IT investments help to ensure that the investments meet cost, schedule, and performance goals. When an investment is governed by multiple boards or bodies, the roles of and coordination among these bodies should be clearly defined, including the processes and criteria for escalating issues. In addition, we and OMB recommend that federal agencies establish an executive board, typically at the department level, to oversee major IT investments. This board should review investments against criteria at key decision points, such as investment selection. In addition, OMB requires departmental oversight of the business cases for major IT investments and monthly status updates of program cost, schedule, and performance information. Consistent with federal guidance, USDA requires an executive board to oversee major IT investments at key decision points and a monthly status review. Oversight and governance of MIDAS is the responsibility of several department and agency bodies. Department-level oversight is performed by the Senior Management Oversight Committee; the Project Management/Design Decision Committee, which reports to the senior committee; and a proposed third body called the Modernization Review Board. In addition, the Modernization Program Management Review Board operates at the agency level; FSA has not clearly identified this board’s position in the oversight hierarchy. Table 6 summarizes the purpose and meeting schedules for these bodies.
However, the roles and coordination of these bodies are not clear in the following respects:

• Certain roles have been assigned to governance bodies without clear delineation of their scope and criteria for escalating issues. Charters and plans for the department Project Management/Design Decision Committee and the agency Modernization Program Management Review Board describe similar—and potentially overlapping—roles for overseeing agency IT initiatives. Moreover, the extent of oversight by the active bodies and the criteria for escalating issues related to cost, schedule, performance, and risk have not been defined in charters or plans.

• A key role has not been assigned. According to the MIDAS risk register, the Project Management/Design Decision Committee and Senior Management Oversight Committee are to coordinate enterprise resource planning among MIDAS and other initiatives, such as financial management. However, this coordination role has not been described in charters or plans.

• The role of the proposed board has not yet been defined. FSA officials stated that the USDA Modernization Review Board is to improve MIDAS governance, but its oversight responsibilities and processes for doing so have not yet been defined.

These concerns have been recognized to some extent by FSA and the department, but they remain unresolved. In October 2010, the FSA Modernization Program Management Review Board appeared to be aware of this lack of clarity and recommended that a directory of governance boards be developed and that their respective responsibilities, decision-making processes, and escalation paths be defined. An April 2011 Project Management/Design Decision Committee briefing noted that the proposed Modernization Review Board would mitigate the risk of integrating MIDAS with other systems. However, as of May 2011, these recommended improvements had yet to be made.
Regarding oversight of MIDAS, none of these boards reviewed MIDAS at key decision points using criteria defined in department guidance. An official from the department’s CIO office stated that the Senior Management Oversight Committee serves as the IT investment executive board recommended by OMB and required by USDA, although the committee’s charter and other governance plans do not specify this role. The committee reviewed MIDAS at the planning gate in October 2010, but did not use the department’s review criteria. Instead, the review focused on contract deliverables and did not include project management office documents such as the MIDAS risk assessment and project management plan, as called for by department guidance. On the other hand, department officials reported that MIDAS has complied with department requirements for business case and monthly status reviews. A department official reported that USDA’s CIO office has conducted monthly reviews of MIDAS status and its business case using the department’s criteria and that the status is posted on the IT Dashboard. Nevertheless, the dashboard reported in January and March 2011 that improved oversight is needed for MIDAS. The lack of clarity and definition for the roles of MIDAS oversight and governance bodies may result in duplication of or voids in program oversight and wasted resources. Moreover, because MIDAS is not being fully governed according to department investment guidance, the department may not be rigorously monitoring and managing the program and its risks, and may not have the information it needs to make timely and appropriate decisions to ensure the success of MIDAS.

Conclusions

After years of planning, USDA is moving forward with its farm program modernization effort known as MIDAS, which is intended to remedy long-standing problems with the supportability, efficiency, and accuracy of existing systems.
The agency has made key decisions regarding the scope of MIDAS, the contractors that will support system design and development, and the incremental approach it will use to execute the program. However, FSA’s implementation cost estimate has yet to reflect decisions and activities that have occurred since the estimate was developed in 2007. In addition, key events for the proof of concept and system design phase, currently under way, have been delayed. Consequently, agency managers are revising the plans for completing MIDAS requirements definition, system design, and the cost and schedule for the program, but are unlikely to finalize these plans until fiscal year 2012. Given the agency’s prior difficulty with developing reliable cost and schedule estimates, and our corresponding prior recommendation, it is critical that FSA and USDA adopt a rigorous and credible approach for revising estimates and complete them in a timely manner, so that the department has a basis for effectively managing program progress and making decisions about needed adjustments. The challenges USDA is facing in meeting its program commitments are more likely to be overcome if it can adopt and execute effective management practices. The management framework established by the agency in a series of plans reflects many leading practices for program planning and monitoring, requirements, contracts, and risk. Moreover, FSA has followed through on these plans to some extent by staffing government managers in these areas and instituting mechanisms to promote use of the practices, such as contract provisions for earned value management. 
However, MIDAS management could be further strengthened through improved definition and execution of these and other leading practices, specifically by chartering and operating an integrated project team; fully documenting MIDAS dependencies on other departmental IT initiatives in an integrated project schedule; clearly identifying and reporting key incremental milestones to OMB; validating all previously identified user concerns against MIDAS requirements; clearly delineating contractor roles and responsibilities; and consistently identifying and regularly tracking and reporting the status of MIDAS risks. By applying its plans and embracing other proven management practices, FSA will stand a better chance of surfacing and resolving issues before they can derail the program. The agencywide impact of MIDAS and its dependence on other IT initiatives point to the need for clearly defined and effectively executed oversight. However, the roles and coordination among oversight bodies are not clearly defined and USDA’s well-defined investment oversight guidance is not being fully executed. Providing adequate and efficient oversight for MIDAS in such an environment presents a challenge that could be avoided if USDA and FSA delineate governance roles and responsibilities and execute them accordingly. 
Recommendations for Executive Action

To increase the likelihood that the United States Department of Agriculture (USDA) will be able to successfully define, develop, and deploy the Modernize and Innovate the Delivery of Agricultural Systems (MIDAS) program, we recommend that the Secretary of Agriculture direct the chief information officers of USDA and the Farm Service Agency (FSA) to take the following three actions:

• To ensure that the department can effectively oversee MIDAS cost, schedule, and performance commitments, FSA should
  • develop timely cost estimates for MIDAS’s remaining phases, its overall development and deployment, and its life cycle, to incorporate the program changes previously omitted and any others recently identified and
  • develop complete and detailed schedules for the program’s current and remaining phases that take into account the milestone delays from the program’s second phase and a requirements baseline.

• To ensure that FSA is employing leading practices for program planning and monitoring, requirements management, contract management, and risk management for MIDAS, the agency should
  • charter and operate an integrated project team that commits stakeholders to the program from other USDA information technology (IT) initiatives;
  • establish an integrated project schedule that identifies tasks, dependencies, and resource commitments and contention between MIDAS and other department IT initiatives;
  • clearly track key milestones, and report their status in the program’s business case and on the Office of Management and Budget’s IT Dashboard;
  • validate all of the 591 user pain points against the requirements and document the results of this validation, including points that will not be addressed by MIDAS;
  • update the program’s management plans to clearly delineate the roles and responsibilities of contractors assigned to the same tasks; and
  • document the status of resolved and unresolved risks initially identified in November 2010, identify and maintain any unresolved risks from that period in the current risk register, and regularly track risks and update the risk register according to the program’s risk management plan.

• To ensure the effectiveness of MIDAS oversight and the efficiency of its governance bodies, the department and agency should collaborate to
  • delineate the roles and responsibilities of the governance bodies and clarify coordination among them, to include criteria for escalating issues and
  • document how the department is meeting its policy for IT investment management for MIDAS, to include investment reviews.

Agency Comments

In written comments on a draft of this report signed by the Administrator, Farm Service Agency, and reprinted in appendix III, USDA generally agreed with the content and recommendations and described actions and time frames to address the recommendations. For example, the department stated that it will revise MIDAS schedule and cost estimates for this year’s capital planning submission based on fiscal year 2011 planning, requirements, and design sessions, and will be able to develop more precise estimates at the completion of primary blueprinting and design in the first quarter of fiscal year 2012. The department described improvements to address our other recommendations, including integration processes with other initiatives; requirements validation; risk management; and department-level governance, to be completed by the end of the second quarter of fiscal year 2012. We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the Secretary of Agriculture, and the Administrator of the Farm Service Agency. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9286 or at [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to determine (1) the scope and status of the Modernize and Innovate the Delivery of Agricultural Systems (MIDAS) program; (2) whether MIDAS has appropriate program management; and (3) whether MIDAS has appropriate executive oversight and governance. To determine the program’s scope and status, we reviewed planning documents to identify the farm programs included in MIDAS, the required interfaces to other United States Department of Agriculture (USDA) and Farm Service Agency (FSA) modernization initiatives, and the proposed technical approach to the program. We also reviewed the fiscal year 2012 business case (called the exhibit 300), program schedules, oversight reviews from October 2010, and a 2010 FSA report to Congress to identify the active contracts supporting MIDAS and to determine the program’s phases, due dates, and phase completion status. To determine whether FSA completed the planning phase as scheduled, we identified the deviation between the planned and actual completion dates. We also examined selected products produced during that phase. We identified the cost estimate and its limitations using these sources and a 2009 third-party report to Congress on FSA modernization. We interviewed USDA and FSA officials to clarify information in the documents we reviewed and to more fully understand the program’s progress and status. To determine whether MIDAS has appropriate program management, we identified leading management practices for four areas that we and others have previously found to be important for the success of large information technology (IT) programs—planning and monitoring, requirements management, contract management, and risk management.
We then reviewed plans to determine if they addressed these leading practices. For the four management areas, we examined plans, organization charts, and program records to determine whether and when managers had been assigned. To the extent that MIDAS had progressed to a stage where implementation of these practices would be appropriate, we reviewed program artifacts and interviewed program officials to determine the extent to which the practices were in place. We assessed a practice as being satisfied if the evidence provided by USDA and FSA officials demonstrated all aspects of the leading practice. We assessed a practice as being not satisfied if the evidence did not demonstrate any aspect of the leading practice, or if no evidence was provided by USDA or FSA for that practice. Finally, we assessed a practice as being partially satisfied if the evidence demonstrated some, but not all, aspects of the leading practice. Additional considerations in our evaluation of each management area follow.

• Project planning and monitoring: We compared program plans, including the project management plan and supporting documentation, against leading practices to determine whether such practices were specified in the plans. We also examined program artifacts and records to determine the extent to which an integrated project team, an integrated project schedule with external dependencies, and tracking and reporting of program progress outside the program were in place. Due to the early stage of the program, we did not verify whether earned value management had been executed and reported as planned or whether organizational change and communications activities had been executed as planned.

• Requirements management: We compared the requirements management plan and related documents against leading practices to determine whether such practices had been specified in the plans. Because requirements were in the early stages of being defined during the period of this review, we did not verify whether FSA was executing its requirements management approach as planned. However, we reviewed a 2008 requirements document containing previously elicited user requirements and interviewed FSA officials to determine how those requirements had been validated.

• Contract management: We compared program plans, including the project management plan and supporting documentation, against leading practices to determine whether such practices had been specified in the plans. Due to the critical role of the system integrator contract in achieving program goals, we focused our assessment on this contract, the deliverables specified in this contract, and the review criteria for one deliverable—the strategy plan to decompose the requirements document. We verified whether the review criteria had been applied to this deliverable. We did not verify whether other planning phase contract deliverables had been evaluated by the gate review panel according to corresponding review criteria. We compared the descriptions of contractor tasks from contract management documents to each other and when we identified similar or identical tasks for different contractors, we interviewed FSA officials to obtain their explanations for the roles of each contractor and to clarify the contract management documentation. We reviewed the risk inventory to determine whether duplicate contractor roles had been identified as risks and how the risks were described.

• Risk management: We compared the risk management plan and supporting documentation against leading practices to determine whether such practices had been specified in plans. We also reviewed the November 2010 risk inventory to assess whether risks had been aligned with risk factors such as mitigation plans and status. We did not assess whether FSA assigned the risk indicators according to the criteria in the risk management plan.
To assess whether the risk inventory was being updated, we compared the November 2010 risk inventory to risk inventories from January 2011 and May 2011 to characterize overall changes to risks, mitigation strategies, and status, and to determine whether the inventories clearly captured progress in addressing selected risks. To determine whether MIDAS has appropriate executive oversight and governance, we reviewed USDA guidance for investment management, project plans, charters, and meeting minutes for the governance bodies, agency presentations, and 2009 and 2010 USDA and FSA reports to Congress to identify the executive oversight and governance bodies, responsibilities, and hierarchy for MIDAS. We also interviewed USDA and FSA officials about MIDAS governance structure and practices. We compared the information we obtained with USDA’s capital planning and investment control guidance, which comports with federal IT investment management guidance, and with our IT investment management framework to ascertain whether USDA had complied with its own guidance for overseeing the investment and the extent to which governance bodies, their responsibilities, and processes had been defined. We performed our work at the USDA office in Washington, D.C. We conducted this performance audit from October 2010 to July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
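The three-level assessment rule described in this appendix (satisfied, partially satisfied, not satisfied) can be sketched as a simple classification function. This is an illustrative rendering of the rule as stated, not a tool GAO used; the function name and counting scheme are assumptions for illustration.

```python
# Sketch of the assessment rule described above: a practice is
# "satisfied" if the evidence demonstrates all of its aspects,
# "not satisfied" if it demonstrates none (or no evidence was
# provided), and "partially satisfied" otherwise.

def assess_practice(aspects_demonstrated: int, total_aspects: int) -> str:
    """Classify a leading practice based on how many of its aspects
    the available evidence demonstrated."""
    if total_aspects <= 0:
        raise ValueError("a practice must have at least one aspect")
    if aspects_demonstrated == 0:
        return "not satisfied"          # no aspect demonstrated, or no evidence
    if aspects_demonstrated == total_aspects:
        return "satisfied"              # every aspect demonstrated
    return "partially satisfied"        # some, but not all, aspects demonstrated

print(assess_practice(4, 4))  # satisfied
print(assess_practice(2, 4))  # partially satisfied
print(assess_practice(0, 4))  # not satisfied
```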
Appendix II: MIDAS Farm Programs

Appendix III: Comments from the Department of Agriculture

Appendix IV: GAO Contact and Staff Acknowledgments

Staff Acknowledgments In addition to the contact named above, the following staff made key contributions to this report: Paula Moore (assistant director), Neil Doherty, Claudia Fletcher, Nancy Glover, Javier Irizarry, and Karl Seifert.
The United States Department of Agriculture's (USDA) Farm Service Agency (FSA) is responsible for administering billions of dollars annually in program benefits to farmers and ranchers. Since 2004, FSA has been planning to modernize its information technology (IT) systems that process these benefits with the Modernize and Innovate the Delivery of Agricultural Systems (MIDAS) program. GAO was asked to determine (1) the scope and status of MIDAS, (2) whether MIDAS has appropriate program management, and (3) whether MIDAS has appropriate executive oversight and governance. To do so, GAO reviewed relevant department guidance and program documents and interviewed USDA officials. FSA plans to modernize the systems supporting its 37 farm programs with MIDAS. The implementation cost estimate is approximately $305 million, with a life cycle cost of approximately $473 million. However, the implementation cost estimate is uncertain because it has not been updated since 2007 and does not include cost elements that have since been identified, such as the selection of a commercial enterprise resource planning product. Following completion of its initial phase of program planning in October 2010, MIDAS entered its second of four phases--proof of concept and system design. However, the schedule for this phase, which was to be completed in October 2011, is now uncertain. While FSA officials report that the proof of concept activities are proceeding as scheduled, they have delayed a requirements review milestone until December 2011 and have not yet set a new date for the design review. As a result, the completion date for the second phase and its impact on subsequent phases is uncertain. FSA officials plan to revisit the cost and schedule estimates after completing requirements definition. FSA's program management approach includes many leading practices, but could be strengthened. 
For example, prior to the proof of concept and system design phase, plans were in place for organizational change and communication, requirements management, and risk. However, a few practices were either partially addressed or not addressed at all in program plans or in the implementation of those plans. For example, an integrated team has not yet been formed with representatives from IT programs that MIDAS depends on for its success. Moreover, the plans do not explicitly call for, and FSA has not produced, a schedule that reflects dependencies with those programs, and risks are not being regularly tracked as planned. FSA's uneven adoption of leading practices is likely to limit the agency's effectiveness in managing system development, and thus its ability to deliver system capabilities on time and within budget. Executive-level governance for MIDAS has not been clearly defined and does not fully follow department IT investment management guidance. Specifically, oversight and governance have been assigned to several department and agency bodies, but roles and escalation criteria are not clearly defined among them. Department officials reported that department guidance is being followed for monthly status reviews, but not for department-level reviews at key decision points. The lack of clarity and definition for the roles of the governance bodies could result in duplication or voids in program oversight, as well as wasted resources. Moreover, because MIDAS is not being governed according to the department's investment guidance, the department may not be rigorously monitoring and managing the program and its risks, and may not have the information it needs to make timely and appropriate decisions to ensure the success of MIDAS.
Background Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: POES, managed by NOAA’s NESDIS, and DMSP, managed by DOD. The satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products and are the predominant input to numerical weather prediction models. These images, products, and models are all used by weather forecasters, the military, and the public. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies, such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position above the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each satellite views the entire earth’s surface twice a day. Today, there are two operational POES satellites and two operational DMSP satellites that are positioned so that they can observe the earth in early morning, midmorning, and early afternoon polar orbits. Together, they ensure that for any region of the earth, the data provided to users are generally no more than 6 hours old. Figure 1 illustrates the current operational polar satellite configuration. Besides the four operational satellites, six older satellites are in orbit that still collect some data and are available to provide some limited backup to the operational satellites should they degrade or fail. In the future, both NOAA and DOD plan to continue to launch additional POES and DMSP satellites every few years, with final launches scheduled for 2008 and 2011, respectively.
Each of the polar satellites carries a suite of sensors designed to detect environmental data that is either reflected or emitted from the earth, the atmosphere, and space. The satellites store these data and then transmit them to NOAA and Air Force ground stations when the satellites pass overhead. The ground stations then relay the data via communications satellites to the appropriate meteorological centers for processing. The satellites also broadcast a subset of these data in real time to tactical receivers all over the world. Under a shared processing agreement among the four processing centers—NESDIS, the Air Force Weather Agency, Navy’s Fleet Numerical Meteorology and Oceanography Center, and the Naval Oceanographic Office—different centers are responsible for producing and distributing via a shared network different environmental data sets, specialized weather and oceanographic products, and weather prediction model outputs. Each of the four processing centers is also responsible for distributing the data to its respective users. For the DOD centers, the users include regional meteorology and oceanography centers, as well as meteorology and oceanography staff on military bases. NESDIS forwards the data to NOAA’s National Weather Service for distribution and use by government and commercial forecasters. The processing centers also use the Internet to distribute data to the general public. NESDIS is responsible for the long-term archiving of data and derived products from POES and DMSP. In addition to the infrastructure supporting satellite data processing noted above, properly equipped field terminals that are within a direct line of sight of the satellites can receive real-time data directly from the polar-orbiting satellites. An estimated 150 such field terminals are operated by U.S. and foreign governments and academia; many are operated by DOD.
Field terminals can be taken into areas with little or no data communications infrastructure—such as on a battlefield or a ship—and enable the receipt of weather data directly from the polar-orbiting satellites. These terminals have their own software and processing capability to decode and display a subset of the satellite data to the user. Figure 2 depicts a generic data relay pattern from the polar-orbiting satellites to the data processing centers and field terminals. NPOESS Overview Given the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring. To manage this program, DOD, NOAA, and NASA formed a tri-agency Integrated Program Office located within NOAA. Within the program office, each agency has the lead on certain activities. NOAA has overall program management responsibility for the converged system, as well as satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. Figure 3 depicts the organizations coordinated by the Integrated Program Office and their responsibilities. Program acquisition plans call for the procurement and launch of six NPOESS satellites over the life of the program, as well as the integration of 13 instruments, consisting of 11 environmental systems and 2 subsystems. 
Together, the sensors are to receive and transmit data on atmospheric, cloud cover, environmental, climate, oceanographic, and solar-geophysical observations. The subsystems are to support nonenvironmental search and rescue efforts and environmental data collection activities. According to the program office, 7 of the 13 planned NPOESS instruments involve new technology development, whereas 6 others are based on existing technologies. In addition, the program office considers 4 of the sensors involving new technologies critical because they provide data for key weather products; these sensors are shown in bold in table 1, which presents the planned instruments and the state of technology on each. In addition, the NPOESS Preparatory Project (NPP), which is being developed as a major risk reduction initiative, is a planned demonstration satellite to be launched in 2006, several years before the first NPOESS satellite launch in 2009. It is scheduled to host three of the four critical NPOESS sensors (the visible/infrared imager radiometer suite, the cross-track infrared sounder, and the advanced technology microwave sounder), as well as one other noncritical sensor (the ozone mapper/profiler suite). NPP will provide the program office and the processing centers an early opportunity to work with the sensors, ground control, and data processing systems. Specifically, this satellite is expected to demonstrate the validity of about half of the NPOESS environmental data records and about 93 percent of its data processing load. NPOESS Acquisition Strategy When the NPOESS development contract was awarded, program office officials identified an anticipated schedule and funding stream for the program. The schedule for launching the satellites was driven by a requirement that the satellites be available to back up the final POES and DMSP satellites should anything go wrong during the planned launches of these satellites.
In general, program officials anticipate that roughly 1 out of every 10 satellites will fail either during launch or during early operations after launch. Key program milestones included (1) launching NPP by May 2006, (2) having the first NPOESS satellite available to back up the final POES satellite launch in March 2008, and (3) having the second NPOESS satellite available to back up the final DMSP satellite launch in October 2009. If the NPOESS satellites were not needed to back up the final predecessor satellites, their anticipated launch dates would have been April 2009 and June 2011, respectively. These schedules were changed as a result of changes in the NPOESS funding stream. A DOD program official reported that between 2001 and 2002 the agency experienced delays in launching a DMSP satellite, which in turn delayed the expected launch date of a subsequent DMSP satellite. In late 2002, DOD shifted the expected launch date for the final DMSP satellite from 2009 to 2010. As a result, DOD reduced funding for NPOESS by about $65 million between fiscal years 2004 and 2007. According to NPOESS program officials, because NOAA is required to provide no more funding than DOD provides, this change triggered a corresponding reduction in funding by NOAA for those years. As a result of the reduced funding, program officials were forced to make difficult decisions about what to focus on first. The program office decided to keep NPP as close to its original schedule as possible because of its importance to the eventual NPOESS development and to shift some of the NPOESS deliverables to later years. This shift affected the NPOESS deployment schedule. To plan for this shift, the program office developed a new program cost and schedule baseline.
NPOESS Costs Have Increased, and Schedules Have Been Delayed The program office has increased the NPOESS life cycle cost estimate by $1.2 billion, from $6.9 to $8.1 billion, and delayed key milestones— including the expected availability of the first NPOESS satellite, which was delayed by 20 months. The cost increases reflect changes to the NPOESS contract as well as increased program management funds. The contract changes include extension of the development schedule, increased sensor costs, and additional funds needed for mitigating risks. Increased program management funds were added for non-contract costs and management reserves. The schedule delays were the result of stretching out the development schedule to accommodate the change in the NPOESS funding stream. In addition, the delayed launch dates of the NPOESS satellites have extended the maintenance and operation of the satellite system from 2018 to 2020. When we testified on the NPOESS program in July 2003, we reported that the program office was working to develop a new cost and schedule baseline due to a change in the NPOESS funding stream. The program office completed its efforts to revise the NPOESS cost and schedule baseline in December 2003. As a result of the revised baseline, the program office increased the NPOESS cost estimate by $638 million, from $6.9 to $7.5 billion. The program office attributed the $638 million cost increase to extending the development schedule to accommodate the changing funding stream, increased sensor costs, and additional funds needed for mitigating risks. The program office has since increased funds for non-contract costs and management reserves, which raised its estimate by an additional $562 million to bring the NPOESS life cycle cost estimate to $8.1 billion. According to program officials, non-contract costs included oversight expenses for the prime contract and sensor subcontracts. 
Management reserves, which are a part of the total program budget and should be used to fund undefined but anticipated work, are expected to last through 2020. Table 2 shows a breakdown of the cost increases resulting from the revised plan. Recently, program officials reported that a new life cycle cost estimate would be developed by the contractor and program office. The program office expects to brief its executive oversight committee on the results of its cost estimate analysis by December 2004. The new cost estimate will be used to help develop the NPOESS fiscal year 2007 budget request. Officials reported that the new estimate is necessary to ensure that the program will be adequately funded through its life. In addition to increasing the cost estimate, the program office has delayed key milestones, including the expected availability of the first satellite, which was delayed by 20 months. The program office attributed the schedule delays to stretching out the development schedule to accommodate the changing funding stream. Table 3 shows program schedule changes for key milestones. One result of the program office’s extension of several critical milestones is that less slack is built into the schedules for managing development and production issues. For example, the first NPOESS satellite was originally scheduled to be available for launch by March 2008 and to launch by April 2009. This enabled the program office to have 13 months to resolve any potential problems with the satellite before its expected launch. Currently, the first NPOESS satellite is scheduled to be available for launch by November 2009 and to launch the same month. This will allow the program office less than one month to resolve any problems. The program office has little room for error, and should something go wrong in development or production, the program office would have to delay the launch further.
NPOESS Could Experience Further Cost and Schedule Increases NPOESS costs and schedules could continue to increase in the future. The contractor’s continued slippage of expected cost and schedule targets indicates that the NPOESS contract will most likely be overrun by $500 million at contract completion in September 2011. Program risks, particularly with the development of critical sensors to be demonstrated on the NPP satellite, could also increase costs and delay schedules for NPOESS. Current Shortfalls in Cost and Schedule Targets Could Require Additional Funds to Meet Launch Deadlines To be effective, project managers need information on project deliverables and on a contractor’s progress in meeting those deliverables. One method that can help project managers track progress on deliverables is earned value management. This method, used by DOD for several decades, compares the value of work accomplished during a given period with that of the work expected in that period. Differences from expectations are measured in both cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a –$1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the work completed to the value of work that was expected to be completed. For example, if a contractor completed $5 million worth of work at the end of the month, but was budgeted to complete $10 million worth of work, there would be a –$5 million schedule variance. Positive variances indicate that activities are costing less or are completed ahead of schedule. Negative variances indicate activities are costing more or are falling behind schedule. These cost and schedule variances can then be used in estimating the cost and time needed to complete the program. 
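The variance calculations described above can be sketched in a short script. The dollar figures are the illustrative ones from the examples in the text; the function names are ours, not part of any DOD or GAO tool.

```python
# Earned value management (EVM) variance calculations, using the
# illustrative figures from the examples above. All values in millions.

def cost_variance(earned_value, actual_cost):
    """CV = earned value of the completed work minus its actual cost.
    Negative means the work cost more than budgeted."""
    return earned_value - actual_cost

def schedule_variance(earned_value, planned_value):
    """SV = earned value of the completed work minus the value of work
    that was expected to be completed. Negative means behind schedule."""
    return earned_value - planned_value

# A contractor completed $5 million worth of work that cost $6.7 million:
cv = cost_variance(earned_value=5.0, actual_cost=6.7)
print(f"Cost variance: {cv:+.1f} million")       # -1.7 million

# The same $5 million of completed work against $10 million planned:
sv = schedule_variance(earned_value=5.0, planned_value=10.0)
print(f"Schedule variance: {sv:+.1f} million")   # -5.0 million
```

Positive results from either function indicate work costing less than budgeted or running ahead of schedule; negative results indicate overruns or slippage, matching the convention used in the figures cited in this section.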
Our analysis of contractor-provided data indicates that NPOESS cost performance was experiencing negative variances before the revised plan was implemented in December 2003 and continued to deteriorate after the implementation of the revised plan. Figure 4 shows the 15-month cumulative cost variance for the NPOESS contract. From March 2003 to November 2003, the contractor exceeded its cost target by $16.1 million, which is about 4.5 percent of the contractor's budget for that time period. From December 2003 to May 2004, the contractor exceeded its cost target by $33.6 million, or about 5.7 percent of the contractor's budget. The contractor has incurred a total cost overrun of about $55 million with NPOESS development less than 20 percent complete. This information is useful because trends tend to continue and can be difficult to reverse. Studies have shown that, once programs are 15 percent complete, the performance indicators are indicative of the final outcome. Our analysis also indicates that the program is showing a negative schedule variance. Figure 5 shows the 15-month cumulative schedule variance of NPOESS. From March 2003 to November 2003, the contractor recovered almost $11 million worth of planned work in the schedule. Program officials reported that within this time period, the program office ordered the contractor to stop some work until the new baseline was established. This work stoppage contributed to schedule degradation between March 2003 and August 2003. In September 2003, the program office implemented portions of the revised plan, which resulted in an improvement in schedule performance. The revised plan alleviated some of the cumulative schedule overrun by delaying the deadline for first unit availability by 20 months. However, based on our analysis, the cumulative schedule variance indicates slippage in the new schedule. Since December 2003, the contractor has been unable to complete approximately $19.7 million worth of scheduled work. 
The current inability to meet contract schedule performance could be a predictor of future rising costs, as more spending is often necessary to resolve schedule overruns. According to program office documents, cost and schedule overruns that occurred before December 2003 were caused by planning activities related to the revised plan, as well as by technical issues related to the development of the critical sensors and the spacecraft communications software. Since the completion of the revised plan, the program's ability to meet the new performance goals continues to be hampered by technical issues with the design complexity, testing, and integration, among other things, of the critical sensors. These technical issues could cause further cost and schedule shortfalls. Based on contractor performance from December 2003 to May 2004, we estimate that the current NPOESS contract—which ends in September 2011 and is worth approximately $3.4 billion—will overrun its budget by between $372 million and $891 million. Our projection of the most likely cost overrun is about $534 million, or about 16 percent of the contract. The contractor, in contrast, estimates about a $130 million overrun at completion of the NPOESS contract. Risks Could Further Affect NPOESS Cost and Schedule Risk management is a leading management practice that is widely recognized as a key component of a sound system development approach. An effective risk management approach typically includes identifying, prioritizing, resolving, and monitoring project risks. Program officials reported that they recognize several risks with the overall program and critical sensors that, if not mitigated, could further increase costs and delay the schedule. 
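The methodology section of this report notes that three estimate-at-completion values were computed from established earned value formulas, with the middle value taken as most likely. The report does not spell those formulas out; the version below is a common textbook formulation, assumed here for illustration, and the input figures are invented rather than the NPOESS actuals.

```python
# Common earned-value estimate-at-completion (EAC) formulas. The specific
# formulas GAO used are not given in the report; these are illustrative.

def eac_range(bac, bcwp, acwp, bcws):
    """Return (optimistic, most_likely, pessimistic) estimates at completion.

    bac  - budget at completion (total contract budget)
    bcwp - cumulative earned value (budgeted cost of work performed)
    acwp - cumulative actual cost of work performed
    bcws - cumulative planned value (budgeted cost of work scheduled)
    """
    cpi = bcwp / acwp                       # cost performance index
    spi = bcwp / bcws                       # schedule performance index
    remaining = bac - bcwp
    low = acwp + remaining                  # assumes remaining work goes to plan
    mid = acwp + remaining / cpi            # assumes the cost trend continues
    high = acwp + remaining / (cpi * spi)   # cost and schedule trends compound
    return low, mid, high

# Hypothetical numbers (in millions) for illustration only:
low, mid, high = eac_range(bac=3400.0, bcwp=600.0, acwp=655.0, bcws=620.0)
```

Because the cost and schedule indices are both below 1.0 in this example, the three formulas produce a spread of estimates, which is the same pattern as the $372 million to $891 million range reported above.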
In accordance with leading management practices, the program office developed an NPOESS risk management program that requires assigning a severity rating to risks that bear particular attention, placing these risks in a database, planning response strategies for each risk in the database, and reviewing and evaluating risks in the database during monthly program risk management board meetings. The program office identifies risks in two categories: program risks, which affect the whole NPOESS program and are managed at the program office level, and segment risks, which affect only individual segments and are managed at the integrated product team level. The program office has identified 21 program risks, including 14 medium to medium-high risks. These risks include the development of three critical sensors (the visible/infrared imager radiometer suite (VIIRS), the cross-track infrared sounder (CrIS), and the conical-scanned microwave imager/sounder (CMIS)) and the integrated data processing system; the uncertainty that algorithms will meet system performance requirements; and the effort to obtain a security certification and accreditation. Figure 6 includes the 21 program risks and their assigned levels of risk. Managing the risks associated with the development of VIIRS and CrIS, the integrated data processing system, and algorithm performance is of particular importance because these are to be demonstrated on the NPP satellite, currently scheduled for launch in October 2006. Any delay in the NPP launch date could affect the overall NPOESS program because the success of the program depends on the lessons learned in data processing and system integration from the NPP satellite. At present, the program office considers the three critical sensors—VIIRS, CMIS, and CrIS—to be key program risks because of the technical challenges that each is facing. 
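The workflow the program office describes (severity ratings, a risk database, response strategies, and periodic board review) can be sketched generically. The severity levels, field names, and entries below are assumptions made for illustration, not the program office's actual schema.

```python
# Generic sketch of a risk register of the kind the program office describes.
# Severity levels, field names, and the sample entries are illustrative only.
from dataclasses import dataclass, field

SEVERITY_ORDER = ["low", "medium", "medium-high", "high"]

@dataclass
class Risk:
    name: str
    category: str               # "program" (program-wide) or "segment"
    severity: str               # one of SEVERITY_ORDER
    response_strategy: str = "" # planned mitigation, if any

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk):
        assert risk.severity in SEVERITY_ORDER
        self.risks.append(risk)

    def board_review_queue(self):
        """Risks sorted most severe first, as a monthly review board might see them."""
        return sorted(self.risks,
                      key=lambda r: SEVERITY_ORDER.index(r.severity),
                      reverse=True)

register = RiskRegister()
register.add(Risk("VIIRS sensor schedule", "program", "high",
                  "added schedule reviews; re-plan integration and test"))
register.add(Risk("Algorithm performance", "program", "medium"))
queue = register.board_review_queue()   # VIIRS entry surfaces first
```

Sorting by severity mirrors the practice of giving the monthly risk management board the highest-rated items first.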
VIIRS’s most severe technical issue, relating to flight-quality integrated circuits, was recently resolved; however, the program office continues to consider the schedule for the VIIRS sensor acquisition to be high risk. The prime contractor’s analysis of the current schedule indicated that the present schedule is unlikely to be achieved, considering the technical risks, the optimistically planned integration and test phase, and the limited slack in the schedule at this stage of the program. VIIRS is experiencing ongoing technical issues on major subcontracts related to the motors, rotating telescope, and power supply. As a result of the numerous ongoing issues—many of which affect system performance—significantly more modeling, budget allocation work, and performance reviews have been required than were originally planned. Until the current technical issues are resolved, delays in the VIIRS delivery and integration onto the NPP satellite remain a potential threat to the expected launch date of the NPP. The CMIS and CrIS sensor acquisitions are experiencing schedule overruns that may threaten their respective expected delivery dates. CMIS technical challenges include unplanned redesigns for receiver and antenna components, system reliability issues, and thermal issues. A significant amount of CrIS’s developmental progress has been impeded by efforts to address a signal processor redesign, vibration issues in an optical instrument, and the late subcontract deliveries of some parts. To the program office’s credit, it is aware of these risks and is using its risk management plans to help mitigate them. We plan to further evaluate the risk mitigation strategies of the Integrated Program Office in a follow-on review. 
Conclusions The next generation polar-orbiting environmental satellite program, NPOESS, recently underwent a replanning effort that increased the NPOESS cost estimate by $1.2 billion, from $6.9 billion to $8.1 billion, and delayed key milestones, including delaying the expected availability of the first satellite by 20 months. Other factors could further affect the revised cost and schedule estimates. Specifically, the current shortfalls in performance targets indicate that the NPOESS contract will most likely be overrun by $500 million at completion in September 2011, and program risks could contribute to additional cost and schedule slips. The program office is planning to develop new cost estimates but has not yet determined the impact of these risks. Given the history of large cost increases and the factors that could further affect NPOESS costs and schedules, continued oversight is more critical than ever. Accordingly, we plan to continue our review of this program. Agency Comments We provided a draft of this report to the Secretary of Commerce, the Secretary of Defense, and the Administrator of NASA for review and comment. The departments generally agreed with the report and provided written and oral technical corrections, which have been incorporated as appropriate. Officials from NOAA, the Integrated Program Office, and DOD, including the System Program Director of the NPOESS Integrated Program Office and the Assistant for Environmental Monitoring from the Office of the Assistant Secretary of Defense, noted that changes in funding levels, triggered after the contract was awarded, were the primary reason for rebaselining the program’s costs and schedules. These funding level changes caused them to delay the development of the NPOESS system and led them to renegotiate the NPOESS contract. We revised our report to clarify the factors leading up to revising the baseline. 
Additionally, NOAA officials commented that the Integrated Program Office continues to aggressively manage the NPOESS program to ensure it is completed within cost, schedule, and performance goals. In regard to our estimate that the contract will overrun by at least $500 million, NOAA officials reported that the agency will manage the contract to ensure that any cost overrun is identified and addressed. To this end, NOAA has asked the contractor to develop a new life cycle cost estimate. NOAA and DOD officials also noted that in August 2004, the President directed the Departments of Defense, the Interior, and Commerce, as well as NASA, to place a LANDSAT-like imagery capability on the NPOESS platform. This new capability will collect imagery data of the earth’s surface similar to the current LANDSAT series of satellites, which are managed by the Department of the Interior’s U.S. Geological Survey and are reaching the end of their lifespans. Officials expect that this new sensor will be funded separately and will not affect the NPOESS program’s cost or schedule. Accordingly, while this recent event is important to the NPOESS program, it does not change the results of our report. We are sending copies of this report to the Secretary of Commerce, the Secretary of Defense, and the Administrator of NASA. In addition, copies will be available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions about this report, please contact me at (202) 512-9286 or Colleen Phillips, Assistant Director, at (202) 512-6326. We can also be reached by e-mail at [email protected] and [email protected], respectively. Other key contributors to this report included Carol Cha, Barbara Collier, John Dale, Neil Doherty, Karen Richey, and Eric Winter. 
Objectives, Scope, and Methodology Our objectives were to (1) identify any cost or schedule changes as a result of the revised baseline and determine what contributed to these changes and (2) identify factors that could affect the program baseline in the future. To accomplish these objectives, we focused our review on the Integrated Program Office (IPO), the organization responsible for the overall National Polar-orbiting Operational Environmental Satellite System (NPOESS) program. To identify any cost or schedule changes as a result of the revised baseline, we reviewed the new NPOESS cost and schedule baseline and compared it to the old acquisition baseline, as reported in our July 2003 testimony. To determine the factors that contributed to the cost and schedule changes in the new baseline, we reviewed program office plans and management reports. We also interviewed IPO officials to discuss these contributing factors. To identify factors that could affect the program baseline in the future, we assessed the prime contractor’s performance related to cost and schedule. To make these assessments, we applied earned value analysis techniques to data captured in contractor cost performance reports. We compared the cost of work completed with the budgeted costs for scheduled work for a 15-month period, from March 2003 to May 2004, to show trends in cost and schedule performance. We also used data from the reports to estimate the likely costs at the completion of the prime contract through established earned value formulas. This resulted in three different values, with the middle value being the most likely. We used the base contract without options for our earned value assessments. We reviewed these cost reports and program risk management documents and interviewed program officials to determine the key risks that negatively affect NPOESS’s ability to maintain the current schedule and cost estimates. 
We reviewed independent cost estimates performed by the Air Force Cost Analysis Agency and compared them with the program office cost estimates in order to determine possible areas for cost growth. To assess the potential effect of the NOAA-N Prime satellite incident on the current program baseline, we reviewed documentation related to the POES accident and alternatives for moving forward and interviewed officials from the National Aeronautics and Space Administration (NASA) and NOAA’s National Environmental Satellite, Data, and Information Service. We obtained comments on a draft of this report from officials at the Department of Defense (DOD), NOAA, and NASA, and incorporated these comments as appropriate. We performed our work at the Integrated Program Office, DOD, NASA, and NOAA in the Washington, D.C., metropolitan area between November 2003 and August 2004 in accordance with generally accepted government auditing standards. GAO’s Mission The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. 
To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
Our nation's current operational polar-orbiting environmental satellite program is a complex infrastructure that includes two satellite systems, supporting ground stations, and four central data processing centers. In the future, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) is to combine the two current satellite systems into a single state-of-the-art environment monitoring satellite system. This new satellite system is considered critical to the United States' ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2020. Because of changes in funding levels after the contract was awarded, the program office recently developed a new cost and schedule baseline for NPOESS. GAO was asked to provide an interim update to (1) identify any cost or schedule changes as a result of the revised baseline and determine what contributed to these changes and (2) identify factors that could affect the program baseline in the future. In commenting on a draft of this report, DOD, NOAA, and NASA officials generally agreed with the report and offered technical corrections, which we incorporated where appropriate. The program office has increased the NPOESS cost estimate by $1.2 billion, from $6.9 to $8.1 billion, and delayed key milestones, including the availability of the first NPOESS satellite--which was delayed by 20 months. The cost increases reflect changes to the NPOESS contract as well as increased program management costs. The contract changes include extension of the development schedule to accommodate changes in the NPOESS funding stream, increased sensor costs, and additional funds needed for mitigating risks. Increased program management funds were added for non-contract costs and management reserves. The schedule delays were the result of stretching out the development schedule to accommodate a change in the NPOESS funding stream. 
Other factors could further affect the revised cost and schedule estimates. First, the contractor is not meeting expected cost and schedule targets of the new baseline because of technical issues in the development of key sensors. Based on its performance to date, GAO estimates that the contractor will most likely overrun its contract at completion in September 2011 by at least $500 million. Second, the risks associated with the development of the critical sensors, integrated data processing system, and algorithms could also contribute to increased cost and schedule slips.
CMS Faces Challenges in Implementing Strategies to Prevent Fraud, Waste, and Abuse GAO has identified key strategies to help CMS address challenges it faces in preventing fraud, waste, and abuse and, ultimately, to reduce improper payments. These strategies are: (1) strengthening provider enrollment processes and standards, (2) improving pre-payment review of claims, (3) focusing post-payment claims review on the most vulnerable areas, (4) improving oversight of contractors, and (5) developing a robust process for addressing identified vulnerabilities. In the course of our work, we have found that CMS has made progress in some of these areas, and recent legislation may provide it with enhanced authority. However, CMS has not implemented some of our recommendations, and other challenges remain. Strengthening Provider Enrollment Processes and Standards to Reduce the Risk of Enrolling Providers Intent on Abusing the Program Given the large number of providers filing claims with Medicare and the volume of payments the agency and its contractors handle, ensuring that providers are legitimate businesses before allowing them to bill Medicare is important. Checking the background of providers at the time they apply to become Medicare providers is a crucial step to reduce the risk of enrolling providers intent on defrauding or abusing the program. In particular, we have recommended stricter scrutiny of enrollment processes for two types of providers whose services and items CMS has identified as especially vulnerable to improper payments—home health agencies (HHA) and suppliers of durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS). CMS contractors are responsible for screening enrollment applications from prospective HHAs. We found that the screening process was not thorough, which may have contributed to a rapid increase in the number of HHAs billing Medicare in certain states with unusually high rates of billing patterns indicative of fraud and abuse. 
For example, the contractors were not required to verify the criminal history of persons named on the application. We recommended that CMS assess the feasibility of verifying the criminal history of all key officials named on an HHA enrollment application; to date, CMS has not implemented this recommendation. Regarding DMEPOS suppliers, we also found that CMS had not taken sufficient steps to prevent entities intent on defrauding Medicare from enrolling in the program. In 2005, we reported that more effective screening and stronger enrollment standards were needed. CMS implemented new supplier enrollment standards in January 2008, partly in response to our recommendation. However, in that same year, we exposed persistent weaknesses when we created two fictitious medical equipment companies that were enrolled by CMS’s contractor and given permission to begin billing Medicare. As an enrollment requirement, suppliers must show that they have contracts for obtaining inventory—but the contracts provided with the applications of our fictitious companies would have been revealed as fabricated had they been reviewed properly. Since January 2008, CMS has taken two additional steps to ensure that only legitimate DMEPOS suppliers can bill Medicare. First, it implemented a requirement for DMEPOS suppliers to post a surety bond to help ensure that the Medicare program recoups erroneous payments that result from fraudulent or abusive billing practices. Second, CMS required that all DMEPOS suppliers be accredited by a CMS-approved accrediting organization to ensure that they meet minimum standards. CMS told us that thousands of DMEPOS suppliers were removed as a result of these requirements. 
In addition, Congress has directed CMS to implement a competitive bidding program for DME, which could also help reduce fraud, waste, and abuse because it authorizes CMS to select suppliers based in part on new scrutiny of their financial documents and other application materials. However, the program will not take effect until January 2011, and it will initially be implemented in only nine metropolitan areas. Implementation of additional authorities in PPACA and HCERA also may help the agency strengthen provider enrollment, including addressing vulnerabilities our work has identified. In particular, among other provisions, the legislation allows HHS to (1) add criminal and background checks to its enrollment screening processes, depending on the risks presented by the provider; and (2) impose a temporary moratorium on enrollment of providers, if the agency deems it necessary to prevent fraud and abuse. In addition, there are specific requirements for providers to disclose any current or previous affiliation with a provider or supplier that has uncollected debt; has been or is subject to a payment suspension under a federal health care program; has been excluded from participation under Medicare, Medicaid, or the State Children’s Health Insurance Program (CHIP); or has had its billing privileges denied or revoked. HHS may deny enrollment to any such provider whose previous affiliations pose an undue risk. However, the effectiveness of these authorities is unknown and will depend on CMS’s implementation. CMS told us that the agency is in the process of implementing these authorities, including drafting regulations on criminal and background checks. Improving Pre-Payment Review of Claims Pre-payment reviews of claims are essential to helping ensure that Medicare pays correctly the first time; however, these reviews are challenging due to the volume of claims. 
Overall, less than 1 percent of Medicare’s claims are subject to a medical record review by trained personnel—so having robust automated payment controls called edits in place that can deny inappropriate claims or flag them for further review is critical. However, we have found weaknesses in these pre-payment controls. For example, in 2007, we found that contractors responsible for reviewing DMEPOS suppliers’ claims did not have automated pre-payment controls in place to identify questionable claims that might connote fraud, such as those associated with atypically rapid increases in billing or for items unlikely to be prescribed in the course of routine quality medical care. As a result, we recommended in 2007 that CMS require its contractors to develop thresholds for unexplained increases in billing and use them to develop automated pre-payment controls. Although CMS has not implemented that recommendation specifically, it has added edits to flag claims for services that were unlikely to be provided in the normal course of medical care. This is a valuable addition to the program’s safeguards, but additional pre-payment controls, such as using thresholds for unexplained increases in billing, could further enhance CMS’s ability to identify improper claims before they are paid. Focusing Post-Payment Claims Review on Most Vulnerable Areas Post-payment reviews are critical to identifying payment errors to recoup overpayments. CMS’s contractors have conducted limited post-payment reviews—for example, we reported in 2009 that two contractors paying home health claims conducted post-payment reviews on fewer than 700 of the 8.7 million claims that they paid in fiscal year 2007. Further, we found that they were not using evidence, such as findings from pre-payment reviews, to target their post-payment review resources on providers with a demonstrated high risk of improper payments. 
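An automated pre-payment edit of the kind recommended here, one that flags suppliers whose billing rises atypically fast, can be pictured as a simple threshold check. The 50 percent growth threshold, the data layout, and the supplier names below are all invented for illustration; they are not CMS's actual edit criteria.

```python
# Conceptual sketch of an automated pre-payment "edit" that flags suppliers
# whose monthly billing grows atypically fast. The 50 percent month-over-month
# growth threshold and the data layout are illustrative assumptions only.

GROWTH_THRESHOLD = 0.5  # flag month-over-month growth above 50 percent

def flag_rapid_billing_increases(monthly_billing):
    """monthly_billing: {supplier_id: [billed amount per month, oldest first]}.
    Returns the supplier IDs whose billing exceeded the growth threshold."""
    flagged = set()
    for supplier, amounts in monthly_billing.items():
        for prev, cur in zip(amounts, amounts[1:]):
            if prev > 0 and (cur - prev) / prev > GROWTH_THRESHOLD:
                flagged.add(supplier)
                break
    return flagged

claims = {
    "supplier_a": [10_000, 11_000, 10_500],   # stable billing
    "supplier_b": [10_000, 40_000, 90_000],   # rapid, unexplained increase
}
suspect = flag_rapid_billing_increases(claims)  # only supplier_b is flagged
```

In practice such an edit would run before payment, holding the flagged claims for medical record review rather than denying them outright.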
We recommended that post-payment reviews be conducted on claims submitted by HHAs with high rates of improper billing identified through pre-payment review. In response, CMS commented that other types of post-payment review may already include claims from these HHAs. We continue to believe including this targeted post-payment review should be a priority. Cross-checking claims for home health services with the physicians who prescribed them can be a further safeguard against fraud, waste, and abuse, but we have found that this is not always done. For example, CMS does not routinely provide physicians responsible for authorizing home health care with information that would enable them to determine whether an HHA was billing for unauthorized care. In one instance, a CMS contractor identified overpayments in excess of $9 million after interviewing physicians who had referred beneficiaries with high home health costs. The physicians indicated that their signatures had been forged or that they had not realized the amount of care they had authorized. We recommended that CMS require that physicians receive a statement of services beneficiaries received based on the physicians’ certification, but to date, the agency has not taken action. CMS’s new national recovery audit contracting program, begun in March 2009, was intended to address post-payment efforts; however, we continue to have concerns about post-payment reviews of HHAs and DMEPOS. Congress authorized the national program after completion of a three-year recovery audit contracting demonstration program in 2008. The national program is designed to help the agency supplement the pre- and post-payment reviews of other contractors. Recovery audit contractors (RAC) review claims after payment, with reimbursement to them contingent on finding improper overpayments and underpayments. 
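The cross-check described above, comparing what an HHA billed against what the certifying physician actually authorized, is conceptually a set comparison. The identifiers, field layout, and sample records below are hypothetical, invented purely to illustrate the idea.

```python
# Conceptual sketch of cross-checking home health claims against physician
# authorizations. All identifiers and records are hypothetical.

def unauthorized_claims(claims, authorizations):
    """claims: list of (claim_id, beneficiary, physician) tuples billed by an HHA.
    authorizations: set of (beneficiary, physician) pairs the physician
    actually certified. Returns the claims with no matching authorization."""
    return [c for c in claims if (c[1], c[2]) not in authorizations]

claims = [
    ("c1", "beneficiary_1", "dr_smith"),
    ("c2", "beneficiary_2", "dr_smith"),   # never certified by dr_smith
]
authorized = {("beneficiary_1", "dr_smith")}
flagged = unauthorized_claims(claims, authorized)  # contains only claim c2
```

Sending physicians a periodic statement of the services billed under their certification, as recommended above, would let the physicians themselves perform this comparison.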
Because RACs are paid on a contingent fee based on the dollar value of the improper payments identified, during the demonstration RACs focused on claims from inpatient hospital stays, which are generally more costly services. Therefore, other contractors’ post-payment review activities could be more valuable if CMS directed these contractors to focus on items and services where RACs are not expected to focus their reviews, and where improper payments are known to be high, specifically home health and durable medical equipment. Improving Oversight of Contractors Because Medicare is administered by contractors, such as drug plan sponsors, overseeing their activities to address fraud, waste, and abuse and prevent improper payment is critical. All drug plan sponsors are required to have programs to safeguard the Medicare prescription drug program from fraud, waste, and abuse. CMS’s oversight of these programs has been limited but is expanding. In March 2010, we testified that CMS had completed desk audits of selected sponsors’ compliance plans. At that time, CMS was beginning to implement an expanded oversight strategy, including revising its audit protocol and piloting on-site audits, to assess the effectiveness of these programs more thoroughly. As of June 2010, the agency has conducted 5 on-site audits and plans to conduct a total of 30 on-site audits by the end of the fiscal year. These audits are in response to a recommendation we made in our 2008 study that found that the five sponsors we reviewed (covering more than one-third of total Medicare prescription drug plan enrollees) had not completely implemented all seven of CMS’s required compliance plan elements and selected recommended measures for a Medicare prescription drug fraud, waste, and abuse program. In addition, CMS published a final rule in April 2010 to increase its oversight efforts and ensure that sponsors have effective compliance programs in place. 
In issuing the proposed rule, CMS noted that we had requested that the agency take actions to evaluate and oversee fraud and abuse programs to ensure sponsors have effective programs in place. Developing a Robust Process for Addressing Identified Vulnerabilities Having mechanisms in place to resolve vulnerabilities that lead to improper payment is critical to program management, but CMS has not developed a robust process to specifically address identified vulnerabilities that lead to improper payment. Our Standards for Internal Control in the Federal Government indicate that an agency’s controls should include policies and procedures to ensure that (1) the findings of all audits and reviews are promptly evaluated, (2) decisions are made about the appropriate response to these findings, and (3) actions are taken to correct or otherwise resolve the issues promptly. Further, our Internal Control Management and Evaluation Tool affirms that, in order to establish an effective internal control environment, the agency has to appropriately assign authority, including holding individuals accountable for achieving agency objectives. As we reported in March 2010, CMS did not establish an adequate process during its initial recovery audit contracting demonstration or in planning for the national program to ensure prompt resolution of identified improper payment vulnerabilities. During the demonstration, CMS did not assign responsibility for taking corrective action on these vulnerabilities to agency officials, to contractors, or to a combination of the two. According to CMS officials, the agency only takes corrective action for vulnerabilities with national implications and leaves it up to the contractors that process and pay claims to decide whether to take action for vulnerabilities that may only be occurring in certain geographic areas. 
Additionally, during the demonstration CMS did not specify in a plan what type of corrective action was required or establish a time frame for corrective action. This lack of documented, assigned responsibilities impeded CMS’s efforts to promptly resolve the vulnerabilities that had been identified during the demonstration. For the recovery audit contracting national program, CMS established a corrective action team that will compile, review, and categorize identified vulnerabilities and discuss corrective action recommendations. CMS has also made the Director of the Office of Financial Management responsible for the day-to-day operations of the program, and the CMS Administrator the responsible official for vulnerabilities that span agency components. However, the corrective action process still does not include any steps to either assess the effectiveness of the corrective actions taken or adjust them as necessary based on the results of the assessments. Further, the agency has not developed time frames for implementing corrective actions. We recommended that CMS develop and implement a process that includes policies and procedures to ensure that the agency promptly (1) evaluates findings of RAC audits, (2) decides on the appropriate response and a time frame for taking action based on established criteria, and (3) acts to correct the vulnerabilities identified. CMS concurred with this recommendation. Agency officials indicated that they intended to review vulnerabilities on a case-by-case basis and were considering assigning them to risk categories that would help them prioritize action. However, this recommendation has not been implemented. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or other members of the subcommittees may have. For further information about this statement, please contact Kathleen M. King at (202) 512-7114 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Sheila Avruch, Christine Brudevold, and Martin T. Gahart, Assistant Directors; Lori Achman; Jennie F. Apter; Thomas Han; Jennel Harvey; Amanda Pusey; and James R. Walker were key contributors to this statement. Related GAO Products Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight. GAO-10-143. Washington, D.C.: March 31, 2010. Medicare Part D: CMS Oversight of Part D Sponsors’ Fraud and Abuse Programs Has Been Limited, but CMS Plans Oversight Expansion. GAO-10-481T. Washington, D.C.: March 3, 2010. Medicare: CMS Working to Address Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program. GAO-10-27. Washington, D.C.: November 6, 2009. Medicare: Improvements Needed to Address Improper Payments in Home Health. GAO-09-185. Washington, D.C.: February 27, 2009. Medicare Part D: Some Plan Sponsors Have Not Completely Implemented Fraud and Abuse Programs, and CMS Oversight Has Been Limited. GAO-08-760. Washington, D.C.: July 21, 2008. Medicare: Covert Testing Exposes Weaknesses In The Durable Medical Equipment Supplier Screening Process. GAO-08-955. Washington, D.C.: July 3, 2008. Medicare: Competitive Bidding For Medical Equipment and Supplies Could Reduce Program Payments, but Adequate Oversight is Critical. GAO-08-767T. Washington, D.C.: May 6, 2008. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007. Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has designated Medicare as a high-risk program since 1990, in part because the program's size and complexity make it vulnerable to fraud, waste, and abuse. Fraud represents intentional acts of deception with knowledge that the action or representation could result in an inappropriate gain, while abuse represents actions inconsistent with acceptable business or medical practices. Waste, which includes inaccurate payments for services, also occurs in the Medicare program. Fraud, waste, and abuse all can lead to improper payments: overpayments or underpayments that should not have been made or that were made in an incorrect amount. In 2009, the Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare, estimated billions of dollars in improper payments in the Medicare program. This statement will focus on challenges facing CMS and selected key strategies that are particularly important to helping prevent fraud, waste, and abuse, and ultimately to reducing improper payments. It is based on nine GAO products issued from September 2005 through March 2010 using a variety of methodologies, including analysis of claims, review of relevant policies and procedures, stakeholder interviews, and site visits. GAO received updated information from CMS in June 2010. GAO has identified challenges and strategies in five key areas that are important to preventing fraud, waste, and abuse and, ultimately, to reducing improper payments.
Background The United States Housing Act of 1937 established the Public Housing Program to provide decent, safe, and sanitary housing for low-income families. For many years, this act was interpreted to exclude Native Americans living in or near tribal areas. In 1961, however, HUD and the Bureau of Indian Affairs (BIA) determined that Native Americans could legally participate in the rental assistance for low-income families authorized by the 1937 act and issued regulations to implement this determination. In 1988, the Indian Housing Act established an Indian housing program separate from public housing under the Housing Act of 1937 and prompted HUD to issue regulations specific to this program. With the recently enacted Native American Housing Assistance and Self-Determination Act of 1996 (regulations are scheduled to take effect on October 1, 1997), the Congress completed the process of separating Indian housing from public housing. According to a May 1996 report by the Urban Institute, the housing needs of Native Americans are growing. Their population rose sixfold over the past four decades to over 2 million in 1990; 60 percent of them live in tribal areas or in the surrounding counties. Compared to non-Indians, Native Americans are more family-oriented—37 percent of Native American households are married couples with children, versus 28 percent of non-Indian households. Native Americans also have a higher unemployment rate (14 percent versus 6 percent), a smaller number of workers in “for-profit” firms per thousand people (255 versus 362), and a higher share of households with very low incomes (33 percent versus 24 percent). Moreover, Indian housing conditions are much worse than housing conditions in other areas of the country: 40 percent of Native Americans in tribal areas live in overcrowded or physically inadequate housing, compared to 6 percent of the U.S. population.
Through its Native American Programs headquarters office and its six field offices, and with the help of approximately 189 Indian housing authorities (IHA), HUD administers the majority of the housing programs that benefit Native American families in or near tribal areas. Several significant differences exist, however, between HUD’s assistance to these families and to families (non-Indian and Indian) living in urban and other areas. First, HUD’s support for Native Americans derives, in part, from the nation’s recognition of special obligations to the Native American population and is reflected in treaties, legislation, and executive orders. Second, the federal government deals with recognized tribes directly in a sovereign-to-sovereign relationship, rather than through the general system of state and local government. This status allows tribes to establish their own system of laws and courts. Third, the BIA often holds in trust a considerable amount of land for a tribe as a whole; thus, this land is not subdivided into many private holdings as occurs in the rest of the country. This trust arrangement has frustrated the development of private housing markets in tribal areas and has long been seen as providing special justification for government assistance in housing production. HUD Provides Most Funding for Housing Assistance Through Indian Housing Authorities Under current regulations, IHAs administer most of the low-income housing assistance that HUD provides to Native Americans. But HUD also provides some housing assistance directly to tribes and individuals. 
Funding provided through housing authorities is used to

- develop housing for eventual ownership by individual families through the Mutual Help Program, under which families lease and then buy their homes by making payments to an IHA of approximately 15 percent of their income and must cover their own routine operating and maintenance expenses;
- develop and maintain rental housing for low-income families through the Rental Housing Program, under which, as with the public housing program, low-income families rent housing from the IHA at a cost of 30 percent of their adjusted income;
- modernize and rehabilitate established low-income housing through the public housing modernization program; and
- subsidize IHAs to defray operating expenses that rental income does not cover and provide rental vouchers for low-income families.

Funding available to tribes and individuals includes loan guarantees for home mortgages, block grants through the HOME Investment Partnership Program for tribes to develop affordable housing in tribal areas, and community development block grants to enhance infrastructure and other economic development activities. Figure 1 shows the funding for fiscal year 1995 for these programs, and table 1 describes the programs’ results. As shown in figure 2 and table 2, over the past decade HUD provided a total of $4.3 billion for these programs, which have produced or planned to produce a total of 24,002 housing units. Additional information on the funding and the programs is contained in appendix I. Providing Housing Assistance for Native Americans Is Challenging and Costly The cultural and geographic environment of tribal areas differs from mainstream America and causes HUD and IHAs to encounter unique challenges and costly conditions as they administer and provide housing programs for Native Americans.
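The two tenant-payment rules described above (roughly 15 percent of income under the Mutual Help Program and 30 percent of adjusted income under the Rental Housing Program) can be sketched in a few lines of code. This is a simplified illustration, not HUD's actual calculation rules; the function names and the conversion to a monthly figure are assumptions.

```python
# Simplified sketch of the two payment formulas described in the report.
# The percentages come from the report; everything else (function names,
# monthly conversion, the notion of "adjusted income") is illustrative.

def mutual_help_monthly_payment(annual_income: float) -> float:
    """Approximate monthly lease-purchase payment: ~15 percent of income."""
    return annual_income * 0.15 / 12

def rental_monthly_payment(annual_adjusted_income: float) -> float:
    """Approximate monthly rent: 30 percent of adjusted income."""
    return annual_adjusted_income * 0.30 / 12

# Example: a family with $12,000 in annual (adjusted) income
print(round(mutual_help_monthly_payment(12_000), 2))  # 150.0
print(round(rental_monthly_payment(12_000), 2))       # 300.0
```

Note that a Mutual Help family also covers its own routine operating and maintenance expenses on top of this payment, whereas the rental figure is the tenant's full housing cost to the IHA.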
Because there are over 550 separate Indian nations, with unique cultures and traditions, not all of these conditions are equally prevalent throughout tribal areas, nor do they have a common impact on developing and maintaining housing. Among the challenges and conditions highlighted in our discussions with officials of HUD and several IHAs, as well as in the May 1996 study by the Urban Institute, are the remoteness and limited human resources of many IHAs and the Native American communities they serve; the lack of suitable land and the inhospitality of the climate; the difficulty contractors and IHAs have in complying with statutory requirements to give hiring preference to Native Americans; and the pressure that vandalism, tenants’ neglect, and unpaid rent put on scarce maintenance funds. Remote Reservations Limit Infrastructure and Availability of Human Resources The extent and pattern of Native American landholding are very different today from what they were at the beginning of the 19th century. During that century, the land area over which Indians had sovereignty and which was available for creating reservations was often reduced to small pieces in isolated areas. The remoteness of some tribal areas has created significant problems for housing development. In contrast to metropolitan areas, where basic infrastructure systems (sewers, landfills, electricity, water supply and treatment, and paved roads) are already in place, remote tribal areas require a large capital investment to create these systems to support new housing. Table 3 shows the investment needed at the Gila River Housing Authority in Sacaton, Arizona, to build a single-family home. According to HUD officials, the cost of site improvements—creating and connecting to the “off-site” infrastructure—is 43 percent higher than for a public housing project in an urban area near the Gila River Indian Reservation. 
The remoteness of many of the tribal areas also increases the cost of transporting supplies, raises labor costs, and reduces the availability of supplies and of an “institutional infrastructure” of developers and governmental and private entities. For example, transporting a drilling rig over many miles and hours into the desert to a tribal area in California is far more costly than if the well had been needed in a less remote area. In addition, in its study of Native Americans’ housing needs, the Urban Institute found that private housing developers, contractors, and suppliers; governmental planners and building inspectors; private financial institutions; and nonprofit groups are all less available in remote tribal areas. The limited human resources of many IHAs also contribute to the high cost of developing and maintaining housing. HUD’s Deputy Assistant Secretary for Native American Programs told us that housing authorities that recruit their staff from a small tribal population often have difficulty finding qualified managers to administer multimillion-dollar housing grants. This problem is made worse when coupled with the statutory requirement to give Indians first consideration for such jobs. According to the Deputy Assistant Secretary, because many Indian applicants lack formal education, the time they need to become familiar with specialized housing operations can be longer than that needed by applicants from the larger pool enjoyed by a public housing authority in an urban area. The executive director at the Gila River Housing Authority echoed these views when he described his inability to hire skilled and dependable tribal members. He pointed out that many skilled members have personal problems caused by drugs and alcohol, causing the housing authority to search outside the tribal area for much of its labor force. 
He also said that because members of the available semiskilled workforce need a significant amount of training before they are employable, he cannot afford to hire them. Moreover, some of the tribe’s laborers are drawn to cities away from the reservation, he explained, because of the greater employment opportunities and higher wages there. This lack of skilled human resources is costly. HUD officials told us that as a general rule in the construction industry, labor costs should not exceed 50 percent of the total cost, but in tribal areas labor costs can run as high as 65 percent because contractors generally have to bring in skilled workers and pay for lodging or commuting costs. Land-Use Restrictions and the Inhospitality of the Land Complicate the Development and Maintenance of Low-Income Housing Although the dominant visual impression in many tribal areas is a vast expanse of unused land, a lack of available land is, in fact, a constraint that many IHAs face as they develop low-income housing. Factors that limit the availability of land for housing include the trusts in which BIA holds the land; until this year, these trusts limited leases to 25 years in many instances. Special environmental and other restrictions also exist. For example, in planning for development, IHAs and tribes avoid archaeological and traditional burial sites because cultural and religious beliefs preclude using these sites for housing. In many cases, sufficient tribal land exists for housing, but environmental restrictions prohibit the use of much of it. The Urban Institute’s survey of IHAs revealed that, overall, wetlands restrictions, water quality considerations, and contaminated soils add to the cost of housing in tribal areas. In the Western desert, once low-income housing is developed, the severity of the climate can complicate maintenance.
The effects of the high salt and mineral content in the water and soil were evident at the Gila River Housing Authority, where the water damages water heaters and copper and cast iron pipes. The executive director told us that the average life of a hot water heater costing $300 is about 6 months. To remedy the corrosion of plumbing, the Gila River IHA has begun placing plumbing in ceilings for better access and converting to plastic piping. The high mineral content in the water also damages the water circulation systems of large fans called “swamp coolers,” used for summer cooling. The executive director told us that because of calcium buildup, the IHA must replace the coolers annually. He also explained that the soil’s high salt content causes housing foundations and sewer systems to deteriorate. Figures 3 and 4 illustrate the damage caused to swamp coolers and foundations. Complying With Indian Hiring Preference and Davis-Bacon Act Requirements Places an Additional Burden on IHAs Certain statutes, including the Indian Self Determination and Education Assistance Act and the Davis-Bacon Act, are intended to protect and provide opportunities for specific groups. However, IHA officials and HUD officials whom we contacted believe that these statutes can make developing housing in tribal areas more costly because they have the effect of raising the cost of labor in comparison to local wage rates or restricting the supply of labor. The Indian Self Determination and Education Assistance Act of 1975 requires IHAs to award contracts and subcontracts to Indian organizations and Indian-owned economic enterprises. IHA executive directors find that complying with the requirement is difficult and believe that it adds to contractors’ time and cost to bid on work for IHAs.
The officials said that factors that undermine the requirement include a lack of qualified Indian contractors in the area, the creation of fraudulent joint ventures that are not owned or managed by Indians, and the occasional need to use qualified firms outside the region that do not understand local conditions. Under the Davis-Bacon Act, firms that contract with IHAs for housing development must pay wages that are no less than the wage rates prevailing in the local area. However, HUD officials told us that this requirement generally increases IHAs’ cost of developing housing in tribal areas. The increased cost occurs because the applicable Davis-Bacon wage rate is often based on HUD’s wage surveys of large unionized contractors who are based in larger metropolitan areas; therefore, the rate is about $10.00 per hour higher than the wage rate prevailing in the local tribal area. Officials of the Chemehuevi Housing Authority, in California, told us that because of high Davis-Bacon wage rates, their cost to develop a single-family home ranges between $85,000 and $98,000. Using the prevailing local rate of approximately $6.50 to $8.00 per hour, they estimate the development cost to be between $65,000 and $80,000. Neglect and Vandalism Draw on Maintenance Budgets That Are Shrinking Because of Unpaid Rent If housing units are abused through neglect or vandalism and not well maintained on an ongoing basis, costly major repairs can be needed. These avoidable repairs put pressure on maintenance budgets that are shrinking because of the high rate of unpaid rent in tribal areas. Moreover, maintaining assisted housing for Native Americans is an increasingly difficult challenge because of its age—44 percent of the stock was built in the 1960s and 1970s.
For housing units in HUD’s Rental Housing Program for Native Americans, the Urban Institute reported that 65 percent of the IHA officials responding to its telephone survey identified tenants’ abuse and vandalism of vacant homes as the factors contributing most to maintenance costs. The Urban Institute also reported that fewer than 10 percent of the officials identified any of the survey’s other contributors to maintenance costs, including poor materials, poor construction, and a lack of preventive maintenance. For units under the Mutual Help Program (which are owned or leased by the residents), the Urban Institute reported that IHA officials cited residents’ neglect to perform needed maintenance as accounting for 30 percent of poor physical conditions of this segment of the housing stock. Our discussions with IHA officials reinforce these findings. The executive director at the Gila River Housing Authority told us that vandalism by juveniles was a major problem for him and that because the tribal area borders Phoenix, Arizona, it is more susceptible to gang activity and violence. Chemehuevi Housing Authority officials pointed out that once a family that has neglected to perform expected maintenance moves out and the tribe turns the housing back to the IHA, the housing authority often incurs a large and unexpected rehabilitation cost before it can lease the unit to another family. Figure 5 shows the effects of vandalism at the Gila River Indian Reservation. The high level of unpaid rent among assisted Native American families has exacerbated the problem of accomplishing needed maintenance. Routine and preventive maintenance is an operating expense that an IHA pays for with rental income and an operating subsidy that HUD provides to help defray expenses. However, according to HUD, appropriations for these subsidies have not been sufficient to cover all operating expenses not covered by rental income. 
Therefore, shortfalls in rental income will generally result in less funding to spend on maintenance. In recent years, these shortfalls have been at high levels for both the Rental Housing and the Mutual Help programs. For example, the Urban Institute reported that at the end of 1993, 36 percent of all tenants in the rental program were delinquent in their rent payments, and the cumulative accounts receivable for tenants’ rent averaged $208 per rentable unit. In contrast, the average delinquency rate in public housing is only 12 percent. To counter shortfalls in rental income, some IHAs enforce strong eviction policies. Others are either unwilling or unable to do so; officials of these IHAs attributed their ineffective policies to such factors as tribal court systems that do not support evictions, the conflict of such policies with tribal culture, and their own lack of forceful management. Regardless of the reason, these shortfalls, coupled with insufficient operating subsidies, likely will lead to deferred maintenance and higher costs for major repairs in the future. Native American Housing Assistance and Self-Determination Act of 1996 Could Initially Increase HUD’s Workload By establishing a block grant mechanism to replace all housing assistance that tribes currently receive indirectly through their IHAs and HUD—except for the funding set aside for Native Americans in the Community Development Block Grant Program—the act allows tribes greater discretion to address their housing needs. Moreover, block grants will ensure that Indian housing is separate from public housing not only administratively in HUD’s line organization—as the 1988 legislation accomplished—but also financially. The new statute stipulates that tribes will not receive less housing assistance under the new law than they did in fiscal year 1996 for the modernization of existing rental housing and operating subsidies to pay for expenses not covered by rental income.
Among other provisions, the new statute also provides the following: The existing housing authority or some other entity designated by a tribe must administer the block grant funds and develop 1-year and 5-year housing plans for HUD’s approval. The 1-year plan must present (1) a statement of housing needs, (2) the financial resources available to the tribe, and (3) a description of how the available funds will be used to leverage additional resources. In distributing the block grants, HUD shall consider, among other factors, (1) the number of low-income housing units that a tribe already owns or operates, (2) the extent of poverty and economic distress and the number of Native American families, and (3) other objectively measurable conditions specified by HUD and the tribe. These conditions could include the relative affluence and other sources of income, if known, of the tribe. The Secretary of HUD shall monitor block grant recipients—who must submit performance reports to the Secretary—for compliance with the law and take corrective actions when a tribe or its housing entities do not comply with the program’s requirements. A tribe can pledge future grant funds to secure a guaranteed loan and can lease land held in trust for up to 50 years. The new act could, at least initially, cause HUD’s oversight workload to increase. One reason for this is that the number of entities receiving funding will likely rise: Under current law, HUD funds only 189 IHAs, while under the new law, HUD may have to fund all 550 tribes independently. Both HUD and IHA officials we contacted believe that tribes will abandon some “umbrella” IHAs—those that serve more than one tribe—that have not performed well. And some tribes will simply choose to manage their own housing assistance programs. 
HUD’s Deputy Assistant Secretary for Native American Programs believes that new requirements needing oversight, such as the housing plans, and tribes’ new opportunities, such as the borrowing program and the ability to lease land for up to 50 years, will put added pressure on HUD’s field offices to work closely with the grant recipients. He said that during the first years after the new act takes effect, HUD will need to monitor all tribes or housing entities to determine their initial understanding of and compliance with the new statute and its provisions. Other HUD officials and Indian housing officials we contacted at two IHAs generally viewed the new legislation positively and said that the most attractive feature of the act is the new flexibility it offers for developing housing. They also cited the availability of a 50-year lease—increased from the current 25 years—which will provide lenders an incentive to enter into mortgage agreements with Native Americans who lease land with the intention of building a home. The executive director of the National American Indian Housing Council said that the annual plans will require a kind of housing needs assessment that heretofore has not been done. She believes, moreover, that the new program’s success will depend on the extent to which HUD is effective in reviewing the required plans, monitoring the tribes’ implementation of the plans, and acting on potential noncompliance. Many Tribes Receive Gaming Revenues, but HUD Does Not Consider Them Directly When Determining Housing Assistance About 177, or half, of the 356 federally recognized tribes in the continental United States operated gaming facilities as of July 1996.
As we reported in August 1996, our analysis of financial statements submitted for fiscal years 1994 and 1995 by the 85 tribes that responded by May 5, 1996, to the National Indian Gaming Commission’s (NIGC) request for information shows that Indian gaming activities provide an additional, and in many cases significant, source of revenues. These 85 tribes operated 110 gaming facilities and earned a total net income (after all expenses) of $1.5 billion. HUD does not take these revenues into account directly as it assesses IHAs’ needs for federal housing assistance because the Department has not obtained the financial information describing tribes’ gaming activities or financial resources. Therefore, HUD cannot relate the revenues and assets to tribes’ housing needs. Moreover, tribes’ income from gaming, as from other sources, accrues directly to the tribes rather than the IHAs and can be used to provide a wide range of economic assistance to tribal communities. Thus, to the extent that gaming revenues enhance tribes’ overall economic well-being, HUD considers them indirectly in its funding allocations. For Many Tribes, Gaming Revenues Are Significant The 85 tribes reported total revenues from their gaming facilities of $3.8 billion, from which they derived their net revenues of $1.5 billion. Of this amount, 74 tribes received about $1.2 billion in transfers from their gaming facilities. The Indian Gaming Regulatory Act requires that tribes use these transferred revenues for tribal, governmental, or charitable purposes in accordance with a revenue allocation plan approved by BIA. The allocation plan may also provide for the distribution of a portion of the net income directly to individual tribal members. Our analysis of the income transferred from facilities to the 74 tribes shows that transfers ranged from about $17,000 to over $100 million. The remaining 11 tribes did not receive transfers from their gaming facilities. 
More than two-thirds of the 85 tribes received $10 million or less from gaming. Transfers to the tribes may not have occurred for several reasons, including a facility’s not having net income at year-end, not having accumulated earnings from prior years, or retaining all of the year’s net income. About $300 million was retained by the gaming facilities. Figure 6 shows the results of our analysis of the transfers. Our analysis of the distribution to individual tribal members shows that BIA had approved 34 of the reported 177 tribes with gaming facilities to distribute a portion of their net revenues directly to tribal members. The proportion of net revenues to be distributed ranged from 2 percent to 69 percent. HUD Is Not Required and Lacks the Data to Take Gaming Revenues Directly Into Account For the most part, housing needs are the primary factor that HUD is required to consider when allocating funds to IHAs for housing programs. And tribes’ economic well-being, to the extent that HUD can determine it, is the deciding factor when allocating community development funding to them. HUD provides this funding to IHAs on the basis of projected operating expenses or applications for grant funds that demonstrate housing needs in accordance with a specified formula used to allocate funding across all housing authorities. By regulation, HUD also awards new housing development funds to IHAs through a competitive process based on factors such as housing needs, the length of time since the last funding award, occupancy levels of existing units, and the current mix and status of units under construction. For community development programs—such as the Community Development Block Grant Program and the HOME Investment Partnership Program—HUD provides funding directly to tribes instead of to IHAs. For these programs, HUD officials explained that tribes compete for these funds on the basis of the number of low-income persons needing assistance.
According to HUD officials, tribes that generate significant income from tribal businesses (including gaming) generally do not have a large enough number of low-income persons and, therefore, do not rank high enough to receive funds in these programs. The household income of low- and moderate-income beneficiaries of funds from the Community Development Block Grant Program, for example, generally must not exceed 80 percent of the median income for the area. In reporting these income levels to HUD, the applicants are required to identify distributions, if any, of tribal income (from gaming or other sources) to families, households, and individuals. The Deputy Assistant Secretary told us that none of these funding criteria requires HUD to consider the specific amount and use of revenues that tribes receive from gaming or other sources. However, we believe that such information could be available to HUD if the Department took the necessary actions to obtain it. Under Block Grants, HUD Could Compare Housing Needs With Business Revenues If They Were Known The Native American Housing Assistance and Self-Determination Act of 1996 does not specifically require HUD to take gaming or other revenues into account for funding purposes. Nevertheless, the act requires HUD to develop a housing assistance allocation formula that reflects the housing needs of Indian tribes and is based on (1) the number of low-income units owned or operated pursuant to a contract between the Department and IHAs; (2) the extent of poverty and economic distress within tribal areas; and (3) other objectively measurable conditions specified by HUD, which, we believe, could include business revenues. HUD’s Deputy Assistant Secretary for Native American Programs told us that if HUD is required or chooses to use a tribe’s gaming revenues to offset its need for housing assistance, then certain other information also would need to be known and factored into the funding allocation decision. 
For example, consistent treatment of all tribes would require that HUD also know the amounts of significant tribal revenues from other sources, such as land leases and mineral rights sales, as well as from other federal programs that assist Native Americans. Agency Comments We provided a draft of this report to HUD, the National American Indian Housing Council, the Gila River Indian Housing Authority, and the Chemehuevi Housing Authority for review and comment. We discussed the report with officials of each agency, including the Deputy Assistant Secretary for Native American Programs and the executive directors of the Housing Council and the two IHAs. These officials commented that the report accurately described the results of HUD’s Indian housing programs and their special environmental and cultural conditions. Scope and Methodology For information on Native American housing programs, including funding and results and the factors that complicate HUD’s delivery of those programs, we obtained data from various sources. We reviewed pertinent legislation, HUD’s documentation on the programs, its regulations on Indian housing, reports by its Office of Inspector General, and reports by the Urban Institute’s Center for Public Finance and Housing. We discussed issues with officials from HUD’s headquarters Office of Native American Programs in Denver, Colorado, and field offices in Denver, Colorado, and Phoenix, Arizona. We also interviewed HUD’s Rocky Mountain District Inspector General for Audit and the Executive Director of the National American Indian Housing Council. In addition, we visited the Gila River Housing Authority, Sacaton, Arizona, and the Chemehuevi IHA, Havasu Lake, California, to meet with officials, gain a perspective on HUD’s Indian housing programs, and observe the condition of housing units. We drew from our August 1996 report on Indian gaming for information on the extent of gaming on tribal lands, its profitability, and the distribution of revenues.
To determine gaming’s impact on HUD’s funding allocation decisions, we reviewed the regulations governing HUD’s funding of Indian housing and spoke with officials from HUD, primarily HUD’s Deputy Assistant Secretary for Native American Programs. We performed our work from August 1996 through February 1997 in accordance with generally accepted government auditing standards. As arranged with your offices, we plan to send copies of this report to other appropriate Senate and House committees; the Secretary of HUD; the Commissioner of Indian Affairs, BIA; and the Director, Office of Management and Budget. We will make copies available to others on request. Please call me at (202) 512-7631 if you or your staff have any questions. Major contributors to this report are listed in appendix II. Funding and Results for Major Housing Programs for Native Americans The funding for and accomplishments of HUD’s housing and community development programs for Native Americans have been steady or increasing in proportion to the increases in the Department of Housing and Urban Development (HUD) appropriations over the 1986-95 decade, as discussed below. Indian Housing Development Program Is the Primary Vehicle for Funding HUD’s Indian Housing Development Program consists of two components—the Rental Housing Program and the Mutual Help Program. From 1961, when Native Americans first began to receive assistance under the Housing Act of 1937, through fiscal year 1995, HUD provided Indian housing authorities (IHAs) over $5 billion (in nominal dollars) for Indian housing programs and constructed over 82,000 units. About one-third of the construction took place during the 10-year period between 1986 and 1995. As shown in the figures below, over the 10-year period, HUD provided almost $2.4 billion to 189 IHAs specifically to develop housing for low-income families. With these funds, the IHAs have built or planned to build over 24,000 housing units.
Sixty-five percent of these units, 15,721, were Mutual Help units and the remainder were Low-income Rental units. Modernization Program Also Supports IHAs’ Housing Under its housing modernization program, HUD provides funds to IHAs to rehabilitate properties in deteriorated physical condition and to upgrade the management and operation of existing Indian housing developments. HUD allocates modernization funds under both the Comprehensive Grant Program (CGP) to IHAs that own or operate 250 or more units and the Comprehensive Improvement Assistance Program (CIAP) to IHAs with fewer than 250 units. Overall, since 1986, congressional appropriations to HUD for the modernization of all public housing have steadily increased, and IHAs have benefitted proportionately. However, as is shown in table I.1, after HUD implemented the CGP in fiscal year 1992, funding for CIAP declined, and in fiscal year 1995, funding for both programs declined. Operating Subsidies Provide Funds for IHAs’ Ongoing Expenses Section 9 of the United States Housing Act of 1937, as amended, authorizes HUD to subsidize the operation of low-income public housing projects. Because rental income may not be sufficient to cover all the expenses incurred by a housing authority in its operation and maintenance of rental housing, HUD provides such subsidies to IHAs through its Performance Funding System on the basis of their projected operating expenses. The subsidy amount for an IHA is the difference between the projected estimate of operating costs and an estimate of income from rents and other sources. Overall, HUD has provided just over $500 million in operating subsidies to IHAs for Indian housing programs between 1985 and 1995. As shown in table I.2, since 1992 the trend in HUD’s funding for operating subsidies for the Mutual Help and Rental Housing programs has been upward, with a sharp increase (almost 34 percent) for the latter program between 1993 and 1994. 
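The Performance Funding System calculation described above is straightforward arithmetic: the subsidy is the shortfall between projected operating costs and projected income. A minimal sketch follows, using hypothetical figures rather than actual HUD data:

```python
def operating_subsidy(projected_costs: float, projected_income: float) -> float:
    """Subsidy = projected operating costs minus projected income from
    rents and other sources; no subsidy if income covers costs."""
    return max(projected_costs - projected_income, 0.0)

# Hypothetical IHA with $1.2 million in projected operating costs
# and $450,000 in projected rental and other income.
print(operating_subsidy(1_200_000, 450_000))  # prints 750000
```

An IHA whose projected income meets or exceeds its projected costs would receive no subsidy under this calculation.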
Well over 60 percent of the operating subsidy funding HUD provided IHAs supported the Rental Housing Program. Loan Guarantee Program Is Another Source of Funding Section 184 of the Housing and Community Development Act of 1992 authorized the Indian Home Loan Guarantee Program to give Indian families and IHAs access to sources of private financing that might otherwise not be available without a federal guarantee. HUD uses the funds to guarantee loans for constructing, acquiring, or rehabilitating one- to four-family dwellings per loan. The guaranteed loans must be for standard housing (i.e., housing conforming to HUD’s standards) located on Indian trust lands or in Native American tribal areas. The approval of guarantees is based on applicants’ having a satisfactory credit record, enough cash to close the loan, and sufficient steady income to make monthly mortgage payments without difficulty. During fiscal year 1995—the program’s first year of operation—HUD used the program’s appropriation of $3 million to guarantee $22.5 million in home loans in tribal areas. This funding guaranteed 74 homeownership loans for individuals and 403 loans administered by IHAs, with the loans ranging from a low of $21,000 to a high of $175,000. Home Investment Partnership Program Develops Affordable Housing The National Affordable Housing Act of 1990 created HUD’s HOME Investment Partnership to expand the supply of decent and safe affordable housing. HUD awards HOME funds competitively to federally recognized Indian tribes and Alaska Native villages. These governments, in turn, make loans or grants for rehabilitating, acquiring, or newly constructing both owner-occupied and rental housing. Recipients must have a low income (an adjusted family income of 80 percent or less of the area’s median income), and in the case of rental housing, some tenants must have a very low income (50 percent or less of the area’s median income).
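The income tests described above (80 percent of area median income for "low income," 50 percent for "very low income") can be illustrated with a short sketch; the dollar figures are hypothetical:

```python
def income_category(adjusted_family_income: float, area_median_income: float) -> str:
    """Classify a household against the HOME program thresholds:
    at or below 50% of area median = very low income;
    at or below 80% = low income; above 80% = ineligible."""
    ratio = adjusted_family_income / area_median_income
    if ratio <= 0.50:
        return "very low income"
    if ratio <= 0.80:
        return "low income"
    return "ineligible"

# Hypothetical area median income of $40,000
print(income_category(30_000, 40_000))  # prints low income (75% of median)
print(income_category(18_000, 40_000))  # prints very low income (45% of median)
```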
The HOME program first became available to Native Americans in 1992. Since then, under the program HUD has awarded a total of $51 million to Indian tribes, resulting in 560 new units constructed, 1,400 units rehabilitated, and 178 existing units purchased. Indian Community Development Block Grant Program Provides Needed Funds In 1978, HUD began providing IHAs with Indian Community Development Block Grants as a set-aside in the overall Community Development Block Grant for cities and towns across the country. The block grant program’s objective is to help IHAs and tribes develop viable communities that include decent housing, suitable living environments, and economic opportunities—primarily for persons of low and moderate income. HUD’s regulations provide for two categories of grants, “imminent threat grants” and “single-purpose grants.” For the first type, the HUD Secretary can set aside up to 5 percent of each year’s allocation for noncompetitive, first-come, first-served grants to eliminate problems that pose an imminent threat to public health and safety. The second type, single-purpose grants, constitutes the remainder of the funding; HUD provides these grants on the basis of annual competition governed by requirements and criteria set forth in a “notice of funds availability” published in the Federal Register. As funding for the total Community Development Block Grant Program has increased, so has the amount set aside for Native Americans. This amount has grown in real terms from $36 million in fiscal year 1986 to $46 million in fiscal year 1995. For fiscal year 1995, the set-aside for grants addressing imminent threats was $1.5 million, with $44.5 million remaining for single-purpose grants. Nationally, HUD received 217 applications from tribes/tribal organizations for 267 separate projects in 1995. 
As shown in table I.4, of those approved, the most requested projects were for infrastructure and buildings—accounting for about 87 percent of all the projects approved and funded. The three types of projects that directly address housing—new development, rehabilitation, and land to support new housing—received a very small portion of the funding: $5 million, or about 13 percent, of the grant funds approved. Major Contributors to This Report Resources, Community, and Economic Development Division Lawrence J. Dyckman, Associate Director Eric A. Marts, Assistant Director Leslie A. Smith Luis Escalante, Jr. Willie D. Watson
Pursuant to a legislative requirement and a congressional request, GAO reviewed the Department of Housing and Urban Development's (HUD) housing programs for Native Americans, focusing on the: (1) funding history and measurable results of the housing programs administered by HUD for Native Americans in or near tribal areas; (2) significant factors that complicate and make costly the provision of housing assistance to Native Americans in or near tribal areas; (3) potential initial impact of the recently enacted Native American Housing Assistance and Self-Determination Act of 1996 on HUD's oversight of housing assistance to Native Americans living in or near tribal areas; and (4) the extent to which gaming occurs in tribal areas, its profitability, and whether HUD takes revenues from gaming into account when allocating funding to Native American housing authorities. GAO noted that: (1) from fiscal year (FY) 1986 through FY 1995, HUD provided $4.3 billion for housing and community development in tribal areas; (2) of this amount, HUD provided $3.9 billion to approximately 189 Indian housing authorities to develop and maintain affordable housing and assist low-income renters; (3) in this period, the authorities used the funds to construct over 24,000 single-family homes, operate and maintain existing housing, and encourage other development; (4) over the decade, HUD also provided direct block grants totalling over $424 million to eligible tribes for community development and mortgage assistance; (5) many factors complicate and make costly the development and maintenance of affordable housing for Native Americans; (6) these factors include the remoteness and limited human resources of many Indian housing authorities and the Indian communities they serve, land-use restrictions and the inhospitality of the land, the difficulty that contractors and Indian housing authorities have in complying with statutory requirements to give hiring preference to Native Americans, and 
vandalism and neglect, which draw on scarce maintenance funds; (7) HUD believes that, initially, its workload could increase as it monitors tribes' compliance with the new Indian housing legislation set to take effect on October 1, 1997; (8) the new act changes the way HUD provides housing assistance to Native Americans by requiring block grants to each of the over 550 federally recognized tribes instead of categorical grants to the 189 Native American housing authorities that currently exist; (9) moreover, to qualify for the block grants, tribes must submit housing plans for HUD's approval; (10) although the law requires HUD to conduct only a limited review of the tribes' plans, HUD officials believe that this activity will, for the first year at least, be a labor-intensive function for HUD field offices; (11) of the 356 Indian tribes in the continental United States alone, 177 operated 240 gaming facilities as of July 1996; (12) according to 1994 and 1995 data submitted by 85 of these tribes, their gaming revenues after expenses totalled about $1.5 billion; (13) HUD officials told GAO that they do not take gaming revenues directly into account when allocating funds because, in addition to these revenues, HUD would need to know other business revenues and federal assistance available to each tribe in order to determine a fair allocation; and (14) to the extent that HUD takes a tribe's general economic well-being and housing needs into account, it is indirectly factoring gaming revenues into its funding allocation decisions.
Background On May 22, 1998, President Clinton issued a pair of directives to guide federal efforts to address critical infrastructure vulnerabilities. Presidential Decision Directive 62 (PDD 62) highlighted the growing threat of unconventional attacks against the United States. It described a new and more systematic approach to fighting terrorism through interagency efforts to prepare for response to incidents involving weapons of mass destruction. Presidential Decision Directive 63 (PDD 63) further directed federal agencies to conduct risk assessments and planning efforts to reduce exposure to attack. Specifically, the assessments were to consider attacks that could significantly diminish the abilities of (1) the federal government to perform essential national security missions and ensure the general public health and safety; (2) state and local governments to maintain order and to deliver minimum essential public services; and (3) the private sector to ensure the orderly functioning of the economy and the delivery of essential telecommunications, energy, financial, and transportation services. PDD 63 called for the government to complete these assessment efforts no later than May 2003. According to the Office of Intelligence and Security’s (OIS) Associate Director for National Security (hereafter referred to as the Associate Director), the Transportation Infrastructure Assurance (TIA) program is, in part, DOT’s effort to meet these Presidential Decision Directive requirements. RSPA concentrates on multimodal issues (research that applies to more than one mode of transportation) that affect the entire U.S. transportation system rather than on a specific sector of the system. RSPA’s Office of Innovation, Research and Education is responsible for managing the TIA program. The Volpe National Transportation Systems Center, located in Cambridge, Massachusetts, is the research arm of RSPA and is conducting the program’s vulnerability assessments. 
OIS is the key transportation security stakeholder within DOT responsible for analyzing, developing, and coordinating departmental and national policies addressing national defense, border security, and transportation infrastructure assurance and protection issues. Other OIS responsibilities include: coordinating with the public and private sectors, international organizations, academia, and interest groups regarding issues of infrastructure protection; acting as the Secretary of Transportation’s liaison with the intelligence, law enforcement, and national defense communities and assisting departmental organizations in establishing and maintaining direct ties with those communities; and serving as the Secretary of Transportation’s primary advisor on significant intelligence issues affecting the traveling public, the transportation industry, and national security. According to OIS’s Associate Director, OIS has historically been involved in the department’s transportation security research efforts. He added that OIS’s lead role in fulfilling the department’s critical infrastructure responsibilities, including the implementation of Presidential Decision Directives addressing critical infrastructure vulnerabilities, is likely to change as the roles and responsibilities of the Transportation Security Administration (TSA) and the newly created Department of Homeland Security are defined. Congress established TSA in November 2001 to be responsible for ensuring transportation security, including identifying and undertaking research and development activities necessary to enhance transportation security. For fiscal year 2003, TSA received $110 million to fund transportation security research activities for all modes of transportation. 
Further, on November 25, 2002, the President signed the Homeland Security Act of 2002, which established the Department of Homeland Security with the responsibility of, among other tasks, coordinating efforts in securing America’s critical infrastructure. On March 1, 2003, TSA became part of the newly created Department of Homeland Security. TIA Program Is Scheduled to End in December 2003 with Completion of Four Vulnerability Assessments The TIA program is scheduled to end in December 2003, resulting in the completion of four vulnerability assessments aimed at identifying and finding ways to mitigate threats against the nation’s transportation infrastructure. RSPA officials said that two of these assessments (the interdependency of the transportation system with other critical infrastructures and transportation and logistical requirements for emergency response teams in dealing with weapons of mass destruction) were selected, in part, to meet DOT’s PDD 62 and 63 requirements, and are scheduled for completion in mid-2003 to meet the deadlines outlined in the presidential directives. The other two assessments (the feasibility of alternative backup systems for the global positioning system, and an assessment of the options to transition from hazardous materials transportation security guidelines to security requirements) were selected based upon a perceived need for assessments in these areas as defined by officials from RSPA’s Office of Hazardous Materials Safety and the Volpe National Transportation Systems Center, and are scheduled for completion in December 2003. RSPA’s Volpe Center is conducting the TIA program’s four assessments and has conducted research related to transportation infrastructure since 1996. (See app. I for a summary of the Volpe Center’s Workshops and Studies related to transportation infrastructure assurance from fiscal years 1996 to 2000.) Figure 1 shows the TIA program’s beginning and completion dates by specific vulnerability assessment. 
RSPA officials told us that the agency has no plans to include any additional or future assessments under the TIA program. The TIA program is assessing four vulnerabilities: Interdependency of the transportation system with other critical infrastructures: According to TIA program documentation, the development of alternative fuels, changes in telecommunication technologies, and the evolving financial role of the federal government in the security of privately operated transportation systems are affecting the relationship between the nation’s transportation infrastructure and some of the nation’s other critical infrastructures. The purpose of this assessment is to describe the current and evolving interdependence between the nation’s transportation infrastructure and some of the nation’s other critical infrastructures, including energy, electronic commerce, banking and finance, and telecommunications. For example, the nation’s air traffic control system relies on telecommunications to manage the safety and efficiency of air transportation, as shown in figure 2. Researchers plan to determine the costs, in terms of economic disruption and loss of lives, associated with terrorists exploiting transportation infrastructure vulnerabilities. Transportation and logistical requirements for emergency response teams in dealing with weapons of mass destruction: The purpose of this assessment is to evaluate the transportation and logistics assets required in responding to terrorist activities. The assessment will include an analysis of transportation operations and procedures, personnel, supplies, and transportation assets such as vehicles, containers, and pallets. Specifically, researchers plan to analyze the institutional and economic implications of terrorist activities involving weapons of mass destruction in order to develop emergency transportation action plans and compile emergency transportation procedure best practices.
Emergency teams were transported to respond to the terrorist attack on the World Trade Center on September 11, 2001, as shown in figure 3. Feasibility of alternative backup systems for the global positioning system: The purpose of this assessment is to provide a continuation of the August 2001 report by the Volpe National Transportation Systems Center, Vulnerability of the Transportation Infrastructure Relying On The Global Positioning System. The report concluded that the global positioning system is vulnerable to both intentional and nonintentional disruption, and identified a need for a backup for the global positioning system. To follow up on the August 2001 report, researchers plan to analyze and describe the performance, cost, and practicality of backup systems and procedures. Figure 4 shows a picture of a global positioning satellite. Options to transition hazardous materials transportation security guidelines to security requirements: The purpose of this assessment is to evaluate the tradeoffs in the transportation of hazardous materials that exist between security, economic, proprietary, and delivery factors. RSPA plans to provide an analysis and description of these tradeoffs in different threat scenarios for different modes of transportation. Figure 5 provides an overview of the types of transportation being assessed. RSPA plans to work with OIS to disseminate the results of the program to private transportation system operators and to stakeholders in DOT and other federal agencies through 11 formal reports, presentations, workshops, and the Internet. Table 1 provides an overview of the program’s planned products and progress to date. Congress appropriated $1 million each year to RSPA for the TIA program in fiscal years 2001, 2002, and 2003. Figure 6 provides an overview of the TIA program funding for fiscal years 2001 through 2003 for each of the four vulnerability assessments.
RSPA Has Not Fully Coordinated Its Activities with OIS in Selecting the Vulnerabilities to Be Assessed and in Implementing the Assessments for the TIA Program RSPA has not fully coordinated its activities with OIS—DOT’s key transportation security stakeholder—in selecting the vulnerabilities to be assessed or in implementing the assessments for the TIA program. RSPA coordinated with OIS in selecting two vulnerability assessments in fiscal year 2001. Specifically, in fiscal year 2001, RSPA worked with OIS to select one vulnerability for assessment and notified OIS of its selection of a second vulnerability for assessment. RSPA, however, did not coordinate with OIS officials in the selection of two additional vulnerability assessments in fiscal year 2002. RSPA’s coordination with OIS during the program’s implementation has been limited to only one of the four vulnerability assessments under review. RSPA’s Coordination with OIS in the Selection of the Vulnerabilities to Be Assessed in the TIA Program RSPA coordinated with OIS and used various criteria, such as PDD 62 and 63, in selecting only two of the four vulnerabilities to be assessed in the TIA program. For example, RSPA consulted with OIS to select one of the two vulnerabilities for assessment in fiscal year 2001 and notified OIS of its selection of a second vulnerability. Specifically, in a memorandum dated March 6, 2001, OIS identified and proposed a list of critical infrastructure protection research requirements for assessment and requested that RSPA address them as a high priority. In this initial proposal, the Director of OIS said that significant OIS involvement would be required to effectively implement the program given its responsibilities for defining transportation security vulnerabilities, ensuring that vulnerability assessments are conducted, and implementing actions to mitigate those vulnerabilities.
On April 9, 2001, RSPA issued a memorandum to OIS outlining its research agenda for fiscal year 2001 and stating that OIS’s involvement in assuring the program’s quality, credibility, and review was critical. This memorandum confirmed RSPA’s plans to assess the interdependency of the transportation system with other critical infrastructures, as suggested by OIS’s proposed list, and notified OIS of RSPA’s intention to conduct a second assessment—the transportation and logistical requirements for emergency response teams in dealing with weapons of mass destruction—that was not included on OIS’s list. In the aftermath of the terrorist attacks of September 11, 2001, RSPA issued a solicitation on behalf of all DOT modes for additional transportation security technology research and concepts to be included in the TIA program or related transportation security programs. OIS officials participated with RSPA in reviewing the proposals received in response to the solicitation. However, according to the Associate Administrator of RSPA’s Office of Innovation, Research, and Education (hereafter referred to as the Associate Administrator), DOT did not receive the funds to pursue any of these proposals. During fiscal year 2002, RSPA did not coordinate with OIS to determine what additional assessments to select for inclusion in the program. Instead, RSPA selected two transportation vulnerabilities for assessment under the program after holding discussions with Volpe Center researchers and officials from RSPA’s Office of Hazardous Materials Safety.
While the Associate Director of OIS said he was unaware that additional vulnerabilities had been selected for assessment in fiscal year 2002 prior to our discussions with him regarding the status of the program, he noted that both of these assessments—on the feasibility of alternative backup systems for the global positioning system, and an assessment on options to transition hazardous materials transportation security guidelines to security requirements—were valid and of high priority. According to OIS and RSPA officials, this lack of coordination resulted, in part, from disagreements and misunderstandings about the other’s respective role in the program. As indicated by a series of e-mail communications between OIS and RSPA officials during the period between October 2001 and January 2002, questions about the respective roles of OIS and RSPA in the program’s management, specific research areas, and the logistics of this research were raised on numerous occasions with no apparent resolution. Neither RSPA nor OIS was able to provide us with documentation to show that these issues were resolved. (See app. II for specific stakeholders involved and criteria used to select the vulnerabilities chosen for assessment under the TIA program in fiscal years 2001 and 2002.) RSPA’s Coordination with OIS in the Implementation of the Assessments in the TIA Program RSPA’s coordination with OIS, DOT’s security stakeholder, during the implementation of the TIA program has been limited to one of the four vulnerability assessments. While OIS has participated in meetings regarding the assessment of the options to transition hazardous materials transportation security guidelines to security requirements, RSPA did not similarly involve OIS in the program’s three other vulnerability assessments.
OIS and RSPA officials said that this lack of coordination during the implementation of the program resulted, in part, from continued disagreements and misunderstandings about the other’s respective role in the program. Further, OIS’s Associate Director said that because of OIS’s lack of involvement in the TIA program, he was not aware of the program’s progress to date and therefore expressed uncertainty about whether the program’s research is meeting the requirements of PDD 62 and 63. OIS’s Associate Director also said that OIS’s working relationships with private industry stakeholders might have helped RSPA obtain industry-sensitive information for the program’s assessments. RSPA officials acknowledged that a primary challenge of the TIA program involves obtaining information on industry-specific, competition-sensitive issues. For example, RSPA officials said that private sector owners and operators, such as those from the oil industry, are cautious about releasing proprietary information because of the possibility that this information could be used by (1) business rivals to gain a competitive advantage, (2) terrorists to harm and destroy critical infrastructure, and (3) the federal government to pursue further regulations of the industry. As a result, TIA program researchers told us that they are limited in their ability to identify specific threats and weaknesses relating to some of the specific vulnerabilities under assessment. According to RSPA’s Associate Administrator, because of these limitations, the TIA program is, in some instances, examining vulnerability issues on a conceptual level rather than through specific case studies of industry infrastructure.
For example, instead of assessing the vulnerabilities of specific privately owned infrastructures, such as oil refineries, RSPA is addressing some critical details of crude oil transport using ports in Louisiana and Texas to illustrate the complexities in defining the interdependency vulnerabilities between the nation’s transportation and energy infrastructures. (See app. III for a summary of OIS involvement in the implementation of the TIA program, as well as a listing of all of the significant stakeholders reported by RSPA who were consulted during the implementation of the TIA program.) We discussed our findings about the lack of coordination with RSPA’s Associate Administrator and OIS’s Associate Director and suggested they take steps to increase their coordination efforts. They agreed that increased coordination would be beneficial. Specifically, they agreed to hold bi-monthly updates on the progress of each of the vulnerability assessments, discuss program task methodologies and approaches, and identify options for addressing the challenges facing program researchers in conducting the program’s vulnerability assessments. The first update was held in March 2003. Furthermore, RSPA’s Associate Administrator agreed to provide TSA’s Director for Threat Assessment and Risk Management with information on the TIA program’s findings, challenges, and lessons learned. In our discussions with TSA’s Director for Threat and Risk Assessment, she said that such information regarding the TIA program would be helpful in guiding TSA’s future efforts in planning and conducting transportation security research. Because of actions taken by RSPA and OIS to improve coordination we are making no recommendations at this time. Agency Comments and Our Evaluation We provided a copy of the draft report to DOT and RSPA officials who agreed with the contents of the report and provided technical clarifications that we incorporated into the report. 
They did not provide written comments on the report. We will send copies of this report to the Secretary of Transportation, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have questions about this report, please call me at (202) 512-2834 or Chris Keisling at (404) 679-1917. Other key contributors included Colin Fallon, Bert Japikse, Steve Morris, and Jason Schwartz. Appendix I: Volpe National Transportation Systems Center Studies Related to Transportation Infrastructure Assurance Studies (beginning in fiscal year 1996): Supervisory Control and Data Acquisition Vulnerabilities (1997) National Air Space Vulnerabilities (1997) Traffic (Surface) Central Systems Vulnerabilities (1997) White Papers: Electromagnetic Threats to Rail/Transit Operations (1997) Criminal Use of Transportation Infrastructure (1997) Railroad Bridges and Tunnels Vulnerability (1998) Railroad Signaling and Control Vulnerability (1998) Intermodal Cargo Security Best Practices (1999) Transportation Infrastructure Assurance Research and Development Plan (1999 and 2000) Workshops: Emerging Issues in Transportation Information Infrastructure Security (1996) Global Positioning Study Interference and Mitigation (1998) Chemical/Biological Incidents (1998) Marine Safety and Port Security (2000) Other Reports: DOT Communications (Security) Reports (2001) Global Positioning System Vulnerability Study (2001) Updated Supervisory Control and Data Acquisition (SCADA) Study (2002) Appendix II: Stakeholders Involved and Criteria Used in Selecting the Vulnerabilities Assessed Under the TIA Program Appendix III: Entities Reported by RSPA That Were Involved during the Implementation of the TIA Program
The events of September 11, 2001, increased attention on efforts to assess the vulnerabilities of the nation's transportation infrastructure and develop needed improvements in security. The Department of Transportation's (DOT) Research and Special Programs Administration (RSPA) had already begun research in this area in June 2001. The goals of RSPA's Transportation Infrastructure Assurance program are to identify, and develop ways to mitigate the impact of, threats to the nation's transportation infrastructure. DOT's Office of Intelligence and Security is responsible for defining the requirements for transportation infrastructure protection, ensuring that vulnerability assessments of transportation infrastructure are conducted, and taking action to mitigate those vulnerabilities. The House Committee on Appropriations asked GAO to determine (1) the status and anticipated results of the Transportation Infrastructure Assurance (TIA) program, and (2) the extent to which RSPA and the Office of Intelligence and Security have coordinated their activities in selecting the vulnerabilities to be assessed and implementing the vulnerability assessments for the program. DOT and RSPA officials reviewed a draft of the report, agreed with its contents, and provided technical clarifications that we incorporated. The Transportation Infrastructure Assurance program is scheduled to end in December 2003 after the completion of four transportation vulnerability assessments. Congress appropriated $1 million in each of the fiscal years from 2001 through 2003 to RSPA for the program. RSPA plans to disseminate reports, conduct workshops, and post information on the Internet to inform decision-makers in the transportation community about the results. Prior to March 2003, RSPA did not fully coordinate its activities with the Office of Intelligence and Security in selecting the vulnerabilities to be assessed, or in implementing the assessments for the program. 
We discussed this problem with officials from both offices, who agreed that closer coordination would be beneficial, particularly to discuss options for addressing the challenges facing program researchers in conducting the program's vulnerability assessments. In March 2003, officials from both offices began regular meetings to facilitate this coordination.
Background Federal laws have been enacted over the years to determine the health and environmental hazards associated with toxic chemicals and to address these problems. Even with the existence of media-specific environmental laws enacted in the early 1970s, such as the Clean Air Act and the Clean Water Act, problems with toxic chemicals continued to occur. In addition, Congress became increasingly concerned about the long-term effects of substantial amounts of chemicals entering the environment. TSCA was enacted to authorize EPA to collect information about the hazards posed by chemical substances and to take action to control unreasonable risks by either preventing dangerous chemicals from making their way into use or placing restrictions on those already in commerce. Under the act, EPA can control the entire life cycle of chemicals from their production, distribution in commerce, and use to their disposal. Other environmental and occupational health laws generally control only disposal or release to the environment, or exposures in the workplace. The scope of TSCA includes those chemicals manufactured, imported, processed, distributed in commerce, used, or disposed of in the United States but excludes certain substances regulated under other laws. TSCA also specifies when EPA may publicly disclose chemical information it obtains from chemical companies and provides that chemical companies can claim certain information, such as data disclosing chemical processes, as confidential business information. EPA’s authority to ensure that chemicals in commerce do not present an unreasonable risk of injury to health or the environment is established in five major sections of TSCA. The purpose and application of these sections are shown in table 1 and described in further detail below. 
Under the provisions for chemical testing in section 4 of TSCA, EPA can promulgate rules to require chemical companies to test potentially harmful chemicals for their health and environmental effects. However, EPA must first determine that testing is warranted based on some toxicity or exposure information. Specifically, to require such testing, EPA must find that a chemical (1) may present an unreasonable risk of injury to human health or the environment or (2) is or will be produced in substantial quantities and that either (a) there is or may be significant or substantial human exposure to the chemical or (b) the chemical enters or may reasonably be anticipated to enter the environment in substantial quantities. EPA must also determine that there are insufficient data to reasonably determine or predict the effects of the chemical on health or the environment and that testing is necessary to develop such data. Under the provisions for new chemical review and significant new use rules in section 5 of TSCA, chemical companies are to notify EPA at least 90 days before beginning to manufacture a new chemical (premanufacture notice review). Section 5 also allows EPA to promulgate significant new use rules, which require companies to notify EPA at least 90 days before beginning to manufacture a chemical for certain new uses or in certain new ways (significant new use notice review). Such rules require existing chemicals to undergo the same type of review that new chemicals undergo. For example, EPA may issue a significant new use rule if it learns that a chemical that has previously been processed as a liquid is now being processed as a powder, which may change how workers are exposed to the chemical. Section 5 of the act also authorizes EPA to maintain a list of chemicals—called the chemicals of concern list—that present or may present an unreasonable risk of injury to health or the environment. 
Under the provisions for chemical regulation in section 6 of TSCA, EPA is to apply regulatory requirements to chemicals for which EPA finds a reasonable basis exists to conclude that the chemical presents or will present an unreasonable risk of injury to health or the environment. To adequately protect against a chemical’s risk, EPA can promulgate a rule that bans or restricts the chemical’s production, processing, distribution in commerce, disposal, or use or requires warning labels be placed on the chemical. Under TSCA, EPA must choose the least burdensome requirement that will adequately protect against the risk. Under the provisions for industry reporting of chemical data in section 8(a), EPA is to promulgate rules under which chemical companies must maintain records and submit such information as the EPA Administrator reasonably requires. This information can include, among other things, chemical identity, categories of use, production levels, by-products, existing data on adverse human health and environmental effects, and the number of workers exposed to the chemical, to the extent such information is known or reasonably ascertainable. Under section 8(a), EPA issues rules to update the TSCA inventory. For example, in August 2011, EPA finalized its TSCA Chemical Data Reporting rule (previously referred to as the Inventory Update Reporting Modifications Rule); the rule requires companies to report, among other things, exposure-related information, such as production volume and use data, on chemicals manufactured or imported over a certain volume per year. In addition, section 8(d) provides EPA with the authority to promulgate rules under which chemical companies are required to submit lists or copies of existing health and safety studies to EPA. Section 8(e) generally requires chemical companies to report any information to EPA that reasonably supports a conclusion that a chemical presents a substantial risk of injury to health or the environment. 
Under the provisions for disclosure of chemical data in section 14, EPA may disclose chemical information it obtains under TSCA under certain conditions. Chemical companies can claim certain information, such as data disclosing chemical processes, as confidential business information. EPA generally must protect confidential business information against public disclosure unless necessary to protect against an unreasonable risk of injury to health or the environment. Other federal agencies and federal contractors can obtain access to this confidential business information to carry out their responsibilities. EPA may also disclose certain data from health and safety studies. Historical Challenges EPA Has Faced Regulating Chemicals under TSCA We have previously reported that EPA has historically faced challenges implementing many of the provisions of TSCA, in particular (1) obtaining adequate information on chemical toxicity and exposure through testing provisions; (2) banning or limiting chemicals; and (3) disclosing chemical data and managing company assertions of confidentiality. Obtaining Adequate Information on Chemical Toxicity and Exposure EPA has found it difficult to obtain adequate information on chemical toxicity and exposure because TSCA does not require companies to provide this information and, instead, requires EPA to demonstrate that chemicals pose certain risks before it can ask for such information. Specifically, we reported in 2005 that under section 4—provisions for chemical testing—EPA has found its authority to be difficult, time-consuming, and costly to use. The structure of this section places the burden on EPA to demonstrate certain health or environmental risks before it can require companies to further test their chemicals. While TSCA authorizes EPA to review existing chemicals, it generally provides no specific requirement, time frame, or methodology for doing so. 
Instead, EPA conducts initial reviews after it receives information from the public or chemical companies that a chemical may pose a risk. As a result, EPA has only limited information on the health and environmental risks posed by these chemicals. In our June 2005 report, we suggested that Congress consider amending TSCA to provide explicit authority for EPA to enter into enforceable consent agreements under which chemical companies are required to conduct testing, and give EPA, in addition to its current authorities under section 4 of TSCA, the authority to require chemical substance manufacturers and processors to develop test data based on substantial production volume and the necessity for testing. In addition, we reported in June 2005 that under section 5—provisions for new chemical review—TSCA generally requires chemical companies to submit a notice to EPA (known as a “premanufacture notice”) before they manufacture or import new chemicals and to provide any available test data. EPA estimated that most notices do not include any test data and that about 15 percent of them included health or safety test data. These tests may take over a year to complete and cost hundreds of thousands of dollars, and chemical companies usually do not perform them voluntarily. However, chemical companies are not generally required under TSCA to limit the production of a chemical or its uses to those specified in the premanufacture notice or to submit another premanufacture notice if changes occur. For example, companies may increase production levels or expand the uses of a chemical, potentially increasing the risk of injury to human health or the environment. Banning or Limiting Chemicals EPA has had difficulty demonstrating that chemicals should be banned or have limits placed on their production or use under section 6—provisions for controlling chemicals. 
Specifically, we reported, in June 2005, that since Congress enacted TSCA in 1976, EPA has issued regulations under section 6 to ban or limit the production or restrict the use of five existing chemicals or chemical classes out of tens of thousands of chemicals listed for commercial use on the agency’s TSCA inventory. EPA’s 1989 asbestos rule illustrates the difficulties EPA has had in issuing regulations to control existing chemicals. In 1979, EPA started considering rulemaking on asbestos. After concluding that asbestos was a potential carcinogen at all levels of exposure, EPA promulgated a rule in 1989 prohibiting the future manufacture, importation, processing, and distribution of asbestos in almost all products. Some manufacturers of asbestos products filed suit against EPA, arguing, in part, that the rule was not promulgated on the basis of substantial evidence regarding unreasonable risk. In 1991, the Fifth Circuit Court of Appeals ruled for the manufacturers and returned parts of the rule to EPA for reconsideration. In reaching this conclusion, the court found that EPA did not consider all necessary evidence and failed to show that the control action it chose was the least burdensome reasonable regulation required to adequately protect human health or the environment. Since issuing the 1989 asbestos rule, EPA has exercised its authority to ban or limit the production or use of an existing chemical only once—for hexavalent chromium, a known human carcinogen widely used in industrial cooling towers—in 1990. Disclosure of Chemical Data EPA has limited ability to publicly share the information it receives from chemical companies under TSCA. Specifically, as we reported in 2005, EPA has not routinely challenged companies’ assertions that the chemical data they disclose to EPA under section 14—disclosure of chemical data—are confidential business information, citing resource constraints. 
TSCA requires EPA to protect trade secrets and privileged or confidential commercial or financial information against unauthorized disclosures. When information is claimed as confidential business information, it limits EPA’s ability to expand public access to this information—such as sharing it with state environmental agencies and foreign governments, which potentially limits the effectiveness of these organizations’ environmental risk programs. Because EPA has not routinely challenged these assertions, the extent to which companies’ confidentiality claims are warranted is unknown. We recommended, in June 2005, that EPA revise its regulations to require that companies periodically reassert claims of confidentiality. EPA did not disagree with our recommendation but has not revised its regulations. EPA has explored ways to reduce the number of inappropriate and over-broad claims of confidentiality by companies that submit data to EPA. EPA Has Made Progress to Implement Its New Approach to Managing Chemicals, but Some Challenges Persist In March 2013, we reported on progress EPA has made implementing its new approach to manage toxic chemicals under its existing TSCA authority—particularly by increasing efforts to (1) obtain toxicity and exposure data, (2) assess risks posed by chemicals, and (3) discourage the use of some chemicals. However, the results of EPA’s activities, in most cases, have yet to be realized. We also reported that it is unclear whether EPA’s new approach will position the agency to achieve its goal of ensuring the safety of chemicals. EPA Has Increased Efforts to Collect Data on Toxicity and Exposure, but It May Take Several Years to Produce Results EPA has increased its efforts to collect toxicity and exposure data, but because rules can take years to finalize and companies need additional time to execute them, these efforts may take several years to produce results. Even with these efforts, EPA has not pursued all opportunities to obtain chemical data. 
We reported, in March 2013, that EPA has made progress by taking the following actions but continues to face challenges in collecting such data, specifically: Since 2009, EPA has proposed or promulgated rules to require chemical companies to test 57 chemicals. Specifically, EPA has required companies to test 34 chemicals and provide EPA with the resulting toxicity and other data. In addition, EPA announced, but has yet to finalize, plans to require testing for 23 additional chemicals. However, requirements under TSCA place the burden of developing toxicity data on EPA. Because rulemaking can take years, EPA has yet to obtain much of the information it has been seeking. According to EPA officials, it can take, on average, 3 to 5 years for the agency to promulgate a test rule and an additional 2 to 2 ½ years for the companies to provide the data once EPA has requested them. In addition, the toxicity data eventually obtained on the 57 chemicals may not be sufficient for EPA to conduct a risk assessment (i.e., characterize risk by determining the probability that populations or individuals so exposed to a chemical will be harmed and to what degree). Specifically, EPA may obtain data that are considered to be “screening level” information. Screening level information is collected to identify a chemical’s potential hazards to human health and the environment, but it was not intended to be the basis for assessing whether a chemical poses an unreasonable risk of injury to human health or the environment, according to agency documents describing the program. In August 2011, EPA revised its periodic chemical data reporting requirements to obtain exposure-related information for a greater number of chemicals. Under the revised requirements, EPA (1) lowered the reporting thresholds, in some cases, which will allow it to look at exposure scenarios for a larger number of chemicals than in the past and (2) shortened the reporting cycle from every 5 years to every 4 years. 
In addition, starting in 2016, the revised requirements for reporting will be triggered when companies exceed applicable production thresholds in any year during the 4-year reporting cycle. Even with the increased efforts EPA has taken to collect toxicity and exposure data, in March 2013, we reported that EPA has not pursued all opportunities to obtain such data. For example, EPA has not sought toxicity and exposure data that companies submit to the European Chemicals Agency on chemicals that the companies manufacture or process in, or import to, the United States. Under the European Union’s chemicals legislation, the European Chemicals Agency may share information it receives from chemical companies with foreign governments in accordance with a formal agreement concluded between the European Community and the foreign government, but EPA has not pursued such an agreement. In addition, EPA has not issued a rule under section 8 of TSCA requiring companies to provide EPA with the information provided to the European Chemicals Agency. EPA officials told us that the agency has not sought to obtain chemical data—from either the European Chemicals Agency or companies directly—because it does not believe that this would be the best use of EPA or industry resources. They also said that it is unclear whether these data would be useful to EPA. EPA officials believe it is a more effective use of resources to gain access to data, as needed, on a case-by-case basis from chemical companies. As a result, we recommended that EPA consider promulgating a rule under TSCA section 8, or take action under another section, as appropriate, to require chemical companies to report chemical toxicity and exposure-related data they have submitted to the European Chemicals Agency. In its written comments on a draft of our March 2013 report, EPA stated that it intends to pursue data submitted to the European Chemicals Agency from U.S. 
companies using voluntary or regulatory means as necessary but did not provide information on its planned approach to pursue such data. Consequently, the extent to which EPA plans to continue to rely on voluntary efforts to obtain the needed data is unclear. EPA Has Begun Assessing Chemical Risks, but It Is Too Early to Tell What, If Any, Risk Management Actions Will Be Taken EPA has increased its efforts to assess chemical risks, but because EPA does not have the data necessary to conduct all risk assessments, it is too early to tell what, if any, risk management actions will be taken. Even with these efforts, it is unclear how EPA is going to obtain the data necessary to continue to conduct all risk assessments. We reported, in March 2013, that EPA has made progress to assess chemical risks by taking the following actions but continues to face challenges. Specifically, in February 2012, EPA announced a plan that identified and prioritized 83 existing chemicals for risk assessment— known as the TSCA Work Plan. From this list of 83 chemicals, EPA’s Office of Pollution Prevention and Toxics—the office responsible for implementing TSCA—initiated risk assessments for 7 chemicals in 2012—5 of which were released for public comment—and announced plans to start risk assessments during 2013 and 2014 for 18 additional chemicals. EPA officials told us that they expect that all 7 risk assessments will be finalized early in 2014. However, it may be years before EPA initiates regulatory or other risk management actions to reduce any chemical risks identified in these assessments. Before EPA can determine such actions are warranted, the agency would need to consider other factors—such as costs and benefits of mitigating the risk, technological information, and the concerns of stakeholders—which could require additional time and resources. 
Moreover, assuming EPA meets its 2014 target for completing these 7 assessments and initiating new assessments, at its current pace, it would take EPA at least 10 years to complete risk assessments for the 83 chemicals in the TSCA Work Plan. As we reported, in March 2013, even with these increased efforts, it is unclear whether EPA can maintain its current pace given that it currently does not have the toxicity and exposure data it will need to conduct risk assessments for all of the 83 chemicals in its TSCA Work Plan. According to EPA officials and agency documents, the agency has started or plans to start risk assessments on the 25 chemicals for which it has well-characterized toxicity and exposure data. However, before EPA can initiate risk assessments for the remaining 58 chemicals, the agency will need to identify and obtain toxicity and exposure data. According to agency officials, to obtain the toxicity data needed, EPA may need to promulgate rules to require companies to perform additional testing on some of these chemicals. However, EPA has not clearly articulated how or when it plans to obtain these needed data. Moreover, without exposure-related data, such as those potentially available from chemical processors, EPA may still be missing the data necessary to conduct risk assessments. To better position EPA to ensure chemical safety under existing TSCA authority, in our March 2013 report we recommended that EPA develop strategies for addressing challenges associated with obtaining toxicity and exposure data needed for risk assessments. However, based on EPA’s written response to a draft of our 2013 report, it is unclear what action, if any, EPA intends to pursue. 
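The "at least 10 years" pace estimate follows from simple arithmetic: 7 assessments initiated in 2012 plus 18 planned for 2013 and 2014 is roughly 25 assessments over 3 years. A minimal sketch of that calculation (the figures come from this statement; the constant-rate assumption is ours):

```python
# Back-of-the-envelope check of the "at least 10 years" pace estimate.
# Figures are from the testimony; assuming a constant assessment rate.

initiated_2012 = 7        # risk assessments initiated in 2012
planned_2013_2014 = 18    # additional assessments planned for 2013-2014
total_work_plan = 83      # chemicals in the TSCA Work Plan

assessments_per_year = (initiated_2012 + planned_2013_2014) / 3  # 2012-2014 pace
years_needed = total_work_plan / assessments_per_year
print(round(years_needed, 1))  # -> 10.0, i.e., about a decade at the current pace
```

At roughly 8.3 assessments per year, the remaining Work Plan chemicals alone imply close to a decade of additional assessment work before any risk management decisions.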
EPA Has Taken Actions That May Discourage the Use of Certain Chemicals, but It Is Too Early to Tell Whether These Actions Will Reduce Chemical Risk EPA has taken actions that may discourage the use of certain chemicals, but because many of these actions have yet to be finalized, it is too early to tell whether they will reduce chemical risk. We reported in March 2013 that, given the difficulty that EPA has faced in the past using section 6 of TSCA to ban existing toxic chemicals or place limits on their production or use, the agency generally considers using this authority only after exhausting all other available options. Since 2009, EPA has made progress by increasing its use of certain options, including (1) making greater use of significant new use rules under section 5 and (2) proposing actions that use its TSCA authority in new ways as follows: EPA is making greater use of significant new use rules under section 5 to control new uses of existing chemicals. Our analysis of TSCA rulemaking from 2009 to 2012 shows that EPA has quadrupled its issuance of significant new use rules since 2009. From 2009 to 2012, EPA issued significant new use rules affecting about 540 chemicals, about 25 percent of all 2,180 chemicals subject to significant new use rules issued by EPA since 1976. EPA officials told us that EPA typically recommends that companies submit testing information when they notify EPA of their intent to manufacture or process chemicals, which enables EPA to better evaluate the potential risks associated with the new use. According to EPA officials, this approach allows the agency to “chip away” at chemicals that may pose risks to human health and the environment. 
Such recommendations may discourage companies from pursuing new uses of existing chemicals that may pose health or environmental risks either because testing itself can be expensive, or because the testing recommendation suggests that the agency may consider banning or limiting the manufacture or production of the chemical on the basis of that testing. EPA has also proposed actions that use its TSCA authority in new ways including the following: Creating “chemicals of concern” list. In May 2010, EPA announced that it intended to create a list of chemicals that present or may present ‘‘an unreasonable risk of injury to health or the environment.’’ EPA has had the authority to create such a list under section 5 of TSCA since its enactment in 1976 but has never attempted to use this authority. EPA submitted the list, which consists of three groups of chemicals, for review by the Office of Management and Budget (OMB) in May 2010, and as of May 2013, EPA’s proposed “chemicals of concern” list has been under review at OMB for over 1,000 days and remains listed as pending review by OMB. Pairing of test and significant new use rules. In December 2010, EPA submitted to OMB for review a proposal to pair testing rules with significant new use rules for the first time. Specifically, EPA has proposed single rules that combine provisions requiring companies to develop toxicity and other data with provisions requiring companies to provide data for new uses of chemicals. EPA has proposed using this approach in two cases. In one case, for example, EPA proposed this approach for certain flame retardants that are being voluntarily phased out, effective December 2013. Under the proposed rule, any new use of the chemical after it has been phased out would qualify as a significant new use, triggering a testing requirement. 
According to EPA officials, the pairing of these types of rules is intended to discourage new uses of certain chemicals that may pose a risk to human health or the environment and create a disincentive for companies to continue current use of the chemical—something EPA has not done before. OMB’s review of this proposal took 422 days and was completed on February 15, 2012. Extending significant new use rules to articles. Since 2009, EPA has made increasing use of its ability to subject chemicals contained in certain products, or “articles,” such as furniture, textiles, and electronics, to significant new use rules. Generally, those who import or process a substance as part of a product are exempted from compliance with a significant new use rule. EPA’s proposals would eliminate this exemption for certain chemicals. However, it is too early to assess the impact of EPA’s proposed actions because they have yet to be finalized. In addition, in some cases, OMB has not met the established 90-day time frame for reviewing EPA’s proposed actions, which has increased the time frames for formally proposing and finalizing them. (Any rules that EPA plans to issue under TSCA that are considered significant regulatory actions, as defined by Executive Order 12866, are subject to review by the Office of Information and Regulatory Affairs, an office within OMB, prior to being proposed in the Federal Register. Among other things, a significant regulatory action may have an annual effect on the economy of $100 million or more or raise novel legal or policy issues. OMB review is limited by executive order to 90 days, although it can be extended.) It Is Unclear Whether EPA’s New Approach Will Position the Agency to Achieve Its Goal of Ensuring the Safety of Chemicals Obtaining toxicity and exposure data. EPA’s strategy indicates that the agency plans to gain access to data, as needed, on a case-by-case basis from chemical companies. However, the agency’s strategy does not discuss how EPA would execute these plans or how the data obtained would be used to inform the agency’s ongoing or future risk assessment activities, if at all. 
Banning or limiting the use of chemicals. EPA’s strategy does not articulate how the agency would overcome the regulatory challenges it experienced in the past. In particular, EPA officials told us that, even if EPA has substantial toxicity and exposure data, the agency is challenged in meeting the statutory requirement under section 6 of TSCA to limit or ban chemicals. Further, EPA’s strategy does not identify the resources needed to meet its goal of ensuring chemical safety. For example, EPA’s strategy does not identify the resources needed to carry out risk assessment activities, even though risk assessment is a central part of EPA’s effort to manage chemicals under its new approach. Specifically, EPA does not identify roles and responsibilities of key staff or offices—for example, which office within EPA will develop the toxicity assessments needed to support its planned risk assessments—or identify staffing levels or costs associated with conducting its risk assessment activities. Without a clear understanding of the resources needed to complete risk assessments and other activities identified in its strategy, EPA cannot be certain that its current funding and staffing levels are sufficient to execute its new approach to managing chemicals under existing TSCA authorities. When developing new initiatives, agencies can benefit from following leading practices for federal strategic planning. Of these leading practices, it is particularly important for agencies to define strategies that address management challenges that threaten their ability to meet long-term goals. In our March 2013 report, we stated that without a plan that incorporates leading strategic planning practices—particularly a plan that clearly articulates how EPA will address management challenges—EPA cannot be assured that its new approach to managing chemicals, as described in its Existing Chemicals Program Strategy, will provide a framework to effectively guide its efforts. 
Consequently, EPA could be investing valuable resources, time, and effort without being certain that its efforts will bring the agency closer to achieving its goal of ensuring the safety of chemicals. As a result, we recommended that the EPA Administrator direct the appropriate offices to develop strategies for addressing challenges that impede the agency’s ability to meet its goal of ensuring chemical safety, to better position EPA to ensure chemical safety under its existing TSCA authority. In its written response to our March 2013 report, EPA’s Acting Assistant Administrator stated that change is needed in every significant aspect of the program and that, while strategic planning is a useful exercise, it cannot substitute for the basic authorities needed for a modern, effective chemicals program. Moreover, the Acting Assistant Administrator stated that it is EPA’s position that, absent statutory changes to TSCA, the agency will not be able to successfully meet the goal of ensuring chemical safety now and into the future. Chairman Shimkus, Ranking Member Tonko, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time. GAO Contact and Staff Acknowledgments If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions include Diane LoFaro, Assistant Director; Diane Raynes, Assistant Director; Elizabeth Beardsley; Richard Johnson; Alison O’Neill; and Aaron Shiffrin. Related GAO Products Toxic Substances: EPA Has Increased Efforts to Assess and Control Chemicals but Could Strengthen Its Approach. GAO-13-249. Washington, D.C.: March 22, 2013. Chemical Regulation: Observations on Improving the Toxic Substances Control Act. 
GAO-10-292T. Washington, D.C.: December 2, 2009. Chemical Regulation: Options for Enhancing the Effectiveness of the Toxic Substances Control Act. GAO-09-428T. Washington, D.C.: February 26, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. Toxic Chemicals: EPA’s New Assessment Process Will Increase Challenges EPA Faces in Evaluating and Regulating Chemicals. GAO-08-743T. Washington, D.C.: April 29, 2008. Chemical Regulation: Comparison of U.S. and Recently Enacted European Union Approaches to Protect against the Risks of Toxic Chemicals. GAO-07-825. Washington, D.C.: August 17, 2007. Chemical Regulation: Actions Are Needed to Improve the Effectiveness of EPA’s Chemical Review Program. GAO-06-1032T. Washington, D.C.: August 2, 2006. Chemical Regulation: Approaches in the United States, Canada, and the European Union. GAO-06-217R. Washington, D.C.: November 4, 2005. Chemical Regulation: Options Exist to Improve EPA’s Ability to Assess Health Risks and Manage Its Chemical Review Program. GAO-05-458. Washington, D.C.: June 13, 2005. Toxic Substances: EPA Should Focus Its Chemical Use Inventory on Suspected Harmful Substances. GAO/RCED-95-165. Washington, D.C.: July 7, 1995. Toxic Substances Control Act: Legislative Changes Could Make the Act More Effective. GAO/RCED-94-103. Washington, D.C.: September 26, 1994. Toxic Substances: EPA’s Chemical Testing Program Has Not Resolved Safety Concern. GAO/RCED-91-136. Washington, D.C.: June 19, 1991. Toxic Substances: EPA’s Chemical Testing Program Has Made Little Progress. GAO/RCED-90-112. Washington, D.C.: April 25, 1990. EPA’s Efforts To Identify and Control Harmful Chemicals in Use. GAO/RCED-84-100. Washington, D.C.: June 13, 1984. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 1976, Congress passed TSCA to give EPA the authority to obtain more health and safety information on chemicals and to regulate chemicals it determines pose unreasonable risks of injury to human health or the environment. GAO has reported that EPA has found many of TSCA's provisions difficult to implement. In 2009, EPA announced TSCA reform principles to inform ongoing efforts in Congress to strengthen the act. At that time, EPA also initiated a new approach for managing toxic chemicals using its existing TSCA authorities. This testimony summarizes GAO's past work describing: (1) challenges EPA has faced historically in regulating chemicals and (2) the extent to which EPA has made progress implementing its new approach, and challenges, if any, that persist. This statement is based on GAO reports issued between 1994 and 2013. GAO is not making new recommendations in this testimony. In prior reports, GAO suggested that Congress consider statutory changes to TSCA to give EPA additional authorities to obtain information from the chemical industry and shift more of the burden to chemical companies for demonstrating the safety of their chemicals. In these reports, among other things, GAO recommended that EPA require companies to provide chemical data they submitted to foreign governments, require companies to reassert confidentiality claims, and develop strategies for addressing challenges that impeded EPA's ability to ensure chemical safety. EPA's responses to these recommendations have varied. GAO reported in June 2005 that EPA has historically faced the following challenges in implementing the provisions of the Toxic Substances Control Act (TSCA): Obtaining adequate information on chemical toxicity and exposure. EPA has found it difficult to obtain such information because TSCA does not require companies to provide it; instead, TSCA requires EPA to demonstrate that chemicals pose certain risks before it can ask for such information. Banning or limiting chemicals. 
EPA has had difficulty demonstrating that chemicals should be banned or have limits placed on their production or use under section 6, TSCA's provisions for controlling chemicals. The agency issued regulations to ban or limit production or use of five existing chemicals, or chemical classes, out of tens of thousands of chemicals listed for commercial use. A court reversal of EPA's 1989 asbestos rule illustrates the difficulties EPA has had in issuing regulations to control existing chemicals. Disclosing data and managing assertions of confidentiality. EPA has not routinely challenged companies' assertions that data they provide are confidential business information and cannot be disclosed. As a result, the extent to which companies' confidentiality claims are warranted is unknown. GAO reported in March 2013 that EPA has made progress implementing its new approach to managing toxic chemicals under its existing TSCA authority but, in most cases, results have yet to be realized. Examples are as follows: EPA has increased efforts to collect toxicity and exposure data through the rulemaking process, but because rules can take 3 to 5 years to finalize and 2 to 2 1/2 years for companies to execute, these efforts may take several years to produce results. Specifically, since 2009, EPA has (1) required companies to test 34 chemicals and provide EPA with the resulting toxicity and other data, and (2) announced, but has not yet finalized, plans to require testing for 23 additional chemicals. EPA has increased efforts to assess chemical risks, but because EPA does not have the data necessary to conduct all risk assessments, it is too early to tell what, if any, risk management actions will be taken. In February 2012, EPA announced a plan that identified and prioritized 83 existing chemicals for risk assessment; the agency initiated assessments for 7 chemicals in 2012 and announced plans to start 18 additional assessments during 2013 and 2014. 
At its current pace, it would take EPA at least 10 years to complete risk assessments for the 83 chemicals. In addition, it is unclear whether EPA's new approach to managing chemicals will position the agency to achieve its goal of ensuring the safety of chemicals. EPA's Existing Chemicals Program Strategy, which is intended to guide EPA's efforts to assess and control chemicals in the coming years, does not discuss how EPA will address identified challenges. Consequently, EPA could be investing valuable resources, time, and effort without being certain that its efforts will bring the agency closer to achieving its goal of ensuring the safety of chemicals.
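The "at least 10 years" figure follows directly from the counts GAO cites; a minimal sketch, under the assumption that the 2012-2014 pace of assessments is sustained:

```python
# Pace implied by EPA's announced risk assessments (counts from the GAO summary).
prioritized = 83          # chemicals identified in EPA's February 2012 plan
initiated_2012 = 7        # assessments initiated in 2012
planned_2013_2014 = 18    # additional assessments announced for 2013 and 2014

# Average pace over the three years 2012-2014 (assumed to continue).
assessments_per_year = (initiated_2012 + planned_2013_2014) / 3

years_needed = prioritized / assessments_per_year

print(round(assessments_per_year, 1))  # roughly 8.3 assessments per year
print(round(years_needed))             # roughly 10 years, matching "at least 10 years"
```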
Background The 7(a) loan program, SBA’s largest lending program, is intended to serve small business borrowers who cannot otherwise obtain financing under suitable terms and conditions from the private sector. Under the program, SBA guarantees to repay a participating lender a prespecified percentage of the 7(a) loan amount (generally between 75 and 80 percent) in the event of borrower default. To obtain a 7(a) loan guarantee, a lender must document that the prospective borrower was unable to obtain financing under reasonable terms and conditions through normal business channels. Borrowers participating in the program represent a broad range of small businesses, including restaurants, consumer services, professional services, and retail outlets. The dollar volume of 7(a) loans that can be guaranteed under SBA’s authority is predetermined each fiscal year by congressional appropriations that subsidize the program. During fiscal year 1997, 7(a) loan approvals totaled nearly $9.5 billion—the highest level of loan approvals in the program’s history and an increase of over 20 percent from the previous fiscal year. As of December 31, 1997, there was $21.5 billion in total 7(a) loans outstanding. Particularly active among nondepository lenders are 12 Small Business Lending Companies (SBLCs) that accounted for about 19 percent of 7(a) loans outstanding at the end of 1997. Secondary Loan Markets Generate Benefits A secondary loan market is a resale market for loans originated in the primary market. It allows a lender to sell a loan it originates rather than holding the loan on its balance sheet. To hold a loan on its balance sheet, the lender would be required to obtain funding for the time period over which the loan was outstanding. For the types of loans I am discussing, when a lender sells a loan, it continues to service the loan by collecting borrower principal and interest payments and taking possible corrective actions if the borrower does not make required payments. A number of benefits are associated with secondary markets. 
They provide lenders with a funding alternative to deposits, lines of credit, and other debt sources. Secondary loan markets generally link borrowers and lenders in local markets to national capital markets. This link can provide liquidity for lenders, reduce regional imbalances in loanable funds, and possibly increase the overall availability of credit to the primary market and lower interest rates for borrowers. The share of loans in a primary market that are sold in a secondary market depends on the benefits generated by the secondary market. For example, secondary markets allow interest rate risk to be diversified among investors with access to funding sources that help them manage such risks. Interest rate risk is the possibility of financial loss due to changes in market interest rates. This risk is greatest for holders of assets such as fixed-rate loans. For example, a financial institution holding a 30-year fixed-rate mortgage on its balance sheet that it funds with short-term liabilities can experience losses if interest rates rise. In this case, interest earnings from the mortgage do not increase while interest costs do. Interest rate risk is also present for variable-rate loans with caps that limit how much interest rates paid by the borrower can increase. Adjustable-rate residential mortgages are an important example of such a variable-rate loan product. Depository institutions that rely on short-term deposits for funding have incentives to avoid holding fixed-rate assets on their balance sheets. In this case, secondary markets provide a funding source that is less likely to be disrupted in a changing interest rate environment. Credit risk is the possibility of financial loss resulting from borrower defaults. When secondary market investors are exposed to credit risk, secondary market sales can be impeded if investors lack information on lenders, borrowers, and loan characteristics to estimate their exposure to credit risks. 
Investors who purchase federally guaranteed loans and securities are not subject to credit risk because the federal guarantees ensure that investors will be paid on defaulted loans. However, lenders and investors are subject to credit risk on unguaranteed loans or portions of loans. Prepayment risk is the risk that borrowers will pay off their loans before maturity. For example, prepayments can lower returns to investors in fixed-rate loans if borrowers prepay the loans when interest rates decline. Likewise, for fixed- or variable-rate loans, prepayments can lower returns to investors who pay a premium for a pool of loans with relatively high interest rates. Federal guarantees do not mitigate this risk. Thus, secondary market sales can be impeded if investors lack information on lenders, borrowers, and loan characteristics to estimate their exposure to prepayment risks. Analysts are able, by using various statistical techniques, to estimate prepayment risks for large loan pools for which information is available on the pool’s loan characteristics and historic prepayment rates on statistical samples of similar loans. In contrast, such estimates are less reliable for securities backed by loan pools composed of a relatively small number of loans. The size of the pool is important because loans with cash flows that represent statistical outliers are less likely to cause the cash flow from a large pool to differ from those of other representative statistical samples of similar loans. The Ginnie Mae Guaranteed MBS Market Is Large Ginnie Mae is a government corporation that is part of the Department of Housing and Urban Development. Ginnie Mae participating lenders originate mortgages insured by the Federal Housing Administration (FHA) and the Department of Veterans Affairs (VA) for resale in the secondary market. These lenders issue mortgage-backed securities backed by cash flows from these mortgages. 
For a fee of 6 basis points, Ginnie Mae guarantees timely payment of principal and interest on these securities. Currently, over $500 billion in MBS guaranteed by Ginnie Mae is outstanding. Most mortgages backing Ginnie Mae MBS are FHA-insured mortgages. This secondary market also allows nondepository lenders to compete in the primary market for loan originations even though they do not have a deposit base to finance the mortgages on their balance sheets. Due to competitive forces in the primary market and as a result of increased access to additional sources of funds for lenders, this secondary market has contributed to lower interest rates paid by borrowers on federally insured mortgages. Over 90 percent of single-family FHA mortgages have been sold in the Ginnie Mae MBS secondary market. Over 70 percent of FHA-insured mortgages are fixed-rate mortgages. These mortgages have greater interest rate risk than adjustable-rate mortgages. In addition, for adjustable-rate mortgages, FHA limits the degree to which interest rates paid by the borrower can increase to a maximum of 1 percentage point annually and 5 percentage points over the life of the mortgage loan. Therefore, Ginnie Mae guaranteed MBS backed by FHA-insured adjustable-rate mortgages also entail interest rate risk for investors. As I discussed earlier, the presence of interest rate risk in a primary market increases the attractiveness of the secondary market to loan originators. An investor in a Ginnie Mae MBS is to receive an offering statement that discloses the issuer of the MBS, which is normally the lender. Other information to be disclosed includes the value of loans in the pool and characteristics of the loans, such as whether they are 30-year fixed-rate or adjustable-rate. The minimum pool size is eight loans, but most pools are much larger. For a fixed-rate MBS pool, interest rates paid by borrowers in the pool must be within one percentage point of each other. 
For MBS backed by adjustable-rate mortgage pools, the index used to adjust the interest rate paid by the borrower must be specified. Therefore, in estimating prepayment risk, the investor is helped by being able to analyze a relatively large and homogeneous loan pool issued by a particular lender. Investors in Ginnie Mae MBS include mutual funds, pension funds, insurance companies, and individuals. Characteristics of the Guaranteed 7(a) Secondary Market Differ From Those of the Ginnie Mae MBS Market Particularly active among nondepository lenders are 12 SBLCs that accounted for about 19 percent of 7(a) loans outstanding at the end of 1997. In this market, SBA 7(a) lenders sell their loans to pool assemblers who form pools by combining the loans of a number of lenders and then sell certificates backed by these pools. Colson Services, SBA’s fiscal and transfer agent (FTA), monitors and handles the paperwork and data management system for all 7(a) guaranteed portions sold on the secondary market. Colson also serves as a central registry for all sales and resales of these portions. The firm receives payment from lenders for its secondary market services equal to 12 1/2 basis points of the value of certificates for guaranteed portions under Colson’s management. In 1997, SBA 7(a) secondary market sales of pooled guaranteed portions totaled approximately $2.6 billion. In contrast to Ginnie Mae guaranteed MBS that are backed by cash flows from whole loans, 7(a) loans are divided into separate guaranteed and unguaranteed portions for secondary market sales. SBA reported that in 1997 over 12,000 7(a) guaranteed portions were sold on the secondary market, about 40 percent of all 7(a) loans approved that year. In recent years, anywhere from a third to almost half of the guaranteed portions of loans originated have been sold on the secondary market. 
This is in contrast to Ginnie Mae guaranteed MBS, which represent over 90 percent of outstanding federally insured residential mortgage loans. The guaranteed 7(a) secondary market is smaller and less active, and provides lenders with fewer incentives to sell loans than the federally insured residential mortgage market. At the end of 1997, about $10 billion in guaranteed portions of 7(a) loans were outstanding, while Ginnie Mae guaranteed MBS had over $500 billion outstanding. The 7(a) market does not benefit from the incentive for lenders to sell on the secondary market to mitigate interest rate risk, because the 7(a) program consists mainly of variable-rate loans without interest rate caps. Almost 90 percent of 7(a) loans made in 1997 were variable-rate loans without interest rate caps. Because interest rates are adjusted at least quarterly to reflect market rates, these loans entail almost no interest rate risk. In addition, SBA 7(a) loans can consist of loans backed by a variety of items, such as real estate, production inventory, or equipment, and 7(a) loans finance a broad range of businesses. Residential mortgages are all backed by residential property. Because of the heterogeneous nature of 7(a) loans, analysts are less able to accurately estimate prepayment risks. Some participants in the secondary market for 7(a) guaranteed portions expressed concern that information useful to investors in analyzing prepayment risk is not available when pool certificates are resold, limiting investors’ ability to resell in this market. SBA is concerned that providing such information to investors could reduce benefits to some 7(a) borrowers by potentially allowing investors to identify individual borrowers and groups of borrowers. According to SBA, all investors in 7(a) guaranteed pool certificates are institutional investors, such as pension funds, insurance companies, and mutual funds. 
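The fees and guarantee splits described in this statement reduce to simple basis-point arithmetic (1 basis point = 1/100 of a percent). A minimal sketch, in which the loan and security amounts are hypothetical examples rather than figures from the statement:

```python
def basis_points(amount, bp):
    """Fee expressed in basis points: 1 bp = 0.01 percent = 1/10,000 of the amount."""
    return amount * bp / 10_000

# Hypothetical $100,000 7(a) loan with an 80 percent SBA guarantee.
loan = 100_000
guaranteed_portion = loan * 0.80        # the portion SBA repays on default, salable in pools
unguaranteed_portion = loan - guaranteed_portion

# Colson Services' FTA fee: 12 1/2 basis points on certificate value under management.
colson_fee = basis_points(guaranteed_portion, 12.5)

# Ginnie Mae's guarantee fee: 6 basis points, here on a hypothetical $500,000 MBS.
ginnie_fee = basis_points(500_000, 6)

print(guaranteed_portion)  # 80000.0
print(colson_fee)          # 100.0
print(ginnie_fee)          # 300.0
```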
In summary, secondary market volume for both the guaranteed portions of 7(a) loans and federally insured residential mortgage loans is a market outcome that depends on the relative benefits provided by the respective secondary markets compared to other methods of finance. For the most part, SBA has few if any means to change factors that, through market forces, limit the relative size of the secondary market. For example, heterogeneity in loan characteristics, which results from the 7(a) program’s intention to serve a broad range of small businesses, limits the ability of investors to estimate prepayment risk. As we continue our work, we will consider SBA actions that indicate the potential to improve the efficiency of the SBA 7(a) secondary market in achieving the objectives established for it. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed the Small Business Administration (SBA) 7(a) guaranteed loans secondary market, focusing on the: (1) benefits generally provided by secondary loan markets; (2) characteristics of the secondary market for federal government guaranteed mortgage loans; and (3) characteristics of the guaranteed 7(a) secondary market in relation to those of the Government National Mortgage Association (Ginnie Mae) mortgage-backed securities (MBS) secondary market. GAO noted that: (1) the 7(a) loan program, SBA's largest lending program, is intended to serve small business borrowers who cannot otherwise obtain financing under suitable terms and conditions from the private sector; (2) secondary loan markets link borrowers and lenders in local markets to national capital markets, thus reducing dependence on local funds availability; (3) the secondary market in residential mortgages is recognized for creating this link; (4) this secondary market has reduced regional imbalances in the availability of loanable funds; (5) other benefits, which include tapping additional sources of funds, have also helped to lower interest rates paid by borrowers; (6) in addition, this secondary market allows interest rate risk inherent in holding fixed-rate loans to become diversified among investors that might be better able to hedge against such risks than loan originators; (7) in contrast to Ginnie Mae guaranteed MBS that are backed by cash flows from whole loans, 7(a) loans are divided into separate guaranteed and unguaranteed portions for secondary market sales; (8) preliminary results indicate that the guaranteed 7(a) secondary market has also linked borrowers and lenders in local markets to national capital markets, and thus generated some of the benefits generally created by other secondary markets; (9) however, the guaranteed 7(a) secondary market has characteristics that limit its size in relation to the primary 7(a) market; (10) in particular, most 
7(a) loans are variable-rate loans with almost no interest rate risk, which reduces incentives for some lenders to use the secondary market; (11) in addition, 7(a) secondary market investors, relative to MBS investors, have less information to accurately estimate their exposure to risks associated with borrowers paying off their loans before they are due (prepayment risks), which may limit whether or how much they are willing to participate; (12) secondary market volume for both the guaranteed portions of 7(a) loans and federally insured residential mortgage loans is a market outcome that depends on the relative benefits provided by the respective secondary markets compared to other methods of finance; and (13) for the most part, SBA has few if any means to change factors that, through market forces, limit the relative size of the secondary market.
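The point above that investors in small, heterogeneous 7(a) pools have less reliable prepayment-risk estimates than MBS investors is, at bottom, a sampling-error effect: the observed prepayment rate of a small pool swings much more widely around the true rate than that of a large pool. A minimal sketch of that effect, using a hypothetical annual prepayment probability and treating loans as independent (an idealizing assumption):

```python
import math

def prepayment_rate_std_error(p, n):
    """Standard error of a pool's observed prepayment rate, treating each of the
    n loans as an independent draw with annual prepayment probability p."""
    return math.sqrt(p * (1 - p) / n)

p = 0.10  # hypothetical 10 percent annual prepayment probability

small_pool = prepayment_rate_std_error(p, 8)     # Ginnie Mae's minimum pool size
large_pool = prepayment_rate_std_error(p, 2000)  # a large, more typical MBS pool

print(round(small_pool, 3))  # 0.106 -- an 8-loan pool's observed rate varies widely
print(round(large_pool, 3))  # 0.007 -- a 2,000-loan pool stays close to 10 percent
```

Real loan pools are not independent draws, and 7(a) pools are further complicated by heterogeneous collateral and borrower types, but the n in the denominator is why larger, more homogeneous pools support more reliable prepayment estimates.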
Background The growth in information technology, networking, and electronic storage has made it ever easier to collect and maintain information about individuals. An accompanying growth in incidents of loss and unauthorized use of such information has led to increased concerns about protecting this information on federal systems as well as information held by private-sector sources, such as data resellers that specialize in amassing personal information from multiple sources. As a result, additional laws protecting personally identifiable information collected and maintained by both government and private-sector entities have been enacted since the Privacy Act of 1974, including measures that are particularly concerned with the protection of personal data maintained in automated information systems. Protecting personally identifiable information in federal systems, such as names, dates of birth, and SSNs, is critical because its loss or unauthorized disclosure can lead to serious consequences for individuals. These consequences include identity theft or other fraudulent activity, which can result in substantial harm, embarrassment, and inconvenience. Identity Theft Is a Serious Problem Identity theft is a serious problem because, among other things, it may take a long period of time before a victim becomes aware that the crime has taken place, and thus can cause substantial harm to the victim’s credit rating. Moreover, while some identity theft victims can resolve their problems quickly, others face substantial costs and inconvenience repairing damage to their credit records. Some individuals have lost job opportunities, been refused loans, or even been arrested for crimes they did not commit as a result of identity theft. Millions of people become victims of identity theft each year. The Federal Trade Commission (FTC) estimates that in 1 year, as many as 10 million people—or 4.6 percent of the U.S. 
adult population—discover that they are victims of some form of identity theft, translating into reported losses exceeding $50 billion. In 2007, the FTC estimated that the median value of goods and services obtained by identity thieves was $500, with 10 percent of victims reporting the thief obtained $6,000 or more. Similarly, a more recent 2008 industry survey estimated that 9.9 million adults in the United States were victims of identity fraud. While available data suggest that identity theft remains a persistent and serious problem, the FTC found that most victims of identity theft do not report the crime. Therefore, the total number of identity thefts is unknown. Several examples we previously identified illustrate the magnitude of the losses that could occur from a single incident and how aggregated personal information can be vulnerable to misuse: A help desk employee at a New York-based software company, which provided software to its clients to access consumer credit reports, stole the identities of up to 30,000 individuals by using confidential passwords and subscriber codes of the company’s customers. The former employee reportedly sold these identities for $60 each. Furthermore, given the explosion of Internet use and the ease with which personally identifiable information is accessible, individuals looking to steal someone’s identity are increasingly able to do so. In our work, we identified a case where an individual obtained the names and SSNs of high-ranking U.S. military officers from a public Web site and used those identities to apply online for credit cards and bank credit. In 2006, an Ohio woman pled guilty to conspiracy, bank fraud, and aggravated identity theft as the leader of a group that stole citizens’ personal identifying information from a local public record keeper’s Web site and other sources, resulting in over $450,000 in losses to individuals, financial institutions, and other businesses. 
In February 2007, an individual was convicted of aggravated identity theft, access device fraud, and conspiracy to commit bank fraud in the Eastern District of Virginia. The individual, who went by the Internet nickname “John Dillinger,” was involved in extensive illegal online “carding” activities, in which he received e-mails or instant messages containing hundreds of stolen credit card numbers, usually obtained through phishing schemes or network intrusions, from “vendors” who were located in Russia and Romania. In his role as a “cashier” of these stolen credit card numbers, this individual would then electronically encode these numbers to plastic bank cards, make ATM withdrawals, and return a portion to the vendors. Computers seized by authorities revealed over 4,300 compromised account numbers and full identity information (i.e., name, address, date of birth, Social Security number, and mother’s maiden name) for over 1,600 individual victims. Steps Have Been Taken at the Federal, State, and Local Level to Prevent Identity Theft, Although Gaps Remain in Efforts to Assist Victims Several steps have been taken, both in terms of legislation and administrative actions to combat identity theft at the federal, state and local levels, although efforts to assist victims of the crime once it has occurred remain somewhat piecemeal. While there is no one law that regulates the overall use of personally identifiable information by all levels and branches of government, numerous federal laws place restrictions on public and private sector entities’ use and disclosure of individuals’ personal information in specific instances, including the use and disclosure of SSNs—a key piece of information that is highly valuable to identity thieves. One intention of some of these laws is to prevent the misuse of personal information for purposes such as identity theft. 
Several Federal Laws Seek to Protect Personally Identifiable Information Including SSNs Two primary laws (the Privacy Act of 1974 and the E-Government Act of 2002) give federal agencies responsibilities for protecting personal information, including ensuring its security. Additionally, the Federal Information Security Management Act of 2002 (FISMA) requires agencies to develop, document, and implement agencywide programs to provide security for their information and information systems (which include personally identifiable information and the systems on which it resides). FISMA is the primary law governing information security in the federal government. The act also requires the National Institute of Standards and Technology (NIST) to develop technical guidance in specific areas, including minimum information security requirements for information and information systems. Other laws that help protect personally identifiable information include the Identity Theft and Assumption Deterrence Act of 1998, the Identity Theft Penalty Enhancement Act of 2004, the Gramm-Leach-Bliley Act (GLBA), and the Fair and Accurate Credit Transactions Act (FACTA). (See app. I, table 1, for a more detailed description of these and other related laws.) For example, the Identity Theft and Assumption Deterrence Act, enacted in 1998, makes it a criminal offense for a person to “knowingly transfer, possess, or use without lawful authority,” another person’s means of identification, such as their SSN, with the intent to commit, or in connection with, any unlawful activity that constitutes a felony under state or local law. This act also mandated a specific role for the FTC in combating identity theft. 
To fulfill the mandate, FTC is collecting identity theft complaints and assisting victims through a telephone hotline and a dedicated Web site; maintaining and promoting the Identity Theft Data Clearinghouse, a centralized database of victim complaints that serves as an investigative tool for law enforcement; and providing outreach and education to consumers, law enforcement, and industry. According to FTC, it receives roughly 15,000 to 20,000 contacts per week on the hotline, via its Web site, or through the mail from victims and consumers who want to avoid becoming victims. In addition, the Identity Theft Enforcement and Restitution Act of 2008 requires persons convicted of identity theft to compensate their victims for the value of the time spent by the victim in an attempt to remediate the intended or actual harm incurred. Another law with some provisions to assist victims of identity theft is FACTA. This law has several provisions to help address the difficulties victims often encounter in trying to recover from identity theft, including (1) a requirement that the FTC develop a model summary of rights to be distributed to consumers who believe that they are victims of identity theft, (2) the right for consumers to place fraud alerts on their credit reports, (3) the right to obtain copies of business records involved in transactions alleged to be the result of identity theft, and (4) the right to obtain all information about fraudulently incurred debts that have been turned over to a collection agency. The Office of Management and Budget has also issued numerous memoranda to federal agencies on safeguarding personally identifiable information. These cover such matters as designating a senior privacy official with responsibility for safeguarding information, and developing and implementing a data breach notification plan. (See app. I, table 2, for a more comprehensive list of pertinent OMB memoranda). 
Several Federal Agencies Are Involved in Identifying and Investigating Identity Theft Numerous federal agencies can have a role in identifying and investigating identity theft. This is, in part, because identity theft is not a “stand-alone” crime, but rather a component of one or more complex crimes, such as computer fraud, credit card fraud, or mail fraud. For example, with the theft of identity information, a perpetrator may commit computer fraud when using a stolen identity to fraudulently obtain credit on the Internet. Computer fraud may also be the primary vehicle used to obtain identity information, as when the offender gains unauthorized access to another computer or Web site to obtain such information. As a result, if caught, the offender may be charged with both identity theft and computer fraud. Moreover, perpetrators usually prey on multiple victims in multiple jurisdictions. Consequently, a number of federal law enforcement agencies may be involved in investigating identity theft crimes. How the thief obtains or uses an individual’s identity usually dictates which federal agency has jurisdiction in the case. For example, if an individual finds that an identity thief has stolen the individual’s mail to obtain credit cards, bank statements, or tax information, the victim should report the crime to the U.S. Postal Inspection Service, the law enforcement arm of the U.S. Postal Service. In addition, violations are investigated by other federal agencies, such as the Social Security Administration Office of the Inspector General, the U.S. Secret Service, the Federal Bureau of Investigation (FBI), the U.S. Department of State, the U.S. Department of Education Office of Inspector General, and the Internal Revenue Service. The Department of Justice may also prosecute federal identity theft cases. (See app. I, table 3, which highlights some of the jurisdictional responsibilities of key federal agencies.) 
States and Localities Have Enacted Laws and Taken Other Measures to Prevent Identity Theft and Assist Potential Victims Many states have laws prohibiting the theft of identity information. For example, New York law makes identity theft a crime. In other states, identity theft statutes also address specific crimes committed under a false identity. For example, Arizona law prohibits any person from using deceptive means to alter certain computer functions or use software to collect bank information, take control of another person’s computer, or prevent the operator from blocking the installation of specific software. In addition, Idaho law makes it unlawful to impersonate any state official to seek, demand, or obtain personally identifiable information of another person. Some states have also included victim assistance provisions in their laws. For example, Washington state law requires police and sheriffs’ departments to provide a police report or original incident report at the request of any consumer claiming to be a victim of identity theft. Recently, some county governments have also completed or begun redacting or truncating SSNs that are displayed in public records—that is, removing the full SSN from display or showing only part of it. Some are responding to state laws requiring these measures, but others have acted on their own based on concerns about the potential vulnerability of SSNs to misuse. Vulnerabilities Remain to Protecting Personally Identifiable Information While steps have been taken at the federal, state, and local level to prevent identity theft, vulnerabilities remain in both the public and private sectors. 
These vulnerabilities can be grouped into several areas, including (1) the display and use of Social Security numbers; (2) the availability of personal information through private information resellers; and (3) security weaknesses in federal agency information systems that may lead to data security breaches involving personally identifiable information. SSNs Are a Key Piece of Information Used in Identity Theft SSNs are a critical piece of information used to perpetrate identity theft. Although the SSN was created as a means to track workers’ earnings and eligibility for Social Security benefits, it is now also a vital piece of information needed to function in American society. Because of its unique nature and broad applicability, the SSN has become the identifier of choice for public and private sector entities, and it is used for numerous non-Social Security purposes. Today, U.S. citizens generally need an SSN to pay taxes, obtain a driver’s license, or open a bank account, among other things. SSNs, along with names and birth certificates, are among the three personal identifiers most often sought by identity thieves. SSNs play an important role in identity theft because they are used as breeder information to create additional false identification documents, such as drivers’ licenses. Most often, identity thieves use SSNs belonging to real people rather than fabricating them; however, a review of identity theft reports found that victims usually (65 percent of the time) did not know where or how the thieves got their personal information. In those instances when the source was known, the personal information, including SSNs, usually was obtained illegally. In these cases, identity thieves most often gained access to this personal information by taking advantage of an existing relationship with the victim. The next most common means of gaining access were stealing information from purses, wallets, or the mail. 
Finally, while documents such as public records were traditionally accessed by visiting government records centers, a growing source of identity theft may be via the Internet. This is because some record keepers sell records containing SSNs in bulk to private companies and provide access to records on their own government Web sites. When records are sold in bulk or made available on the Internet, it is unknown how and by whom the records, and the personal identifying information contained in them, are used. Because the sources of identity theft cannot be more accurately pinpointed, it is not possible at this time to determine whether SSNs that are used improperly are obtained most frequently from the private or public sector. Our prior work has documented several areas where potential vulnerabilities exist with respect to protecting the security of SSNs in both the public and private sectors. For example: SSNs are displayed on some government-issued identification cards: We have reported that an estimated 42 million Medicare cards, 8 million Department of Defense (DOD) insurance cards, and 7 million Department of Veterans Affairs (VA) beneficiary cards displayed entire nine-digit SSNs. VA and DOD have begun taking action to remove SSNs from cards. For example, VA is eliminating SSNs from 7 million VA identification cards and will replace cards with SSNs or issue new cards without SSNs until all such cards have been replaced. However, the Centers for Medicare and Medicaid Services, with the largest number of cards displaying the entire nine-digit SSN, has no plans to remove the SSN from Medicare identification cards. In examining the availability of SSNs in public records, we found that it is possible to reconstruct an individual’s full nine-digit SSN by combining a truncated SSN from a federally generated lien record with a truncated SSN obtained from an information reseller. These records typically contain an individual’s SSN, name, and address. 
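The reconstruction risk described above can be illustrated with a short sketch. The SSN, record formats, and function below are hypothetical, assuming one record masks the last four digits while the other masks the first five:

```python
# Illustrative sketch with hypothetical data: two records truncated under
# complementary schemes can be combined to rebuild a full nine-digit SSN.
# Here, a lien record masks the last four digits ("X"), while an
# information reseller's record masks the first five.
def combine_truncated(lien_record: str, reseller_record: str) -> str:
    """Merge two differently truncated SSNs (format 'DDD-DD-DDDD',
    with masked digits shown as 'X') into one SSN, where possible."""
    full = []
    for a, b in zip(lien_record, reseller_record):
        if a != "X":
            full.append(a)
        elif b != "X":
            full.append(b)
        else:
            full.append("X")  # digit unrecoverable from either record
    return "".join(full)

# Lien record shows the first five digits; reseller shows the last four.
print(combine_truncated("123-45-XXXX", "XXX-XX-6789"))  # 123-45-6789
```

Because the two masking schemes are complementary, every digit hidden in one record is visible in the other, which is why GAO advised Congress to consider a standardized truncation method.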
As a result of these findings, we advised Congress to consider enacting legislation to develop a standardized method of truncating SSNs. Such legislation was introduced in the 110th Congress. Federal Law Does Not Cover All Data or Services Provided by Information Resellers Federal law does not currently cover all data or services provided by information resellers, and the personally identifiable information these entities use in the course of their business operations could create potential vulnerabilities for identity theft, particularly when the information is available on the Internet. Information resellers, sometimes referred to as information brokers, are businesses that specialize in amassing personal information from multiple sources and offering informational services, including data on individuals. These entities may provide their services to a variety of prospective buyers, either to specific business clients or to the general public through the Internet. More prominent information resellers, such as consumer reporting agencies and entities like LexisNexis, provide information to their customers for various purposes, such as building consumer credit reports, verifying an individual’s identity, differentiating records, marketing their products, and preventing financial fraud. These information resellers limit their services to businesses and government entities that establish accounts with them and have a legitimate purpose for obtaining an individual’s personal information. For example, law firms and collection agencies may request information on an individual’s bank accounts and real estate holdings for use in civil proceedings, such as a divorce. Information resellers that offer their services through the Internet (Internet resellers) generally advertise their services to the general public for a fee. 
Resellers, whether well-known or Internet-based, collect information from three sources: public records, publicly available information, and nonpublic information. The aggregation of the general public’s personal information, such as SSNs, in large corporate databases, and the increased availability of information via the Internet, may provide unscrupulous individuals a means to acquire SSNs and other personal information and use them for illegal purposes, including identity theft. However, no federal law explicitly requires all information resellers to safeguard all of the sensitive personal information they may hold. For example, the Fair Credit Reporting Act (FCRA) applies only to consumer information used or intended to be used to help determine eligibility for credit, and GLBA’s safeguarding requirements apply only to customer data held by GLBA-defined financial institutions. Consequently, much of the personal information maintained by information resellers that does not fall under FCRA or GLBA is not necessarily required by federal law to be safeguarded, even when the information is sensitive and subject to misuse by identity thieves. Federal Agencies Rely on Information Systems to Carry Out Their Missions, but Security Weaknesses Leave Them Vulnerable to Data Breaches Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. It is therefore important for agencies to safeguard their systems against risks such as loss or theft of resources (such as federal payments and collections), modification or destruction of data, and unauthorized use of computer resources, including their use to launch attacks on other computer systems. 
Without such safeguards, sensitive information, such as taxpayer data, Social Security records, medical records, and proprietary business information could be inappropriately disclosed, browsed, or copied for improper or criminal purposes including identity theft. Our work indicates that persistent weaknesses appear in five major categories of information system controls. As a result, federal systems and sensitive information are at increased risk of unauthorized access and disclosure, modification, or destruction, as well as inadvertent or deliberate disruption of system operations and services. GAO has found that federal agencies continue to experience numerous security incidents that could leave sensitive personally identifiable information in federal records vulnerable to identity theft. Such risks are illustrated by the following examples: In February 2009, the Federal Aviation Administration (FAA) notified employees that an agency computer was illegally accessed and employee personal identity information had been stolen electronically. Two of the 48 files on the breached computer server contained personal information about more than 45,000 FAA employees and retirees who were on the FAA’s rolls as of the first week of February 2006. Law enforcement agencies were notified and are investigating the data theft. In June 2008, the Walter Reed Army Medical Center reported that officials were investigating the possible disclosure of personally identifiable information through unauthorized sharing of a data file containing the names of approximately 1,000 Military Health System beneficiaries. Walter Reed officials were notified of the possible exposure on May 21 by an outside company. Preliminary results of an ongoing investigation identified a computer from which the data had apparently been compromised. 
Data security personnel from Walter Reed and the Department of the Army think it is possible that individuals named in the file could become victims of identity theft. The compromised data file did not include protected health information such as medical records, diagnoses, or prognoses for patients. During fiscal year 2008, federal agencies reported 16,843 incidents to the U.S. Computer Emergency Readiness Team (US-CERT)—a 206 percent increase over the 5,503 incidents reported in 2006. Thus, significant weaknesses continue to threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of federal agencies. The extent to which data breaches result in identity theft is not well known, in large part because it can be difficult to determine the source of the information used to commit identity theft. Available data and interviews with researchers, law enforcement officials, and industry representatives indicate that most breaches have not resulted in detected incidents of identity theft. In 2007, we reported on data breaches in selected sectors of the economy and the potential benefits of breach notifications. As part of this review, we examined the 24 largest breaches that appeared in the news media from January 2000 through June 2005 and found that 3 breaches appeared to have resulted in fraud on existing accounts, and 1 breach appeared to have resulted in the unauthorized creation of new accounts. When data breaches do occur, notification to the individuals affected and/or the public has clear benefits, allowing individuals the opportunity to take steps to protect themselves against the dangers of identity theft. 
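The growth in reported incidents cited above can be checked with quick arithmetic; a minimal sketch (the variable names are ours):

```python
# Verifying the reported growth in security incidents:
# 16,843 incidents in fiscal year 2008 vs. 5,503 in 2006.
incidents_2006 = 5_503
incidents_2008 = 16_843

# Percent increase = (new - old) / old * 100
pct_increase = (incidents_2008 - incidents_2006) / incidents_2006 * 100
print(round(pct_increase))  # 206, matching the increase reported above
```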
Moreover, although existing laws do not require agencies to notify the public when data breaches occur, such notification is consistent with federal agencies’ responsibility to inform individuals about how their information is being accessed and used, and promotes accountability for privacy protection. Similarly, in the private sector, representatives of federal banking regulators, industry associations, and other affected parties told us that breach notification requirements have encouraged companies and other entities to improve their data security practices to minimize legal liability or avoid public relations risks that may result from a publicized breach of customer data. Further, notifying affected consumers of a breach gives individuals the opportunity to mitigate potential risk—for example, by reviewing their credit card statements and credit reports, or placing a fraud alert on their credit files. Requiring consumer notification of data breaches may encourage better data security practices and help deter or mitigate harm from identity theft; however, such practices also involve monetary costs and other challenges such as determining an appropriate notification standard. Based on the experience of various federal agencies and private sector organizations in responding to data breaches, we identified the following lessons learned regarding how and when to notify government officials, affected individuals, and the public of a data breach. In particular: Rapid internal notification of key government officials is critical. A core group of senior officials should be designated to make decisions regarding an agency’s response. Mechanisms must be in place to obtain contact information for affected individuals. Determining when to offer credit monitoring to affected individuals requires risk-based management decisions. Interaction with the public requires careful coordination and can be resource-intensive. 
Internal training and awareness are critical to timely breach response, including notification. Contractor responsibilities for data breaches should be clearly defined. OMB issued guidance in 2006 and 2007 reiterating agency responsibilities under the Privacy Act and FISMA, as well as technical guidance, drawing particular attention to the requirements associated with personally identifiable information. In this guidance, OMB directed, among other things, that agencies encrypt data on mobile computers or devices and follow NIST security guidelines regarding personally identifiable information. However, guidance to assist agency officials in making consistent risk-based determinations about when to offer credit monitoring or other protection services has not been developed. Without such guidance, agencies are likely to continue to make inconsistent decisions about what protections to offer affected individuals, potentially leaving some people more vulnerable than others. We and various agency inspectors general have made numerous recommendations to federal agencies to resolve prior significant control deficiencies and information security program shortfalls. In particular, we have noted that agencies need to implement controls that reduce the chance of incidents involving data loss or theft, computer intrusions, and privacy breaches. For example, we recommended that the Director of OMB develop guidance for federal agencies on conducting risk analyses to determine when to offer credit monitoring and when to contract for an alternative form of monitoring, such as data breach monitoring, to assist individuals at risk of identity theft as a result of a federal data breach. Other recommendations include that agencies implement controls that prevent, limit, or detect access to computer resources, and that they manage the configuration of network devices to prevent unauthorized access and ensure system integrity. 
In addition, opportunities exist to enhance the policies and practices necessary for implementing sound information security programs. To implement these programs, agencies must create and maintain inventories of major systems, implement common security configurations, ensure staff receive information security training, test and evaluate controls, take remedial actions for known deficiencies, and certify and accredit systems for operation. While these recommendations are intended to broadly strengthen the integrity of federal information systems, they will also help address many of the vulnerabilities that can contribute to identity theft. Concluding Observations Efforts at the federal, state, and local level to protect personally identifiable information and help prevent identity theft are positive steps, but challenges remain. In particular, the use of SSNs by both public and private sector entities is likely to continue, given that the SSN is the key identifier used by these entities and there is currently no widely accepted alternative. Personally identifiable information, including an individual’s name, date of birth, and SSN, comprises key pieces of information used to perpetrate identity theft and fraud, and it is critical that steps be taken to protect such information. Without proper safeguards in place, such information will remain vulnerable to misuse, thus adding to the growing number of identity theft victims. As Congress moves forward in pursuing legislation to address the problem of identity theft, focusing the debate on vulnerabilities that have already been documented may help target efforts and policy directly toward new solutions. We look forward to supporting congressional consideration of these important policy issues. Mr. Chairman, this concludes my prepared testimony. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. 
GAO Contacts For further information regarding this testimony, please contact me at [email protected] or (202) 512-7215. In addition, contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this statement. Individuals making key contributions to this testimony include Jeremy Cox, John De Ferrari, Doreen Feldman, Christopher Lyons, Joel Marus, and Rachael Valliere. Related GAO Products Information Security: Agencies Make Progress in Implementation of Requirements, but Significant Weaknesses Persist. GAO-09-701T. Washington, D.C.: May 19, 2009. Social Security Numbers Are Widely Available in Bulk and Online Records, but Changes to Enhance Security Are Occurring. GAO-08-1009R. Washington, D.C.: September 19, 2008. Information Security: Federal Agency Efforts to Encrypt Sensitive Information Are Under Way, but Work Remains. GAO-08-525. Washington, D.C.: June 27, 2008. Information Security: Progress Reported, but Weaknesses at Federal Agencies Persist. GAO-08-571T. Washington, D.C.: March 12, 2008. Information Security: Protecting Personally Identifiable Information. GAO-08-343. Washington, D.C.: January 25, 2008. Information Security: Despite Reported Progress, Federal Agencies Need to Address Persistent Weaknesses. GAO-07-837. Washington, D.C.: July 27, 2007. Cybercrime: Public and Private Entities Face Challenges in Addressing Cyber Threats. GAO-07-705. Washington, D.C.: June 22, 2007. Social Security Numbers: Use is Widespread and Protection Could Be Improved. GAO-07-1023T. Washington, D.C.: June 21, 2007. Social Security Numbers: Federal Actions Could Further Decrease Availability in Public Records, though Other Vulnerabilities Remain. GAO-07-752. Washington, D.C.: June 15, 2007. Personal Information: Data Breaches Are Frequent, but Evidence of Resulting Identity Theft Is Limited; However, the Full Extent Is Unknown. GAO-07-737. Washington, D.C.: June 4, 2007. 
Privacy: Lessons Learned about Data Breach Notification. GAO-07-657. Washington, D.C.: April 30, 2007. Privacy: Domestic and Offshore Outsourcing of Personal Information in Medicare, Medicaid, and TRICARE. GAO-06-676. Washington, D.C.: September 5, 2006. Personal Information: Key Federal Privacy Laws Do Not Require Information Resellers to Safeguard All Sensitive Data. GAO-06-674. Washington, D.C.: June 26, 2006. Privacy: Preventing and Responding to Improper Disclosures of Personal Information. GAO-06-833T. Washington, D.C.: June 8, 2006. Social Security Numbers: Internet Resellers Provide Few Full SSNs, but Congress Should Consider Enacting Standards for Truncating SSNs. GAO-06-495. Washington, D.C.: May 17, 2006. Social Security Numbers: More Could Be Done to Protect SSNs. GAO-06-586T. Washington, D.C.: March 30, 2006. Social Security Numbers: Stronger Protections Needed When Contractors Have Access to SSNs. GAO-06-238. Washington, D.C.: January 23, 2006. Social Security Numbers: Federal and State Laws Restrict Use of SSNs, yet Gaps Remain. GAO-05-1016T. Washington, D.C.: September 15, 2005. Identity Theft: Some Outreach Efforts to Promote Awareness of New Consumer Rights Are Underway. GAO-05-710. Washington, D.C.: June 30, 2005. Information Security: Emerging Cybersecurity Issues Threaten Federal Information Systems. GAO-05-231. Washington, D.C.: May 13, 2005. Social Security Numbers: Governments Could Do More to Reduce Display in Public Records and on Identity Cards. GAO-05-59. Washington, D.C.: November 9, 2004. Social Security Numbers: Private Sector Entities Routinely Obtain and Use SSNs, and Laws Limit the Disclosure of This Information. GAO-04-11. Washington, D.C.: January 22, 2004. Social Security Numbers: Government Benefits from SSN Use but Could Provide Better Safeguards. GAO-02-352. Washington, D.C.: May 31, 2002. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The loss of personally identifiable information, such as an individual's Social Security number, name, and date of birth, can result in serious harm, including identity theft. Identity theft is a serious crime that affects millions of individuals each year; it occurs when such information is used without authorization to commit fraud or other crimes. While progress has been made in protecting personally identifiable information in the public and private sectors, challenges remain. GAO was asked to testify on how the loss of personally identifiable information contributes to identity theft. This testimony summarizes (1) the problem of identity theft; (2) steps taken at the federal, state, and local level to prevent potential identity theft; and (3) vulnerabilities that remain to protecting personally identifiable information, including in federal information systems. For this testimony, GAO relied primarily on information from prior reports and testimonies that address public and private sector use of personally identifiable information, as well as federal, state, and local efforts to protect the security of such information. GAO and agency inspectors general have made numerous recommendations to agencies to resolve prior significant information control deficiencies and information security program shortfalls. The effective implementation of these recommendations will continue to strengthen the security posture at these agencies. Identity theft is a serious problem because, among other things, a long period of time may pass before a victim becomes aware that the crime has taken place, allowing substantial harm to the victim's credit rating. Moreover, while some identity theft victims can resolve their problems quickly, others face substantial costs and inconvenience repairing damage to their credit records. 
Some individuals have lost job opportunities, been refused loans, or even been arrested for crimes they did not commit as a result of identity theft. Millions of people become victims of identity theft each year. The Federal Trade Commission (FTC) estimates that in 1 year, as many as 10 million people--or 4.6 percent of the U.S. adult population--discover that they are victims of some form of identity theft, translating into reported losses exceeding $50 billion. Several steps, both legislative and administrative, have been taken to combat identity theft at the federal, state, and local levels, although efforts to assist victims of the crime once it has occurred remain somewhat piecemeal. While there is no one law that regulates the overall use of personally identifiable information by all levels and branches of government, numerous federal laws place restrictions on public and private sector entities' use and disclosure of individuals' personal information in specific instances, including the use and disclosure of Social Security numbers (SSNs)--a key piece of information that is highly valuable to identity thieves. One intention of some of these laws is to prevent the misuse of personal information for purposes such as identity theft. Despite efforts to prevent identity theft, vulnerabilities remain and can be grouped into several areas, including the display and use of Social Security numbers, the availability of personal information through information resellers, security weaknesses in federal agency information systems, and data security breaches. GAO's work indicates that persistent weaknesses appear in five major categories of information system controls, including access controls, which ensure that only authorized agency personnel can read, alter, or delete data. 
As a result, federal systems and sensitive information are at increased risk of unauthorized access and disclosure, modification, or destruction, as well as inadvertent or deliberate disruption of system operations and services. GAO has reported that federal agencies continue to experience numerous security incidents that could leave sensitive personally identifiable information in federal records vulnerable to identity theft.
Background IDEA Part B authorizes federal grants to states to help them meet the excess costs of providing special education and related services to students with disabilities. There are a number of conditions that states must meet to be eligible for funding. In particular, states must agree to make available a free appropriate public education to all students with disabilities beginning at the age of 3 and possibly lasting to the 22nd birthday depending on state law or practice, in the least restrictive environment—meaning, to the maximum extent appropriate, these children are educated with other children who do not have disabilities. To accomplish this, states and LEAs must first identify, locate, and evaluate students who are eligible for special education services, regardless of the severity of their disability. For those deemed eligible, the state and LEA must ensure that each student has an individualized education program (IEP) describing their present levels of academic achievement and functional performance; measurable annual goals, including academic and functional goals; special education and related services; supplementary aids and services; and other supports to enable the child to advance appropriately toward attaining those goals. The IEP is developed by a team of teachers, parents, school district representatives, and other educational professionals. This team must meet to develop the initial IEP within 30 days of determining that a student needs special education and related services, and must review the IEP periodically, but not less than annually, to determine whether the annual goals have been achieved, and revise the IEP, as appropriate. To qualify for IDEA funding, states must also provide certain procedural safeguards to children and their parents. 
For example, these safeguards require that parents be provided with prior written notice a reasonable time before the LEA proposes or refuses to initiate or change the identification, evaluation, or educational placement of the child, or the provision of a free appropriate public education to the child. IDEA also affords parents the right to request a due process hearing, as well as the right to bring a civil action in a state or federal district court. States must ensure that any state rules, regulations, and policies related to IDEA conform to the purposes of IDEA, and are required to identify in writing to Education and their LEAs any state-imposed requirement that is not required by IDEA. The 2004 reauthorization of IDEA included several provisions that were intended to help reduce administrative tasks and paperwork associated with documenting compliance with the law. Specifically, the law:

Created two pilot programs: the Paperwork Waiver Demonstration Program (Paperwork Waiver Program), and the Multi-Year IEP Demonstration Program (Multi-Year IEP Program). Under the Paperwork Waiver Program, Education could waive statutory or regulatory requirements that states must comply with in order to receive funding under Part B for up to 4 years, for up to 15 states. The Multi-Year IEP Program authorized Education to allow up to 15 states to give parents and LEAs the option of developing a comprehensive IEP covering up to 3 years, rather than developing yearly IEPs, as currently required.

Required that Education publish and disseminate model forms, including forms for documenting IEPs, providing prior written notice, and providing procedural safeguards notice.

Introduced various administrative changes, including raising the amount of federal grant funds that states may set aside for administration and other state-level activities, and permitting states to use these funds for paperwork reduction, among other things; eliminating the requirement for benchmarks and short-term objectives in IEPs; and requiring that states identify in writing to LEAs and Education any state requirements that are not mandated by IDEA, and that they minimize requirements that LEAs and schools are subject to under IDEA.

IDEA requires states to have a mechanism for interagency coordination between the state educational agency (SEA) and any other state agencies that provide or pay for any services that are considered special education or related services. For example, since 1988, costs of some related services provided to low-income children under IDEA may be covered by Medicaid. In 1999, GAO reported that Medicaid documentation requirements are more burdensome than those of IDEA, leading states to cite this as an area of concern in coordinating Medicaid and IDEA services. Following the 2004 reauthorization, there has been little public debate on the issue of paperwork and administrative burdens associated with IDEA. In 2012, GAO reported information about the burden on states and school districts associated with federal education regulations and identified three IDEA requirements as being among the more burdensome: reporting IDEA performance indicators, IEP processing, and transitioning students into school-age programs from infant and toddler programs. We also found that officials of states and school districts reported they generally did not collect information about the costs to comply with federal requirements, noting that states and school districts are not required to report compliance costs, the data are not useful to them, and collecting such data would in itself be burdensome.
We recommended Education take additional steps to address duplicative reporting and data collection efforts across major programs. In addition, we recommended Education identify unnecessarily burdensome statutory requirements and develop legislative proposals to address these burdens, acknowledging that the agency’s ability to address burdens associated with some provisions of IDEA might be limited without statutory changes. Education agreed that it should take additional steps to address duplicative reporting and data collection efforts that are not statutorily required, and believed additional efficiencies could be achieved in its data collections. However, Education noted that some data elements are required under various program statutes and said it would work with Congress in the next reauthorization of IDEA to address duplication or the appearance of duplication resulting from those requirements. Additionally, GAO has performed extensive work on the Paperwork Reduction Act (PRA), including the law’s effectiveness and approaches to reducing burden on the public. Under PRA, federal agencies are generally required to submit any proposed information collections to the Office of Management and Budget (OMB) for approval, including an estimate of the burden the information collections impose on the public. This submission certifies that the information collections meet the PRA standards that, among others, include taking steps to ensure the collection avoids duplication, minimizes burden on the public, and is necessary for agency performance. In response to PRA requirements, Education submits estimates of time needed to collect and report some IDEA-related information, including state applications for IDEA funding, the State Performance Plan/Annual Performance Report, and SEA and LEA recordkeeping requirements.
In our past work, we have noted potential discrepancies between Education’s estimates and reported burden as estimated by institutions of higher learning, making it difficult to know the actual burden imposed by these data collections. Education Has Implemented Provisions of IDEA Designed to Reduce Paperwork, but States Have Been Reluctant to Use Them States Saw Little Benefit in Participating in Pilot Programs Education took several steps to design and implement two pilot programs, the Paperwork Waiver Program and Multi-Year IEP Program. To promote these pilot programs, Education conducted a national outreach tour to discuss the changes in the 2004 IDEA reauthorization and provide information about the pilot programs. In December 2005, Education also published notices of proposed requirements and selection criteria for both programs, and requested public comments by March 6, 2006. Education published the final requirements and selection criteria in July 2007, and made applications available to states in October 2007. Additionally, Education officials noted they held a teleconference for the state directors of special education describing the process for applying to participate in the pilot programs, which was also publicized through email and supported by the National Association of State Directors of Special Education, Inc. (NASDSE). Despite Education’s efforts, no state applied to participate in either of the pilot programs. NASDSE officials told us that the application requirements were much too resource-intensive for the potential value they would bring, and implementation of either pilot program would most likely require additional staff that federal funding would not cover. Several states wrote letters to Education explaining their reasons for not applying for and implementing the Paperwork Waiver Program in particular, noting that the program would require more paperwork and staff, but provide little in the way of additional federal funds. 
For example, New York’s letter listed as key reasons for not participating the extensive requirements for participation, limited funding for the pilots, and the staff commitment necessary for both development of the proposals and ongoing oversight of the pilot projects. In a similar letter, Rhode Island noted that implementing the Paperwork Waiver Program would likely result in more paperwork—not less—as well as taking more time from staff. Likewise, Wisconsin and Missouri expressed concerns about the number of requirements and constraints, coupled with inadequate funding. Education officials said that the amount of funding that was offered to help states implement the Paperwork Waiver Program, $25,000 per state, was based on the amount of available funding at the time, and had taken into account the need to establish a sound evaluation design, as well as Education’s commitment to providing technical assistance, as needed. States also might have been reluctant to participate in the Paperwork Waiver Program because Education cannot waive certain provisions states find most burdensome. For example, Education officials said that states and LEAs were most interested in Education waiving the requirements to notify parents of procedural safeguards and to provide parents prior written notice of certain actions taken with regard to their child’s education, both of which are procedural safeguards that Education is prohibited from waiving under IDEA. Furthermore, the National Association of Secondary School Principals (NASSP) told us that none of their members were in favor of the paperwork waivers, in part because of the perceived risk of exposing local districts to potential litigation if they were to eliminate any of the requirements that parents have come to expect. Stakeholders cited similar reasons for not participating in the Multi-Year IEP Program. 
Representatives from NASDSE and the Council for Exceptional Children (CEC) cited the costs associated with applying for the program absent sufficient additional federal funding. In its response to the proposed requirements for the pilot programs, the Statewide Parent Advocacy Network, Inc. commented that enabling states to participate in a multi-year IEP demonstration program would have primarily negative implications for families of children with disabilities. Some States and Local Districts Used Federal Model Forms Primarily to Help Develop Their Own Forms Although some states have adopted some of the model forms Education developed pursuant to the 2004 reauthorization’s attempt to reduce paperwork, they have used other model forms primarily as a reference tool to develop their own state forms. In a 2011 NASDSE survey of state directors of education, 18 of the 39 who responded said they had adopted one or more of the forms. Of those who did not adopt any of the forms, 17 said they had used them to help guide revision of their own forms, and only 3 indicated they had not used the forms at all. Education officials and other stakeholders offered several reasons why some states have not adopted the model forms as written and instead used them as reference tools. Education officials said that some states find the model IEP form, for example, lacks the content necessary to meet state and local requirements. Several stakeholders agreed that the model IEP form does not cover all the information required by states, so even if states used a federal model form as a starting point, the state forms could all be different because the state requirements vary so widely. For example, officials from one stakeholder organization told us its state has its own model form for prior written notice because it includes additional procedural safeguards specific to that state. 
A different stakeholder noted that it would be a lot of work for states to switch from the state forms with which they are familiar to Education’s model forms; another stakeholder said that local school districts may also tailor the forms for local use. On the other hand, one stakeholder noted that the forms provided helpful models for states and districts, and said that further standardization of these forms would be particularly useful for students who move across districts and states because currently they must be reevaluated using different forms, which is resource-intensive and frustrating. The states we visited used some of Education’s forms and not others. For instance, Arkansas has generally adopted Education’s model form for notice of procedural safeguards in its entirety, while New York has adopted most of this form, but has added state-specific information. Both Arkansas and New York have included most of Education’s model form on prior written notice, but with some modifications. Neither state has adopted the model IEP form. One Arkansas official suggested that the model IEP form does not adequately instruct those completing it to include details that could protect school districts from potential parental litigation. In contrast, the official said, the state form specifically calls for those details, which helps staff complete the form in keeping with Arkansas’ direction. New York officials told us they do not use the IEP form because they must include other items in their form to ensure compliance with both federal and state requirements. Stakeholders Had Mixed Views on the Effects of Other Paperwork Reduction Provisions Views on the effects of other IDEA provisions related to paperwork reductions are mixed, based on our conversations with focus group participants, representatives of education stakeholder organizations, and state and local officials in Arkansas and New York. 
For example, several focus group participants and stakeholders differed in their views of a provision allowing states to use set-aside funds for certain authorized state-level administrative activities, including paperwork reduction activities. Several participants in our state administrator focus groups reported that they have used the flexibility of this provision to help fund automated systems for preparing IEPs and for assisting in data collection and reporting. However, officials from NASDSE said that some states have restrictions on the amount of funds they can use for state administration, making the provision irrelevant to them for paperwork reduction purposes. Stakeholders said that the effect of the revised IEP provision eliminating benchmarks and short-term objectives was mostly negligible. For instance, several stakeholders said that any potential reductions in paperwork were offset by what they described as a new statutory requirement that IEPs include a statement of measurable annual goals, including academic and functional goals. Although Education officials characterized that statutory language as clarifying a previously existing requirement rather than creating a new requirement, several stakeholders said the provision created additional work for those states and local districts that revised their local IEP forms to explicitly include the annual goal information. The 2004 reauthorization also required states to identify any state-imposed special education rules, regulations, and policies not required by IDEA or federal regulations, and minimize the number of rules, regulations, and policies that districts and schools are subject to under IDEA, but it is not clear what effect this provision has had. Education facilitates compliance with this provision by directing states to list their state-imposed rules, regulations, and policies on their annual applications for federal IDEA funding.
However, in our review of the information that states submit, we found it varies in detail and format. Education does not verify the accuracy of the information states provide, and the provision does not require Education to do so, making it difficult to determine the prevalence of state-imposed requirements based on state responses alone. Stakeholders Said Additional State and Local Requirements Contribute to Burden, but Differed on the Burdens and Benefits of Federal Requirements States and Localities Described Additional Requirements that Contribute to Administrative and Paperwork Burden State and local officials with whom we spoke widely agreed that nonfederal IDEA-related requirements were burdensome. For example, participants across 6 of our 9 focus groups—3 with local administrators and 3 with educators—said that additional requirements imposed by states and localities contribute to the administrative and paperwork burden beyond that imposed by federal requirements. Similarly, based on our observations and interviews with state and local officials, the number of additional requirements can be considerable. New York listed over 200 state-imposed requirements in 2014. For example, when a student is referred for special education, the school must provide a copy of the state’s 46-page handbook for parents of students with disabilities. If a student is at risk of being placed in a residential facility, the school must provide the parent with information about community support services, including how to obtain an assessment of the family’s service needs, and placement alternatives. New York officials reported that they attempted to identify state administrative requirements that did not add value to the special education process but did not find many items to remove. Further, although Arkansas listed no state-imposed requirements on its federal IDEA funding application, during our visit we observed several required forms imposed by the state. 
States are not prohibited under IDEA from using their own forms, and Arkansas state officials told us they did not list the forms because they did not believe they were doing anything more than what was required under IDEA. This example, however, highlights the difficulty in determining what state-imposed requirements should be reported. In both local school districts we visited in New York and Arkansas, IEPs are electronic, and each district contracts with a vendor to develop and maintain the software used to guide IEP preparation, tailored to local preferences and needs. However, in the Rochester, N.Y. district, the electronic IEP includes state-required data elements in addition to those required by federal law. Further, the district has added at least one additional requirement that teachers include the student’s latest report card results—even though the information is available in the student’s official school file. Although this approach is typical of many school districts, these systems are tailored to local requirements and the IEPs themselves are formatted differently, making it difficult to transfer students’ records when they move from one district to another within each state, let alone across states with differing laws and administrative requirements. Stakeholders said that the additional requirements can make it difficult to isolate the contribution of IDEA requirements to administrative burdens. One official commented that it is nearly impossible to isolate the contribution of IDEA requirements to administrative burdens at the state and local levels because there are so many other requirements placed on states and local districts related to education by other sources. In a 2015 issue paper, the American Speech-Language-Hearing Association noted that although federal statutes and regulations generate paperwork and administrative burdens for their members, all levels of government contribute to the total burden shouldered by their members. 
In addition to IDEA requirements, there are those mandated under the Elementary and Secondary Education Act of 1965, as amended, Medicaid, and various smaller programs, as well as those added by state law and local school districts, whether in anticipation of or in response to litigation and court decisions, all of which further exacerbate the problem. Stakeholders Differed on the Specific Burdens and Benefits of IDEA Requirements While many stakeholders agreed that special education requirements contribute to administrative and paperwork burden, they differed in their views on the burdens and benefits of specific IDEA requirements. Participants across educator and local administrator focus groups cited more tasks as being particularly burdensome than did those in the state administrator groups. Common areas of concern for participants across all 3 educator focus groups and all 3 local administrator focus groups included preparing IEP documents, focusing on compliance, using technology, and identifying students with special needs or determining eligibility. In focus group discussions with state administrators, the only specific administrative task they reported as particularly burdensome was preparing state performance plans and annual performance reports for Education. (See table 1.) Our focus group results are consistent with previous findings from a GAO review of federal education requirements in which education stakeholders identified two IDEA requirements—processing IEPs and collecting and reporting performance data to Education—as being among the more burdensome for states and districts. More specifically, stakeholders said in our prior work that IEP processing was complicated, time and paperwork intensive, and vague. They also said that IDEA indicators—performance measures that Education uses to monitor state compliance with IDEA—were complicated, time, resource, and paperwork intensive, and duplicative.
Similarly, participants in our focus groups for this review identified preparing IEPs and reporting annually to Education as being particularly burdensome, and described similar types of burdens to those previously identified when explaining why these tasks are particularly burdensome. (See table 2.) Aside from their views on IDEA requirements they regard as particularly burdensome, stakeholders across all 9 focus groups acknowledged that administrative tasks and paperwork play an important role in helping ensure accountability and transparency in the special education process, among other benefits. (See table 3.) Although the parents that we spoke with during our site visit in New York did not express any positive views of the administrative requirements, parents in Arkansas said the IEP is useful in guiding discussions with school district staff, and serves as a record of what the district is doing for their child. Based on our discussions with groups that represent the interests of parents with children who receive special education services, evaluation reports, which provide information about a student’s limitations and strengths, can facilitate individualization of instruction by providing a baseline for performance that can be used to measure student progress. Officials from these organizations also noted that administrative requirements that safeguard procedural rights, such as prior written notice, benefit parents by helping them understand how to help their children receive special education services. Additionally, based on our discussions with parents and representatives of parent organizations, we found that administrative requirements in special education can be helpful to parents, but only to the extent that the information generated is accessible, and requirements are enforced. 
In particular, the language used to document students’ current levels of performance poses a challenge for many parents, who sometimes find the language complicated and confusing, making it difficult to understand important information about their children. In another example, while parents noted that certain administrative requirements, such as IEPs and parent meetings, can be useful tools to share information and promote collaboration among those involved in a child’s education, they did not find them helpful when used incorrectly. For example, parents expressed some frustration with IEPs they felt were not being followed, as well as IEP meetings they felt were sometimes used to justify a course of action rather than to determine the best course of action for their child. Available Research Shows Administrative and Paperwork Tasks Take Time Away from Teaching and Training Available research supports what stakeholders told us about special education administrative burden. Specifically: The 2002 Study of Personnel Needs in Special Education (SPeNSE), commissioned by Education, found that elementary and secondary special education teachers reported spending an average of 1 hour per day completing forms and paperwork—as much time as they spent preparing for lessons. A 2008 time-use study found that special education teachers in five Texas school districts spent an average of almost 2 hours per day (1 hour and 51 minutes) on administrative tasks—more time than they or their principals thought they were spending. A 2012 study of how preventive services are implemented by special education teachers found that special education teachers across seven elementary schools in Kansas spent about 1 hour per day, on average, doing managerial tasks that involved paperwork. 
In an American Speech-Language-Hearing Association biannual survey of school-based speech-language pathologists and educational audiologists, respondents listed paperwork as their top challenge in each survey from 2004 through 2014. Consistent with this research, participants across our 3 educator focus groups estimated that they spend 2 to 3 hours per day on administrative tasks, or roughly 20 to 35 percent of their time, and said that these tasks take away time needed to complete other required tasks. In particular, they said they do not have enough time to complete paperwork during their regular work hours, which means they complete it on their own time. Participants across the educator focus groups also reported that paperwork and administrative tasks take time away from the classroom and other important tasks, such as academic planning and performing assessments. Although local administrators would reasonably be expected to spend more time on administrative tasks due to the nature of their jobs, they still shared similar concerns regarding administrative and paperwork burdens crowding out important, non-administrative responsibilities, such as providing training or observing classrooms. Education and States Stated that They Have Adopted Computer Technology and Other Steps that Reduce Administrative Burdens, but These Efforts Have Limitations Computer Technology and Data Systems Have Reportedly Helped Ease Some Administrative Burdens Related to Special Education Requirements Some participants in all of our focus groups said that computer technology and the availability of electronic data sets have reduced administrative burdens associated with IDEA. In particular, some participants in focus groups stated that they found electronic IEPs to be helpful in ways such as making data input easier, reducing the chance for data entry errors, and pulling data together from different sources.
According to several SEA officials in our focus groups, the linkage of IEPs with automated data systems assists SEAs in their monitoring and compliance activities with local school districts. Education has developed a data system called EDFacts, which it believes will help reduce administrative burden. According to Education, EDFacts is a system designed to centralize data provided by SEAs and LEAs and to streamline data processes at the federal, state, district, and school levels. SEAs transmit data to Education via the EDFacts Submission System, an electronic system designed to help SEAs transmit data in a timely and efficient manner through the use of a file submission application. It includes data required by IDEA and comprises six data collections. To further ease the data submission process, Education has developed a web-based Data Submission Organizer tool that provides information about how and when to submit IDEA and other K-12 data. Members of our focus groups also said that GRADS360, an IDEA-specific data system unveiled in October 2014 by the Office of Special Education Programs (OSEP), has effectively reduced the reporting burdens associated with IDEA. GRADS360 is the electronic platform for states to submit data, which is then used to create states’ annual IDEA Part B and Part C State Performance Plan/Annual Performance Report (SPP/APR). The SPP/APR evaluates a state’s efforts to implement the requirements and purposes of IDEA, and describes how a state will improve its implementation. Education’s GRADS360 website contains, among other things, profiles on states’ previous data submissions, tools on how to submit data to the system, and a calendar that specifies data submission deadlines. Some focus group participants said GRADS360 has reduced the burden associated with completing the SPPs and APRs. One participant asserted that GRADS360 reduced the administrative burdens of producing these reports by 50 percent.
In addition to EDFacts and GRADS360, Education’s OSEP has funded four technical assistance centers intended to help states produce and submit high quality IDEA-related data. The centers focus on (1) IDEA’s data collecting and reporting requirements (The IDEA Data Center), (2) the development or enhancement of longitudinal data systems (The Center for IDEA Early Childhood Data Systems for children from birth through age 5), (3) support for combining IDEA data with the Education-funded State Longitudinal Data System (Center for the Integration of IDEA Data), and (4) assistance to SEAs on their federal special education fiscal data collection and reporting obligations (Center for IDEA Fiscal Reporting). Some focus group participants said SEAs are implementing computer systems that they believe help ease administrative burden. Most of these efforts involve computer data and system consolidation. For example, several participants said their states have activated systems in which electronic IEP data and other student data can be integrated with the state’s student data management system. Participants cited benefits to this integration including making it easier for LEAs to upload participants’ data to the state system, and enabling the state to have data related to IDEA requirements. One focus group participant said her state has a single computer system so that the SEA and LEAs within the state can use the same system. Despite Benefits, Existing Technology Reportedly Has Limitations in Addressing Burdens Some participants in our focus groups said the computer systems used in their states need to be improved to further reduce burdens associated with IDEA requirements. Several participants said existing computer systems are not well integrated and thus do not exchange data across systems. For example, two LEA participants said they must each work with separate computer systems or databases that do not allow automatic data transfers.
In one instance, the participant noted having to pull down data from an IEP system and then upload it to another system. In the other case, the participant said data from each database in the system had to be uploaded separately because none of the data had been collated across databases. Another participant noted having to use five different federal log-ons and yet still could not find the information sought. One participant also said that reporting the same information multiple times across different computer systems (federal and state) was a burden. Other focus group members told us that technical problems can make using automated systems difficult. Some of these technical problems included system crashes, losing data when attempting to save the data into the system, major technical “glitches,” and a lack of computer system capacity. In addition, one educator cited the burden of having to learn a new computer system, only to have the system replaced with another shortly thereafter. Stakeholders Also Reported Using Non-technology Strategies that Help Ease IDEA’s Administrative Burdens Focus group participants, stakeholders in New York and Arkansas, and other special education officials said that adopting certain types of non-computer-related practices had reduced the burdens faced by SEAs, LEAs, and educators; these practices generally fell into three categories: administrative support, IEP management, and communication strategies. Administrative support. One practice was to assign one or more individuals to perform or monitor administrative duties related to IDEA. According to state special education officials from our site visits, many of the larger school districts hire due process clerks to handle logistics. These are teachers who may continue to teach part time or take on the special education administrative role full time. This frees special education teachers from some of the required paperwork.
Some participants from our focus groups said that having administrative clerks reduced the time and burdens associated with administrative tasks. One educator from a focus group said hiring a paraprofessional to set up all IEP meetings, contact parents, and send out meeting notices proved helpful in reducing administrative burden. Another practice favored by one focus group member was establishing dedicated time periods in which a teacher would exclusively teach for 3 weeks and then complete administrative tasks for 1 week. IEP management. Some focus group participants said that using amended or draft IEPs (rather than creating completely new IEPs) can reduce burden. According to the participants, amended IEPs have advantages such as allowing an IEP team to make minor IEP changes without having to call a meeting or redo the entire IEP, making it easier to update or change goals, and reducing the time and impact on staff. One participant said using a draft IEP provides information so everyone attending an IEP meeting is better prepared. Communication strategies. According to participants in several focus groups, practices that foster communication among educators and specialists who work with special education students can reduce burden. For example, one participant said creating a triad relationship with a student’s general education teacher, special education teacher, and case manager can be very important and beneficial to the student. Another participant noted the importance of everyone who works with a special education student collaborating to make a plan so that the student can make progress. In addition, one participant stated that, to reduce burdensome redundancies created by the “dual-track” paperwork systems for special education and general education, those working with a student should have a conversation and reach agreement on whether special education or general education is in that student’s best interests.
Agency Comments We provided a draft of this report to the Department of Education for review and comment. In its written comments, reproduced in appendix II, Education neither agreed nor disagreed with our findings. Education also provided technical comments that were incorporated, as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to examine (1) what Education and states have done to implement selected provisions of the law to help minimize the burden associated with administrative and paperwork requirements under the Individuals with Disabilities Education Act (IDEA), (2) stakeholder views about IDEA’s administrative and paperwork requirements, and (3) the steps Education and others have taken to minimize IDEA-related burden. To address our first objective, we reviewed relevant laws, regulations, and published studies. We interviewed officials from Education and organizations representing education stakeholder groups, including special education administrators at the state and district levels, parents of students with disabilities, and educators. 
Some of these groups included the National Association of State Directors of Special Education, Inc.; American Speech-Language-Hearing Association; Council for Exceptional Children; Council of Administrators of Special Education; National Education Association; National Association of Secondary School Principals; and PACER Center. To identify specific provisions of the 2004 IDEA reauthorization intended to reduce the burden associated with administrative and paperwork requirements, we reviewed a Congressional Research Service analysis of the law as well as the results of our literature search. To address our second objective, we conducted a series of 9 focus group discussions—3 with state administrators from 18 states, 3 with local administrators from 14 states, and 3 with educators from 15 states. Overall, participants came from 37 separate states, and though their comments are not generalizable, they represented a broad range of experiences from across all regions of the country and all types of school districts. To identify potential participants for our focus groups, which consisted of at least six participants each, we worked with the National Association of State Directors of Special Education, Inc., Council of Administrators of Special Education, National Education Association, and other professional organizations representing education stakeholder groups, including educators and special education administrators at the state and district levels. The organizations we contacted that represent parents of students with disabilities included the Statewide Parent Advocacy Network, Inc. and the Council of Parent Attorneys and Advocates. In addition to contacting these organizations, we posted information in a National Association of Special Education Teachers’ newsletter to identify potential focus group participants. 
In order to maximize the diversity of our sample of participants, we extended invitations to participants based on the state in which they were employed, type of position, years of experience, and type of school district (rural or urban). We documented these characteristics via a participant questionnaire. Using a combination of these methods for identifying potential participants ensured that all of the participants in one stratum of the focus groups did not come from a single source, mitigating potential bias toward a specific organization. Once the list of potential participants was compiled, we emailed the potential participants to confirm that they currently worked or had worked previously in the relevant organization in the area of special education, and to inquire if they wished to participate. In addition to gaining the perceptions of special education administrators and educators on IDEA administrative and reporting requirements, we also obtained information on how another key stakeholder group—parents of students with disabilities—perceives these requirements. Although parents of such students do not directly complete IDEA paperwork, we gathered information on how they perceive IDEA administrative and paperwork requirements from our discussions with organizations representing them and from the results of our literature search on the subject. We analyzed the content of our 9 focus group discussions to identify similarities and differences within and across these groups regarding time spent on IDEA administrative tasks, perceived benefits and burdens associated with administrative requirements, and potential solutions and strategies about how to minimize time spent on administrative tasks.
To achieve consensus on this identification, two analysts independently reviewed focus group transcripts and categorized relevant discussions across four different topics: time spent on paperwork, perceived burdens, perceived benefits of administrative tasks, and model practices and potential solutions. For burdens specifically, we further categorized the reasons why focus group participants considered administrative and paperwork tasks burdensome, using categories we previously developed. These reasons are defined as follows:
Complicated: Requirements change often, include varying or conflicting definitions, involve multiple steps, or have processes, deadlines, or rules that make compliance difficult or that result in unintended consequences.
Time-intensive: Compliance is time-consuming.
Paperwork-intensive: Documentation is excessive.
Resource-intensive: Compliance is costly or requires a substantial amount of technical support.
Duplicative: Requirements from different agencies or offices within the same agency were poorly coordinated or requested redundant information (similar or exact).
Vague: States or school districts lacked knowledge or guidance related to the requirement, or certain processes were unknown or unclear.
To illustrate the administrative processes in different school districts in different areas of the country, and to identify how differences in state and local requirements may contribute to differences in burden, along with common concerns and suggestions for addressing them, we completed site visits in Clinton, Arkansas, and Rochester, New York. We selected these two school districts to highlight different experiences in how urban and rural school districts manage administrative requirements, and how parents in these districts perceive the requirements.
We first selected Arkansas and New York to provide diverse geographic locations, number of state-imposed special education requirements listed on the state application for IDEA funding, and incidence of dispute resolutions as reported by a GAO special education report. Specifically, New York, in the northeast region, listed over 200 state-imposed special education requirements on its IDEA funding application, and also had a high incidence of dispute resolutions. Arkansas, in the southeast region, listed no state-imposed special education requirements on its IDEA funding application, and was not identified in the GAO special education report on dispute resolution. Within each state, we chose the LEAs and schools to achieve diversity across urban and rural districts and primary and secondary schools, and that were large enough to have some experience with special education needs. These site visits provided opportunities to understand and document local efforts to manage administrative requirements and to speak with parents about how they perceive special education procedures. The visits also allowed us to explore how differences in state and local requirements may contribute to differences in perceptions of the relative benefits and challenges associated with meeting key federal requirements. At each site, we interviewed the relevant state and local special education administrators, and those who work directly with special education students. We also obtained state and local policies and procedures, enabling us to develop a narrative about the types of paperwork various individuals are responsible for and any additional requirements imposed by the state or LEA. At each location, we also met with parents whose children were receiving special education services. Parents were informed of these meetings by the local districts, and attendance was voluntary. We compared these narratives to understand various differences across the locations.
To address our third objective, we spoke with Education officials about steps the agency had taken to reduce the associated burden of administrative and reporting requirements under IDEA, and reviewed the agency’s Paperwork Reduction Act (PRA) burden estimates for proposed amendments to data collections related to IDEA implementation. We also spoke with state and local officials in New York and Arkansas about steps taken to minimize burden, and included a question on this subject in our focus group discussions. Finally, we gathered information on efforts to minimize burden from our discussions with officials from the previously listed organizations representing education stakeholders and organizations representing parents of students with disabilities, and from results of our literature search. We selected these organizations to provide a range of views on the benefits and burdens of IDEA requirements. Appendix II: Comments from the U.S. Department of Education Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Jaqueline M. Nowicki, (617) 788-0580 or [email protected]. Staff Acknowledgments In addition to the contact named above, Mary Crenshaw, Assistant Director; Regina Santucci, Analyst-in-Charge; Betty Ward-Zukerman, Justin Riordan, Tyne McGee, Daren Sweeney, Andrew Nelson, Susanna Clark, Laurel Beedon, and Sheranda Campbell made significant contributions to this report. Also contributing to this report were Walter Vance, James Rebbe, Charlie Willson, Tim Bober, Carolyn Yocom, David Forgosh, and Nyree Ryder Tee.
When IDEA was reauthorized in 2004, it included provisions to reduce administrative and paperwork requirements to address concerns about burden. GAO was asked to review federal efforts to reduce burden related to meeting IDEA requirements for educating children with disabilities. Congress provided about $11.5 billion in grants in fiscal year 2015 under IDEA Part B to help states and local districts defray the costs of special education services for nearly 6.6 million students ages 3 to 21. This report examines (1) what Education and states have done to implement selected IDEA provisions intended to reduce burden, (2) stakeholder views about IDEA's administrative and paperwork requirements, and (3) steps that Education and others have taken to minimize IDEA-related burden. GAO reviewed relevant federal laws and regulations; held nongeneralizable focus groups with state and local administrators and educators from 37 states; visited schools in Clinton, Arkansas and Rochester, New York; and interviewed officials from Education and stakeholder organizations. GAO selected focus group participants, site visit locations, and organizations to highlight a range of demographic and geographic characteristics and obtain perspectives from a variety of stakeholders. In response to the 2004 reauthorization of the Individuals with Disabilities Education Act (IDEA)—the primary federal law governing education of children with disabilities—the Department of Education (Education) attempted to reduce administrative burden by creating pilot programs and publishing model paperwork forms, but states have used these tools sparingly. Specifically, Education created pilot programs allowing states to use multi-year rather than annual individualized education programs (IEP) to describe services to meet each student's needs, and to waive certain federal paperwork requirements. 
However, no state applied for these pilots, citing a perceived lack of benefit, and inadequate funding to implement and evaluate the pilots. As required by law, Education also published templates, known as model forms, to help states streamline the process of preparing IEPs and comply with parent notice requirements in IDEA. Although some states and school districts adopted at least one of these model forms, they have used others primarily as a starting point to develop their own forms. State and district officials told GAO this allowed them to meet federal as well as state and local requirements, and provided better protection against potential litigation. Stakeholders were mixed in their views about the effects of other provisions intended to reduce administrative burden. For example, several stakeholders viewed a provision allowing states to use more grant funds for paperwork reduction activities as helpful; others said the effect of a provision eliminating benchmarks and short-term objectives for IEPs was largely negligible. Stakeholders across 9 focus groups—3 each with state administrators, local administrators, and educators—said that state-imposed requirements contribute to the administrative and paperwork burden, but their views on the burdens and benefits of federal IDEA requirements varied somewhat. For example, in focus groups, educators expressed concerns about monitoring and documenting student progress, while local and state administrators expressed concerns, respectively, about IEP implementation and federal reporting requirements. Consistent with prior research, many educators in these focus groups estimated they spend roughly one to two hours daily on administrative tasks, and expressed concern about this taking time away from the classroom. Despite perceived burdens, stakeholders widely acknowledged that IDEA's requirements play an important role in accountability. 
For example, educators said the requirements provide information about student strengths and limitations that help them assist the student, while state administrators said requirements aid planning and program development. Education, states, and school districts have reduced administrative burdens by adopting new technology and using certain resource strategies. For example, several state administrators said Education's electronic data submission system has made it easier to complete federally-required state performance plans. During fall 2014, Education launched a new electronic reporting system intended to, among other things, consolidate data collections and ease data entry. Some schools and districts have also adopted resource strategies, such as hiring data clerks to reduce administrative burdens, but these strategies can be costly.
Background The Clean Water Act prohibits the discharge of pollutants, including dredged or fill material, into “navigable waters,” defined in the act as the “waters of the United States,” without a permit. The act’s objective is to restore and maintain the chemical, physical, and biological integrity of the nation’s waters. Congress’ intent in passing the act was to establish an all-encompassing program of water pollution regulation. The act contains several programs designed to protect waters of the United States, including section 303, which calls for development of water quality standards for waters of the United States; section 311, which establishes a program for preventing, preparing for, and responding to oil spills that occur in waters of the United States; section 401, which establishes state water quality certification of federally issued permits that may result in a discharge to waters of the United States; and section 402, which establishes a permitting system to regulate point source discharges of pollutants (other than dredged and fill material) into waters of the United States. Section 404 of the Clean Water Act generally prohibits the discharge of dredged or fill material into waters of the United States without a permit from the Corps. Corps and EPA regulations under the section 404 program define “waters of the United States” for which a permit must be obtained to include, among other things, (1) interstate waters; (2) waters which are or could be used in interstate commerce; (3) waters such as wetlands, the use or degradation of which could affect interstate commerce; (4) tributaries of the waters identified above; and (5) wetlands adjacent to these waters. As such, this program is the nation’s primary wetland protection program. In addition to the federal regulation of wetlands, some state and local governments have developed wetland protection programs.
The Corps administers the permitting responsibilities of the section 404 program, while EPA in conjunction with the Corps establishes the substantive environmental protection standards that permit applicants must meet. EPA also has final administrative responsibility for interpreting the term “waters of the United States,” a term that governs the scope of many other programs that EPA administers under the Clean Water Act. Day-to-day authority for administering the permitting program rests with the 38 Corps district offices, whereas Corps division and headquarters offices exercise policy oversight (see fig. 1). Under section 404(q), EPA and other federal agencies, such as the Department of the Interior’s Fish and Wildlife Service, can request that a permit application receive a higher level of review within the Department of the Army. Under a memorandum of agreement between EPA and the Corps, EPA may also initiate a “special case,” in which EPA determines the scope of jurisdiction for a particular site or issue for section 404 purposes. EPA also has “veto” authority over section 404 permitting decisions under section 404(c). However, EPA has rarely used its 404(c) authority to intervene in or overrule Corps permit decisions. EPA also exercises independent enforcement authority. Wetlands are areas that are inundated or saturated with surface or ground water at a frequency and duration sufficient to support vegetation adapted for life in saturated soil conditions. Wetlands include swamps, marshes, bogs, and similar areas. They are characterized by three factors: (1) frequent or prolonged presence of water at or near the soil surface, (2) hydric soils that form under flooded or saturated conditions, and (3) plants that are adapted to live in these types of soils. 
Wetlands play valuable ecological roles by reducing flood risks, recharging water supplies, improving water quality, and providing habitats for fish, aquatic birds, and other plants and animals, including a number of endangered species. As the Supreme Court has recognized in upholding the Corps’ authority under the Clean Water Act to regulate wetlands adjacent to waters of the United States, “[t]he regulation of activities that cause water pollution cannot rely on . . . artificial lines . . . but must focus on all waters that together form the entire aquatic system.” Further, water moves in hydrologic cycles and pollution of one part of an aquatic system can affect other waters within that aquatic system. The regulations also extend federal jurisdiction under section 404 to tributaries. The federal government has argued in court that it must regulate tributary waters well beyond the point at which they are navigable because any pollutant or fill material that degrades water quality in a tributary has the potential to move downstream and degrade the quality of navigable waters themselves. Similarly, according to the Corps, drainage ditches constructed in uplands that connect two waters of the United States may themselves be jurisdictional. The first step in the regulatory process is a jurisdictional determination, in which the Corps determines whether a water or wetland is a “water of the United States.” In general, Corps staff conduct jurisdictional determinations by considering a range of factors, and they often view each factor’s importance within the context of the actual site of a proposed project. While many jurisdictional determinations are simple to perform, some can be complex and require considerable effort. For example, a relatively simple jurisdictional determination might involve a proposed project for the placement of a pier on the Mississippi River.
In this case, Corps staff may only consult a map to determine that the activity falls within the Corps’ jurisdiction. In contrast, a more complex jurisdictional determination might arise when a property owner wants to fill in multiple wetlands to build a parking lot. This kind of jurisdictional determination would likely require additional time and resources because Corps staff might need to consult a variety of maps and aerial photographs and then visit the site. Once on site, Corps staff would verify the exact locations of the wetlands. If the Corps determines that a water or wetland is jurisdictional, a permit applicant then has the option of filing an administrative appeal challenging this determination and could subsequently pursue the matter in court. If a water or wetland is found to be jurisdictional, the property owner would take the next step in the process and apply for a section 404 permit from the Corps. The Corps bases its decision to issue a permit on an evaluation of the probable impacts, including cumulative impacts, of the proposed activity on the public interest. The decision should reflect the national concern for both the protection and utilization of important resources. As part of the balancing process, the Corps may require project modifications designed to avoid and minimize impacts on natural resources. Depending on the individual and cumulative impacts of the regulated activity, these modifications can range from requiring little or no additional effort by the property owner to requiring the property owner to incur significant costs. According to the Corps, in approving permits, the agency requires permit applicants to avoid, minimize, or mitigate impacts to wetlands and waters in most cases. The Corps approves virtually all section 404 permit applications. In fiscal year 2002, for example, of 85,445 section 404 permit applications filed, the Corps denied 128 and 4,143 were withdrawn by the applicant. 
While the interpretation of Clean Water Act jurisdiction has evolved over time, the Corps’ implementation of section 404 of the act changed significantly in January 2001, when the Supreme Court in the SWANCC decision ruled that Corps guidance known as the migratory bird rule could no longer be used as a basis to assert jurisdiction over a water or wetland. Discussed in the preamble to regulations issued in 1986—but never itself promulgated as a regulation—this provision stated that jurisdictional waters include waters that “are or would be used as habitat by birds protected by migratory bird treaties,” or that “are or would be used as habitat by other migratory birds that cross state lines.” Under this provision, nearly all waters and wetlands in the United States were potentially jurisdictional. The Supreme Court held that the Clean Water Act did not authorize the Corps to require a permit for filling an isolated, intrastate, nonnavigable pond where the sole basis for the Corps' authority was that the pond had been used by migratory birds. The extent to which the reasoning in the SWANCC decision applies to waters other than those specifically at issue in that case has been the subject of considerable debate in the courts and among the public. Some groups have argued the SWANCC decision precludes the Corps from regulating virtually all isolated, intrastate, nonnavigable waters, as well as nonnavigable tributaries to navigable waters, while others have argued that it merely prohibits the regulation of isolated, intrastate, nonnavigable waters and wetlands solely on the basis of use by migratory birds. 
In the context of this decision, the Corps and EPA considered whether to modify the definition of “waters of the United States.” However, any modification of the scope of waters of the United States would have implications for other Clean Water Act programs that cover “navigable waters,” including section 303 (governing water quality standards), section 311 (governing oil and hazardous substance spills), and section 402 (regulating discharges of pollutants other than dredged and fill material). Federal Regulations That Define Jurisdictional Waters Allow for Interpretation by Individual Corps Districts and Are Currently the Subject of Debate EPA’s and the Corps’ regulations defining waters of the United States provide a framework for determining which waters are within federal jurisdiction. The regulations leave room for judgment and interpretation by the Corps districts when considering jurisdiction over, for example, (1) adjacent wetlands, (2) tributaries, and (3) ditches and other man-made conveyances. Prior to the 2001 SWANCC decision, the Corps generally did not have to be concerned with such factors as adjacency, tributaries, and other aspects of connection with an interstate or navigable water body, if the wetland or water body qualified as a jurisdictional water on the basis of its use by migratory birds. Since the SWANCC decision, the Corps and EPA have provided limited additional guidance to the districts concerning jurisdictional determinations. Specifically, the Corps told districts that they may not assert jurisdiction over any waters solely on the basis of use by migratory birds and that they should not develop new local practices for determining the extent of Clean Water Act section 404 regulatory jurisdiction or use local practices that were not in effect prior to the SWANCC decision. 
Additionally, in January 2003, the Corps and EPA published an ANPRM, soliciting public comments on, among other things, whether isolated, intrastate, nonnavigable waters are jurisdictional under the Clean Water Act, whether the regulations should define the term isolated waters, and whether any other revisions are needed to the regulations defining “waters of the United States.” According to EPA officials, respondents submitted approximately 133,000 comments with widely differing views on the need for a new regulation and the scope of Clean Water Act jurisdiction. In December 2003, the Corps and EPA decided that they would not issue a new rule on federal regulatory jurisdiction over isolated wetlands. In the almost 3 years since the SWANCC decision, 11 federal appellate court decisions interpreting the term “waters of the United States” have been issued. Project proponents in three of these cases are seeking Supreme Court review, and review has been denied for two additional cases. Regulations and Guidance Define Waters of the United States but Do Not Specify Detailed Aspects of Making a Jurisdictional Determination EPA’s and the Corps’ regulations defining waters of the United States establish the framework for determining which waters are within federal jurisdiction. In addition, the agencies have provided some limited additional national guidance to aid interpretation by the Corps districts. The regulations and national guidance leave room for judgment and interpretation by the Corps districts when considering jurisdiction over, for example, (1) adjacent wetlands, (2) tributaries, and (3) ditches and other man-made conveyances. For example, federal regulations state that wetlands adjacent to other waters of the United States, other than waters that are themselves wetlands, are to be considered waters of the United States.
The regulations specify that adjacent means “bordering, contiguous, or neighboring,” and that wetlands separated from other waters of the United States by barriers such as man-made dikes, natural river berms, and beach dunes may be considered adjacent wetlands. This definition of adjacency leaves some degree of interpretation to the Corps districts. For example, the regulations and subsequent national guidance do not fully define the circumstances under which wetlands that do not touch waters of the United States may be considered jurisdictional waters. The regulations also specify that tributaries to waters of the United States are to be considered waters of the United States. The regulations do not define “tributaries,” but state that in the absence of adjacent wetlands, lateral jurisdiction over nontidal waters extends to the ordinary high water mark. The ordinary high water mark is the line on the shore caused by fluctuations of water and can be characterized by a clear bank, shelving, debris, or changes in vegetation. The Corps further states that the ordinary high water mark should be used to identify the upstream limits of jurisdiction for tributary waters. Thus, federal jurisdiction generally extends up the banks and upstream of a tributary to the point where the ordinary high water mark is no longer discernible. Additionally, the Corps states that ephemeral tributaries—which have flowing water only at certain times of year or only after certain storm events in a typical year—are to be considered jurisdictional, provided that an ordinary high water mark is present. Tributary waters can thus range from substantial rivers and streams with definite ordinary high water marks, to channels that are usually dry, and may have very faint or ill-defined ordinary high water marks. The regulations do not further define the physical characteristics of an ordinary high water mark. 
As a result, it is possible that well-trained and competent staff might interpret the term differently. The definition refers to factors such as changes in the character of the soil, absence of terrestrial vegetation, and the presence of litter and debris; but both the interpretation and weight assigned to each of these factors are left to the official conducting the jurisdictional determination. Neither the Corps nor EPA has issued any additional clarifying national technical guidance for use by Corps staff in identifying ordinary high water marks. The regulatory definition of waters of the United States also does not specifically discuss the jurisdictional status of ditches and other man-made conveyances, and guidance issued by the Corps and EPA leaves room for interpretation. The Corps has stated that certain man-made conveyances, such as nontidal drainage and irrigation ditches excavated on dry land, are generally not considered waters of the United States. In other situations, however, the Corps may determine that man-made conveyances are waters of the United States. For example, natural streams that have been diverted into man-made channels are jurisdictional. Also, ditches that extend the ordinary high water mark of a water of the United States are jurisdictional. However, the Corps guidance provides little additional direction on when asserting jurisdiction over man-made conveyances is warranted, leaving that decision to individual Corps districts. The Corps guidance allows districts discretion when determining whether man-made channels dug on dry land are jurisdictional. Administrative Actions to Clarify Jurisdiction After SWANCC Since the SWANCC decision in January 2001, Corps and EPA headquarters have moved cautiously to address its implications. In a series of memoranda, the Corps has outlined some of the issues raised by the decision, but it has provided limited specific guidance as to how Corps districts are to respond to it.
Specifically, the Corps has taken the following three steps. In a memorandum issued 10 days after the SWANCC decision in January 2001, EPA and Corps headquarters instructed field staff that they could no longer assert jurisdiction over waters and wetlands solely on the basis of use by migratory birds. The memorandum also noted that because the SWANCC decision was limited to isolated, intrastate, nonnavigable waters, the Corps could continue asserting jurisdiction over all other waters covered by its regulations, such as adjacent wetlands and tributaries. However, the memorandum noted that the Supreme Court’s opinion raised questions about—but did not specifically address—what, if any, connections to interstate commerce could be used to assert jurisdiction over isolated, intrastate, nonnavigable waters. Consequently, the memorandum instructed Corps districts to consult agency legal counsel when such cases arose. In May 2001, the Corps issued another memorandum that prohibited the districts from developing local practices for asserting jurisdiction and from using any practices not in effect before the SWANCC decision. The memorandum said that a prohibition on new practices was necessary to minimize any inconsistencies among the districts. In January 2003, the Corps and EPA issued an ANPRM seeking public comment on issues associated with the definition of “waters of the United States” and soliciting information from the general public, the scientific community, and federal and state resource agencies on the implications of SWANCC for jurisdictional decisions under the Clean Water Act. Attached to the notice was a joint memorandum between EPA and the Corps designed to provide clarifying guidance regarding SWANCC and to address several legal issues that had arisen since the SWANCC decision concerning jurisdiction under various factual scenarios.
For example, the joint memorandum stated that isolated, intrastate waters that are capable of supporting navigation by watercraft remain subject to Clean Water Act jurisdiction. The guidance called for field staff to continue to assert jurisdiction over traditional navigable waters, their tributaries, and adjacent wetlands. The joint memorandum directed field staff to make jurisdictional determinations on a case-by-case basis, considering the guidance in the memorandum as well as applicable regulations and any relevant court decisions in addition to those discussed in the memorandum. The joint memorandum also reiterated that field staff were no longer to assert jurisdiction over an isolated, intrastate, nonnavigable water on the basis of the factors listed in the migratory bird rule. It also noted that, in light of the SWANCC decision, it is uncertain whether there remains any basis for jurisdiction over any isolated, intrastate, nonnavigable waters. In view of these uncertainties, the joint memorandum stated that field staff should seek formal headquarters approval before asserting jurisdiction over such waters. The ANPRM generated significant interest, as evidenced by the approximately 133,000 comments submitted by state agencies, national development organizations, environmental groups, and other parties. According to EPA, 99 percent of the comments on the need for a new rule submitted to EPA and the Corps in response to the ANPRM were opposed to a new rule. Some groups, such as industry representatives, generally indicated that they favor a rulemaking because they believe the SWANCC decision created, among other things, a great deal of uncertainty, resulting in unequal treatment and significant financial burden to the regulated community.
These groups further stated that the current breadth of federal jurisdiction is too great and that, under the principles of federalism, state and local governments are the appropriate regulators of nonnavigable waters within their borders. In contrast, other groups, such as environmentalists, indicated a general opposition to any rulemaking effort, expressing concerns that a new rule would result in reduced federal jurisdiction under section 404 and other programs under the Clean Water Act. Furthermore, these groups argued that it is unlikely that other federal and state programs provide the oversight or require the mitigation that would be sufficient to protect wetlands and other waters no longer covered under the section 404 program. An EPA official stated that 41 of the 43 states that submitted comments were concerned about any major reduction in Clean Water Act jurisdiction. This official also said that most states are concerned that political, legal, and budgetary constraints complicate efforts to regulate certain types of waters and wetlands at the state level. In December 2003, EPA and the Corps announced that they would not issue a new rule on federal regulatory jurisdiction over isolated wetlands. Along with the ANPRM, attempts have been made to coordinate Corps and EPA efforts to address the implications of the SWANCC decision. In October 2003, the Corps agreed to an EPA request to collect data measuring the extent to which the Supreme Court’s SWANCC ruling prompted Corps district offices to avoid the regulation of wetlands and other waters. Specifically, the Corps agreed to have district offices report quarterly to EPA any negative jurisdictional determinations for 1 year—that is, any decision not to regulate waters or wetlands—based on issues raised by the SWANCC decision, along with the districts’ basis and reasoning for making these determinations.
EPA has also requested that Corps district offices coordinate with the agency before declining jurisdiction over waters or wetlands based upon issues raised by the SWANCC decision. However, the Corps has declined EPA’s request, stating that it is “most prudent to continue the present policy regarding interagency coordination.”

Clean Water Act Jurisdiction Has Been Litigated in Several Appellate Courts Since SWANCC

Since January 2001, 11 federal appellate court cases have considered the scope of the term “waters of the United States” in situations other than those involving the migratory bird rule. Table 1 summarizes these cases. In three cases, the affected project proponents are seeking Supreme Court review, while the Supreme Court denied review in two others.

Corps District Offices Use Differing Practices to Make Jurisdictional Determinations

There are several differences in the practices Corps districts use to make jurisdictional determinations. Specifically, districts sometimes differ when (1) identifying jurisdictional wetlands adjacent to waters of the United States; (2) identifying jurisdictional limits of tributaries; and (3) regulating wetlands connected to waters of the United States by man-made conveyances, such as ditches. Corps headquarters officials said that there are enough differences in district office practices that a comprehensive survey of them is warranted.

District Offices Use Different Factors to Identify Adjacent Wetlands

All Corps districts that we reviewed regulated wetlands that are contiguous with—directly touching—other waters of the United States. However, when making jurisdictional determinations for wetlands not touching waters of the United States, districts consider several factors, including hydrologic connections between wetlands and other waters of the United States, the proximity of wetlands to other waters of the United States, and the number of barriers separating wetlands from other waters of the United States.
Districts differed in the way they considered and weighed these various factors.

Hydrologic Connections

Districts use different approaches to determine whether there is a sufficient hydrologic connection between a wetland and a water of the United States to consider the wetland jurisdictional. Factors that some districts consider but others do not include the likelihood that a water of the United States will flood into a wetland in any given year and whether the wetland is connected to a water of the United States through periodic sheet flow. We found differences in how districts apply these considerations. For example, districts differed in their use of floodplains to make jurisdictional determinations. Some districts often use the 100-year floodplain to determine if wetlands are adjacent to waters of the United States. For example, written guidance from the Galveston District states that the district generally regulates wetlands located in the 100-year floodplain because this type of flooding is sufficient evidence of a hydrological connection between a wetland and a water of the United States. Alternatively, officials from other districts, such as Jacksonville and Philadelphia, stated that they may consider the 100-year floodplain as one of many factors when making jurisdictional determinations for adjacent wetlands, but they do not consider it sufficient evidence on its own. Still other districts, such as Chicago and Rock Island, do not consider the 100-year floodplain at all when making jurisdictional determinations. Rock Island District officials said that they do not use the 100-year floodplain because headquarters never suggested it as a possible criterion.
Moreover, these officials were concerned that, in parts of the Rock Island District, this practice would be very inclusive because the 100-year floodplain can extend several miles inland from the banks of the Mississippi River. Additionally, districts varied in their use of sheet flow—that is, overland flow of water outside of a defined channel—for making jurisdictional determinations. In certain circumstances, some districts, such as San Francisco, Sacramento, and Los Angeles, used sheet flow between a wetland and a water of the United States as a basis for regulating the wetland. For example, San Francisco District officials said they would assert jurisdiction over a series of vernal pools—intermittently flooded areas—that are hydrologically connected to each other and to a water of the United States through directional sheet flow during storm events. These officials said that this kind of sheet flow is common in the San Francisco District because the clay soils do not allow for rapid rates of infiltration, and the water flows more easily across the surface. In contrast, neither the New Orleans District nor the Galveston District considers sheet flow between a wetland and a water of the United States when making jurisdictional determinations. Officials from the Galveston District said they do not consider sheet flow when asserting jurisdiction because they believe sheet flow is not well defined and, in its broadest interpretation, could cover nearly all waters in their district.

Proximity

Districts also vary in their use of proximity as a factor in making jurisdictional determinations. Some districts set a specific distance from a water of the United States within which a wetland must lie to be jurisdictional.
For example, officials from the Jacksonville District said that they regulate almost all wetlands located within 200 feet of other waters of the United States, and they generally do not assert jurisdiction beyond that distance. According to these officials, the district set this distance because it needed an approximate distance for enforcement purposes, and it gradually became a rule of thumb. Philadelphia District officials said they generally consider a different specific distance to determine whether wetlands are jurisdictional. These officials said they generally do not consider a wetland adjacent if it is located more than 500 feet away from a water of the United States, although not all wetlands located within 500 feet of waters of the United States are regulated. Other districts, such as Portland and Sacramento, have not established specific distances between a wetland and a water of the United States that would make the wetland jurisdictional or nonjurisdictional. However, these districts do include proximity as an important consideration when making jurisdictional determinations. For example, Sacramento District officials said that a wetland that is 50 feet away from a water of the United States is more likely to be considered adjacent than a wetland that is 1,000 feet away. These officials explained that the farther a wetland is from a water of the United States, the greater the emphasis placed on other factors, such as the wetland’s location in the 100-year floodplain. Similarly, Portland District officials asserted that it is important to consider different relationships—hydrological, ecological, and others—between a wetland and a water of the United States, along with the distance between the two, to provide the most meaningful basis for a jurisdictional determination.
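The divergent proximity screens described above can be sketched as a toy decision rule. This sketch is purely illustrative: the thresholds reflect the rules of thumb officials described (roughly 200 feet for Jacksonville and 500 feet for Philadelphia), the function name is invented, and actual determinations weigh many additional factors rather than distance alone.

```python
# Illustrative only: simplified proximity screens based on the rule-of-thumb
# distances described by district officials. Real jurisdictional
# determinations consider many factors beyond distance.

JACKSONVILLE_LIMIT_FT = 200   # Jacksonville's approximate enforcement distance
PHILADELPHIA_LIMIT_FT = 500   # beyond this, Philadelphia generally finds no adjacency


def likely_adjacent(distance_ft: float, limit_ft: float) -> bool:
    """Crude screen: is the wetland within the district's distance limit?"""
    return distance_ft <= limit_ft


# The same wetland, 350 feet from a water of the United States, fails one
# district's screen and passes the other's.
wetland_distance_ft = 350
print(likely_adjacent(wetland_distance_ft, JACKSONVILLE_LIMIT_FT))  # False
print(likely_adjacent(wetland_distance_ft, PHILADELPHIA_LIMIT_FT))  # True
```

The point of the sketch is simply that a fixed-distance rule makes the outcome turn on which district's threshold applies, which is one source of the inconsistency discussed in this report.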
Man-Made and Natural Barriers

According to federal regulations, a jurisdictional wetland may be separated from a water of the United States by man-made or natural barriers, such as dikes and dunes. The regulations do not specify the number of barriers necessary to break a jurisdictional connection, and district officials that we contacted applied different practices. Officials at several districts, such as Buffalo, Chicago, and Galveston, assert jurisdiction over wetlands separated from other waters of the United States by no more than one barrier. In contrast, officials from other districts said they assert jurisdiction over wetlands separated from other waters of the United States by more than one barrier. For example, officials from the Rock Island and Omaha districts said they would regulate wetlands separated from other waters of the United States by as many as two barriers. Also, officials from the Jacksonville District said they would generally regulate all wetlands within 200 feet of other waters of the United States, regardless of the number of barriers separating the waters from the wetlands. Officials from the Baltimore District said they have not established a maximum number of barriers that may separate a water of the United States from a jurisdictional wetland because the regulations leave room for interpretation.

Districts Generally Use a Common Approach to Identify the Jurisdictional Limits of Tributaries but May Apply It Differently

The Corps districts that we contacted generally used a similar approach to identify jurisdictional tributaries. However, beneath this similarity, we found that districts in different regions of the United States—and even individual Corps staff—could differ significantly in how they applied this approach when delineating tributary waters. The districts that we contacted rarely used a quantitative standard of the volume or frequency of flow for assessing jurisdiction.
Instead, most of them used the concept of an ordinary high water mark to identify both the outer and the upstream limits of a tributary. Corps staff said that they generally assert jurisdiction as long as they can identify the characteristics of an ordinary high water mark, regardless of the volume or frequency of flow in the channel. In some arid regions, this means that channels that might have little water flow in a given year, and at times may be completely dry, could be jurisdictional as long as the characteristics of an ordinary high water mark were visible to the Corps staff. Districts would also assert jurisdiction over a tributary in the absence of an ordinary high water mark if there were evidence that construction or other activities had obliterated its signature. For example, officials from the Chicago District said that because their district was heavily urbanized, many channels had been manipulated and contained, often in ways that obscured the ordinary high water mark. Districts in arid regions identified unique difficulties they face when identifying the limits of an ordinary high water mark. For example, in the arid West, the intermittency of the water flow and the occasional massive flood surges that affect many rivers and streams can make identifying the ordinary high water mark a difficult exercise. According to Corps district officials, large periodic floods in the arid West create complex tributary basins that feature a network of channels, many of which are remnants of a time when the water flowed along a different course and which rarely, if ever, experience water flow. Corps officials said that identifying the ordinary high water mark in such basins can be very difficult because there may be physical evidence of water flow that is little more than a historic artifact.
Additionally, large flood surges can wash away normal banks, debris, vegetation, and other evidence of the ordinary high water mark, making it more difficult for Corps staff to identify the outer limits of the tributary. Because of the difficulties in identifying the ordinary high water mark in some arid regions, the Corps has determined that there can be considerable variation among Corps staff in identifying the outer limits of the ordinary high water mark in these regions, resulting in significant differences in their assessments of the width of tributary channels. To address these difficulties, the Corps and EPA have taken several actions to help ensure greater consistency in jurisdictional determinations. For example, the Corps’ South Pacific Division—which includes district offices encompassing a large portion of the arid West—has issued a jurisdictional determination tool that staff can use to identify the limits of tributaries in the region. It specifically guides the user to identify the water features present—including water features indigenous to the arid West, such as arroyos, coulees, and washes—and includes implicit practices for assessing the jurisdictional status of a water feature in that region. In addition, the Corps and EPA are developing a manual to guide field staff in identifying the ordinary high water mark in arid regions. Moreover, the difficulty and ambiguity associated with identifying the ordinary high water mark can affect jurisdictional determinations beyond arid regions. For example, an official of the Portland District said that the definition of the ordinary high water mark is among the most ambiguous terms in the regulatory definition of waters of the United States and that the lateral limits of the ordinary high water mark can be difficult to identify, even for major bodies of water such as the Columbia River.
The official said that if he asked three different district staff to make a jurisdictional determination, he would probably get three different assessments of the ordinary high water mark from them. Similarly, an official from the Philadelphia District stated that identifying the upper reaches of an ordinary high water mark is one of the most difficult challenges the district staff face. The official explained that, as one progresses upstream, the depth of the bed and bank diminishes and the key indicators of an ordinary high water mark gradually disappear; identifying precisely where the ordinary high water mark ends is thus very much a judgment call.

Districts Vary in Treatment of Ditches and Other Man-Made Conveyances

All of the district office officials that we contacted consider and use links created by man-made conveyances to assert jurisdiction over wetlands. However, the district officials described different circumstances under which they consider a man-made conveyance sufficient to establish jurisdiction for a wetland that is connected by the conveyance to a water of the United States. The officials also differed with regard to the circumstances under which they consider the conveyance itself to be jurisdictional and with regard to their treatment of subsurface closed conveyances, such as pipes and drain tiles. According to Corps headquarters officials, man-made conveyances are the most difficult and complex jurisdictional issue faced by Corps districts.

Ditches and Other Man-Made Surface Conveyances

Officials in all the districts we contacted said they consider and use connections made by man-made surface conveyances—such as ditches—when assessing the jurisdictional status of a wetland (see figure 2). If, for example, a ditch carries water between a wetland and a water of the United States, then the wetland could be considered jurisdictional. However, districts differed in their practices to test the sufficiency of such a connection.
For example, some districts, such as the St. Paul, Rock Island, and Wilmington districts, were fairly inclusive and said that they would find a wetland jurisdictional if water flowed in a man-made surface conveyance between the wetland and a water of the United States. Other districts consider hydrologic connections through a man-made surface conveyance under more limited circumstances. For example, officials from the Portland and Philadelphia districts said that a ditch would also need to have an ordinary high water mark or display wetland characteristics in order to establish jurisdictional status for a wetland. Officials of the Omaha and Fort Worth districts consider different factors when using man-made surface conveyances to assert jurisdiction over a wetland. Omaha District officials require, in addition to water being present at least once per year, that the water flow from the wetland through the ditch and into a water of the United States. If the flow of water went from the water of the United States through the ditch and into the wetland, they would not consider the wetland to be jurisdictional. Omaha District officials told us that officials from Corps headquarters had endorsed this view. Officials of the Fort Worth District said that a ditch would establish a tributary connection for a wetland only if the ditch was a modification of or replacement for a natural stream. Districts also differed regarding the circumstances under which they consider a ditch itself to be jurisdictional. For example, officials from the Omaha and Fort Worth districts said they assert jurisdiction over a ditch whenever it creates a jurisdictional connection between a wetland and a water of the United States. In contrast, officials from other districts—such as Sacramento, Rock Island, and Galveston—said that they might assert jurisdiction over a wetland without regulating the ditch connecting it to a water of the United States.
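A minimal sketch can likewise contrast two of the ditch-connection tests described above: the more inclusive screen, which asks only whether water flows in the conveyance, and an Omaha-style screen, which also requires that the flow run from the wetland toward the water of the United States. The function names are hypothetical, and real determinations involve additional considerations such as flow frequency, ordinary high water marks, and wetland characteristics.

```python
# Illustrative only: hypothetical simplifications of the two ditch-connection
# screens described above; not agency code.

def inclusive_test(water_flows: bool) -> bool:
    """More inclusive screen (e.g., St. Paul, Rock Island style):
    any water flow in the ditch supports a jurisdictional connection."""
    return water_flows


def omaha_style_test(water_flows: bool, flows_from_wetland: bool) -> bool:
    """Omaha-style screen: water must flow at least once per year AND
    run from the wetland, through the ditch, into the water of the U.S."""
    return water_flows and flows_from_wetland


# A ditch that carries floodwater from the river INTO the wetland:
print(inclusive_test(True))           # True  -> connection counts
print(omaha_style_test(True, False))  # False -> connection does not count
```

As with the proximity sketch, the same physical ditch supports a jurisdictional connection under one district's test and not under another's, illustrating how the differing practices can diverge on identical facts.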
In these districts, the jurisdictional status of the ditch depends on several factors, including whether the ditch displays an ordinary high water mark, exhibits the three parameters of a wetland, or replaces a historic stream. Officials at the Galveston District said that a result of this policy is that a nonjurisdictional ditch can be filled without a section 404 permit, severing the jurisdictional connection of the wetland to the water of the United States. After the connection is severed, the previously jurisdictional wetland is rendered nonjurisdictional and can be filled without a section 404 permit.

Man-Made Subsurface Conveyances

Officials in all the districts that we visited confirmed using man-made subsurface conveyances (such as drain tiles, storm drain systems, and culverts) that connect a wetland to a water of the United States as sufficient evidence to assert jurisdiction over the wetland. Nevertheless, we identified variations relating to the type of closed man-made conveyance considered sufficient to make a jurisdictional connection. Chicago District officials said they use drain tiles to establish a jurisdictional connection between a wetland and a water of the United States, but only when evidence supports a finding that the tiles replaced a historic tributary. The Corps’ justification is that a natural stream that is confined to a pipe, or replaced by a series of pipes in essentially the same location, still functions as a connection between upstream and downstream waters and remains a part of the surface tributary system. In contrast, officials from the Rock Island District do not consider drain tiles to establish jurisdictional connections between wetlands and waters of the United States.
Rock Island District staff said they asked Corps headquarters about the use of drain tiles to establish jurisdictional connections after the SWANCC decision and were instructed not to use drain tiles, even in situations where Corps staff could determine that water was draining from the wetland through the drain tile and into a water of the United States. Also, officials from the St. Paul District said that they do not use drain tiles to establish jurisdictional connections to wetlands, and Philadelphia District officials said they likely would not do so. Districts also varied in their use of storm drain systems to establish jurisdictional connections for wetlands and other waters. For example, officials from the Portland District said they considered storm drain systems as jurisdictional connections, depending on the historical situation. If a storm drain system conveyed the flow of a historic stream, then Portland District officials would consider the connection jurisdictional; however, in other situations, they would not. Officials from the St. Paul District said they had used storm drain systems to support jurisdictional connections among waters that had not been historically connected. St. Paul District officials explained that several lakes in the Minneapolis-St. Paul area had been connected to one another through underground storm water pipes to control flooding and that the system eventually empties into a water of the United States. These same officials said that this storm drain system is a jurisdictional connection because it is part of a tributary system, reasoning that if a pollutant enters the system, it would eventually flow into a water of the United States.

Corps Headquarters Officials Recognize That There Are Differences among Corps District Offices

We discussed the differences that we observed among district offices’ practices for making jurisdictional determinations with Corps headquarters officials.
The officials explained that there are two primary reasons for the differences among Corps district offices. First, a variety of waterways and wetlands across the country are continuously shaped by local climate, topographic features, geological and soil characteristics, fauna and flora, as well as other environmental factors. As a result, in their opinion, the definitions used to make jurisdictional determinations had to be vague. This vagueness has led to the development of local district practices and guidance concerning jurisdictional determinations. Second, because nearly all waters were jurisdictional under the migratory bird rule, questions regarding the imprecise definition of adjacent wetlands and isolated waters were previously moot. When the Supreme Court struck down the migratory bird rule in 2001, districts had to rely on the key terms in the regulatory definition of waters of the United States, which had not been well defined. This led to some confusion in the districts, and Corps headquarters subsequently instructed the districts to use locally developed practices, regardless of their clarity. As a result of these two factors, Corps headquarters officials told us that the existence of differences in jurisdictional determination practices among Corps districts is not surprising. Corps headquarters officials also noted that, given the complexity of nature and the need for some degree of flexibility within and among districts, it is not possible to achieve absolute nationwide consistency in making jurisdictional determinations. Nevertheless, these officials stated that we documented enough differences in the district office practices to warrant a more comprehensive survey, which would include the Corps districts not surveyed in this report. This type of additional review and analysis would help ensure that the Corps is achieving the highest level of consistency possible under the current circumstances. 
Few Districts Make Documentation of Their Practices Public

Few Corps districts that we reviewed made documentation of their practices for making jurisdictional determinations available to the public. Many of the 16 districts that we contacted relied on oral communication to convey their practices to interested parties, and only 3 had developed documentation of their practices that they made available to the public. These three districts—Jacksonville, Portland, and Galveston—stated that their written materials documented practices that predated the 2001 SWANCC decision. The Jacksonville District developed a comprehensive document in July 2003 describing its practices for asserting jurisdiction over adjacent wetlands, tributary streams, man-made conveyances, and isolated waters and posted this guidance to its Web site. The Portland District also posted descriptions of district practices to its Web site, but its documentation addressed issues such as the regulation of storm water ponds and culvert maintenance activities. Finally, the Galveston District’s documentation, which addresses identifying wetlands adjacent to waters of the United States, is available upon request but is not posted on its Web site. The other 13 districts that we reviewed have not made documentation of their practices publicly available. When asked about the written materials available to the public, Corps district officials sometimes referred to the Code of Federal Regulations and the Corps’ 1987 Wetlands Delineation Manual as publicly available sources of information. In lieu of documentation, some districts communicate their practices to the public informally, by talking with land planning consultants, who help property owners navigate the section 404 program, at workshops, in the office, and in the field.
For example, the Baltimore District regularly makes its wetland delineations with land planning consultants present, explaining that this allows the consultants to better understand the district’s practices.

Conclusions

After the Supreme Court’s 2001 SWANCC decision that struck down the migratory bird rule, Corps districts have needed to rely on criteria other than use of the water as habitat for migratory birds to assert jurisdiction over certain waters and wetlands. In doing so, the Corps has based its determinations on criteria within the regulatory definition of “waters of the United States,” including determining whether a wetland or water body is adjacent to or a tributary of a navigable or interstate water or whether the water has a connection with interstate commerce. In making these determinations, the Corps districts and staff have used different practices and have applied different factors. Some flexibility and variation in district practices may well be appropriate to address differences in climatic, hydrologic, or other factors. However, it is unclear whether or to what degree these differences in Corps district office practices would result in different jurisdictional determinations in similar situations, in part, because Corps staff consider many factors when making these determinations. Also, because few Corps districts make documentation of their practices for making jurisdictional determinations available to the public, project proponents may not have clarity as to their responsibilities under section 404 of the Clean Water Act.
Recommendations for Executive Action

In light of the uncertainty of the impact of differences in district offices’ interpretation and application of the regulations, we recommend that the Secretary of the Army, in consultation with the Administrator of EPA: survey the district offices to determine how they are interpreting and applying the regulations and whether significant differences exist among the Corps’ 38 districts; evaluate whether and how the differences in the interpretation and application of the regulations among the Corps district offices need to be resolved, recognizing that some level of flexibility may be needed because of differing climatic, hydrologic, and other relevant circumstances among the districts; and require districts to prepare and make publicly available documentation specifying the interpretation and application of the regulations they use to determine whether a water or wetland is jurisdictional.

Agency Comments and Our Evaluation

We provided a draft of this report to the Secretary of Defense and the Administrator of EPA for review and comment. Both the Department of Defense and EPA concurred with the report’s findings and recommendations. The Department of Defense said that, on the basis of our recommendations, it will (1) conduct a more comprehensive survey to further assess Corps district office practices in determining jurisdiction; (2) develop a strategic approach to ensure the Corps is achieving the highest level of consistency and predictability possible for making jurisdictional determinations; and (3) ask the Corps districts and divisions to prepare documentation describing specific local practices used in making jurisdictional determinations and make this information available to the public.
EPA agreed that a more complete survey of approaches to geographic jurisdictional determinations would be helpful and that it is important to document jurisdictional determinations and ensure such information is publicly available. Both the Department of Defense and EPA also provided several technical changes that we have incorporated into this report, as appropriate. The full text of the Department of Defense’s response is included in appendix III, and EPA’s response is included in appendix IV. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to interested congressional committees and Members; the Secretary of Defense; the Administrator, EPA; and the Chief of Engineers and Commander, U.S. Army Corps of Engineers. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix V. Scope and Methodology To identify the national criteria for making jurisdictional determinations, and administrative and judicial developments affecting this process since Solid Waste Agency of Northern Cook County v. U.S. Army Corps of Engineers (SWANCC), we reviewed federal regulations and related guidance that define “waters of the United States.” We also interviewed officials of both the Army Corps of Engineers (the Corps) and the Environmental Protection Agency (EPA) headquarters in Washington, D.C. Further, we reviewed the Supreme Court’s SWANCC decision, as well as various subsequent and related lower court decisions.
In addition, we analyzed administrative guidance issued by the Corps and EPA, as well as the Advance Notice of Proposed Rulemaking (ANPRM) issued by the Corps and EPA in January 2003. Finally, we reviewed several major public comments addressing the ANPRM and discussed the full range of comments submitted by the public with EPA officials. To determine the extent to which Corps district offices vary in their interpretation and application of the regulations and guidance, we interviewed Corps headquarters officials, as well as national environmental groups and representatives of industry and real estate development organizations. We then selected 16 of the Corps’ 38 district offices for an in-depth examination of their jurisdictional determination practices. We selected these districts to obtain geographic representation across the United States as well as climatic, geologic, and topographic diversity, contacting at least one district in each of the Corps’ seven Divisions located in the contiguous United States. Specifically, we contacted the Baltimore, Buffalo, Chicago, Fort Worth, Galveston, Jacksonville, Los Angeles, New Orleans, Omaha, Philadelphia, Portland, Rock Island, Sacramento, St. Paul, San Francisco, and Wilmington Corps district offices (see fig. 1). For each district office, we conducted a series of preliminary interviews, including interviews with officials representing the Corps Divisional Office responsible for the district office, a state wetland protection agency with jurisdiction overlapping that of the district office, a corresponding EPA regional office, and at least one firm representing the perspective of section 404 permit applicants. The primary purpose of these interviews was to obtain preliminary information on the Corps district’s jurisdictional determination practices and, in particular, information on any significant differences among the districts.
Following these discussions, we interviewed officials from 16 Corps district offices, using detailed questionnaires. In these interviews, we discussed a wide range of topics pertaining to jurisdictional determinations, including the practices used by districts to determine whether to assert jurisdiction over adjacent wetlands, tributary waters, man-made conveyances, and isolated, intrastate waters. We also discussed other issues related to jurisdictional determinations, such as the overall impact of the SWANCC decision on districts’ jurisdictional practices, and particular difficulties the districts face in conducting jurisdictional determinations. At the 11 district offices that we visited, we supplemented office discussions with field visits to sites of recent jurisdictional determinations, as well as sites that typified difficult jurisdictional issues. During these site visits, we observed and discussed hydrologic linkages between wetlands and waters of the United States, the difficulty in identifying the outer extent of tributaries in both arid and wet regions, and the role of ditches and other man-made conveyances in establishing jurisdictional connections for wetlands. We did not attempt to determine whether individual differences in district practices resulted in different jurisdictional determinations in similar situations, in part, because Corps staff consider many factors when making these determinations. Also, we did not attempt to compare districts’ practices before and after the SWANCC decision. To determine the extent to which the Corps districts document and make their practices for conducting jurisdictional determinations available to the public, we interviewed Corps officials in each of the 16 district offices we contacted. When available, we obtained and reviewed districts’ written guidance. We also reviewed district offices’ Web sites to determine whether they made information about their practices readily available to the public.
Additionally, we discussed other means of keeping the public informed of district practices and the methods districts used to maintain some degree of consistency among different jurisdictional determinations. We conducted our work between April 2003 and January 2004 in accordance with generally accepted government auditing standards. Because we reviewed 16 of the Corps’ 38 districts, our findings may not apply to those districts we did not review.

Text of 33 C.F.R. § 328.3

For the purpose of this regulation these terms are defined as follows:

(a) The term waters of the United States means:

(1) All waters which are currently used, or were used in the past, or may be susceptible to use in interstate or foreign commerce, including all waters which are subject to the ebb or flow of the tide;

(2) All interstate waters including interstate wetlands;

(3) All other waters such as intrastate lakes, rivers, streams (including intermittent streams), mudflats, sandflats, wetlands, sloughs, prairie potholes, wet meadows, playa lakes, or natural ponds, the use, degradation or destruction of which could affect interstate or foreign commerce including any such waters:

(i) Which are or could be used by interstate or foreign travelers for recreational or other purposes; or

(ii) From which fish or shellfish are or could be taken and sold in interstate or foreign commerce; or

(iii) Which are used or could be used for industrial purpose by industries in interstate commerce;

(4) All impoundments of waters otherwise defined as waters of the United States under the definition;

(5) Tributaries of waters identified in paragraphs (a)(1) - (4) of this section;

(6) The territorial seas;

(7) Wetlands adjacent to waters (other than waters that are themselves wetlands) identified in paragraphs (a)(1) - (6) of this section.
Waste treatment systems including treatment ponds or lagoons designed to meet the requirements of the Clean Water Act (other than cooling ponds as defined in 40 CFR 123.11(m) which also meet the criteria of this definition) are not waters of the United States.

(8) Waters of the United States do not include prior converted cropland. Notwithstanding the determination of an area’s status as prior converted cropland by any other federal agency, for the purposes of the Clean Water Act, the final authority regarding Clean Water Act jurisdiction remains with EPA.

(b) The term wetlands means those areas that are inundated or saturated by surface or ground water at a frequency and duration sufficient to support, and that under normal circumstances do support, a prevalence of vegetation typically adapted for life in saturated soil conditions. Wetlands generally include swamps, marshes, bogs, and similar areas.

(c) The term adjacent means bordering, contiguous, or neighboring. Wetlands separated from other waters of the United States by man-made dikes or barriers, natural river berms, beach dunes, and the like are “adjacent wetlands.”

(d) The term high tide line means the line of intersection of the land with the water’s surface at the maximum height reached by a rising tide. The high tide line may be determined, in the absence of actual data, by a line of oil or scum along shore objects, a more or less continuous deposit of fine shell or debris on the foreshore or berm, other physical markings or characteristics, vegetation lines, tidal gages, or other suitable means that delineate the general height reached by a rising tide. The line encompasses spring high tides and other high tides that occur with periodic frequency but does not include storm surges in which there is a departure from the normal or predicted reach of the tide due to the piling up of water against a coast by strong winds such as those accompanying a hurricane or other intense storm.
(e) The term ordinary high water mark means that line on the shore established by the fluctuation of water and indicated by physical characteristics such as clear, natural line impressed on the bank, shelving, changes in the character of soil, destruction of terrestrial vegetation, the presence of litter and debris, or other appropriate means that consider the characteristics of the surrounding areas.

(f) The term tidal waters means those waters that rise and fall in a predictable and measurable rhythm or cycle due to the gravitational pulls of the moon and sun. Tidal waters end where the rise and fall of the water surface can no longer be practically measured in a predictable rhythm due to masking by hydrologic, wind, or other effects.

Comments from the Department of the Army

Comments from the Environmental Protection Agency

GAO Contacts and Staff Acknowledgments In addition, Charles Barchok, Doreen Feldman, Glenn Fischer, Michael Hartnett, Richard Johnson, Kate Kousser, Stephanie Luehr, Jonathan McMurray, and Adam Shapiro made key contributions to this report.
Each year the U.S. Army Corps of Engineers (Corps) receives thousands of Clean Water Act permit applications from project proponents wishing to fill waters and wetlands. The first step in the permitting process is to determine if the waters and wetlands are jurisdictional. Prior to 2001, if migratory birds used the waters or wetlands as habitat, they were usually jurisdictional. In 2001, the Supreme Court--in Solid Waste Agency of Northern Cook County v. U.S. Army Corps of Engineers (SWANCC)--struck down the migratory bird rule, leaving the Corps to rely on other jurisdictional criteria. GAO was asked to describe the (1) regulations and guidance used to determine jurisdictional waters and wetlands and related developments since SWANCC, (2) extent to which Corps district offices vary in their interpretation of these regulations and guidance, and (3) extent to which Corps district offices document their practices and make this information publicly available. EPA's and the Corps' regulations defining waters of the United States establish the framework for determining which waters fall within federal jurisdiction. However, the regulations leave room for interpretation by Corps districts when considering (1) adjacent wetlands, (2) tributaries, and (3) ditches and other man-made conveyances. Since the SWANCC decision, the Corps and EPA have provided limited additional guidance to the districts concerning jurisdictional determinations, and the Corps has prohibited the districts from developing new local practices for determining the extent of Clean Water Act regulatory jurisdiction. In January 2003, the Corps and EPA published an Advance Notice of Proposed Rulemaking (ANPRM) soliciting comments on whether there was a need to revise the regulations that define which waters should be subject to federal jurisdiction. The ANPRM generated approximately 133,000 comments representing widely differing views. 
The agencies decided in December 2003 that they would not proceed with a rulemaking. Additionally, since SWANCC, 11 federal appellate court decisions relating to the extent of jurisdictional waters have been rendered; 3 of these decisions have been appealed to the Supreme Court, and the Court has denied review in 2 others. Corps districts differ in how they interpret and apply the federal regulations when determining which waters and wetlands are subject to federal jurisdiction. For example, one district generally regulates wetlands located within 200 feet of other jurisdictional waters, while other districts consider the proximity of wetlands to other jurisdictional waters without any reference to a specific linear distance. Additionally, some districts assert jurisdiction over all wetlands located in the 100-year floodplain, while others do not consider floodplains as a factor. Although districts used generally similar criteria to identify the jurisdictional limits of tributaries, they differed in how they applied these criteria. Whether or to what degree individual differences in Corps district office practices would result in different jurisdictional determinations in similar situations is unclear, in part, because Corps staff consider many factors when making these determinations. Nevertheless, Corps headquarters officials stated that GAO had documented enough differences in district office practices to warrant a more comprehensive survey, which would include the other districts not surveyed in this report. This would help to ensure that the Corps is achieving the highest level of consistency possible under the current circumstances. Only 3 of the 16 districts that GAO reviewed made documentation of their practices available to the public. Other districts generally relied on oral communication to convey their practices to interested parties.
Background The nation’s 17,000 nursing homes play an essential role in our health care system, providing services to 1.6 million elderly and disabled persons who are temporarily or permanently unable to care for themselves but who do not require the level of care furnished in an acute care hospital. Depending on the identified needs of each resident, as determined through MDS assessments, nursing homes provide a variety of services, including nursing and custodial care, physical, occupational, and speech therapy, and medical social services. The majority of nursing home residents have their care paid for by Medicaid, a joint federal-state program for certain low-income individuals. Almost all nursing homes serve Medicaid residents, while more than 14,000 nursing homes are also Medicare-certified. Medicare, the federal health care program for elderly and disabled Americans, pays for posthospital nursing home stays if a beneficiary needs skilled nursing or rehabilitative services. Medicare-covered skilled nursing home days account for approximately 9 percent of total nursing home days. Medicare beneficiaries tend to have shorter nursing home stays and receive more rehabilitation services than individuals covered by Medicaid. MDS Used to Assess Nursing Home Residents Since 1991, nursing homes have been required to develop a plan of care for each resident based on the periodic collection of MDS data. The MDS contains individual assessment items covering 17 areas, such as mood and behavior, physical functioning, and skin conditions. MDS assessments of each resident are conducted in the first 14 days after admission and are used to develop a care plan. A range of professionals, including nurses, attending physicians, social workers, activities professionals, and occupational, speech, and physical therapists, complete designated parts of the MDS. Assessing a resident’s condition in certain areas requires observation, often over a period of days.
For example, nursing staff must assess the degree of resident assistance needed during the previous 7 days—none, supervised, limited, extensive, or total dependence—to carry out the activities of daily living (ADL), such as using a toilet, eating, or dressing. To obtain this information, staff completing the MDS assessments are required to communicate with direct care staff, such as nursing assistants or activities aides, who have worked with the resident over different time periods. These staff have first-hand knowledge of the resident and will often be the primary and most reliable source of information regarding resident performance of different activities. While a registered nurse is required to verify that the MDS assessment is complete, each professional staff member who contributed to the assessment must sign and attest to the accuracy of his or her portion of the assessment. MDS Used in Quality Oversight and as Basis for Payments MDS data are also submitted by nursing homes to states and CMS for use in the nursing home survey process and to serve as the basis for adjusting payments. CMS contracts with states to periodically survey nursing homes to review the quality of care and assure that the services delivered meet the residents’ assessed needs. In fiscal year 2001, the federal government spent about $278 million on the nursing home survey process. Effective July 1999, the agency instructed states to begin using quality indicators derived from MDS data to review the care provided to a nursing home’s residents before state surveyors actually visit the home to conduct a survey. Quality indicators are essentially numeric warning signs of potential care problems, such as greater-than-expected instances of weight loss, dehydration, or pressure sores among a nursing home’s residents. They are used to rank a facility in 24 areas compared with other nursing homes in a state. 
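Quality indicators of this kind amount to a facility-level rate compared against peer facilities statewide. The following is a minimal sketch of that idea; the field name, the toy data, and the percentile-rank approach are illustrative assumptions, not the actual CMS quality-indicator specification.

```python
# Illustrative sketch: compute each facility's rate of an MDS-derived
# warning sign (here, pressure sores) and rank it against the other
# facilities in the state. Field names and data are hypothetical.

def facility_rate(residents, flag):
    """Share of a facility's residents whose assessment shows the flag."""
    flagged = sum(1 for r in residents if r.get(flag))
    return flagged / len(residents) if residents else 0.0

def state_percentile(rates, facility_id):
    """Percentage of facilities in the state with a lower rate."""
    mine = rates[facility_id]
    below = sum(1 for r in rates.values() if r < mine)
    return 100.0 * below / len(rates)

state = {
    "A": [{"pressure_sore": True}, {"pressure_sore": False}],
    "B": [{"pressure_sore": False}, {"pressure_sore": False}],
    "C": [{"pressure_sore": True}, {"pressure_sore": True}],
}
rates = {fid: facility_rate(res, "pressure_sore") for fid, res in state.items()}
print(round(state_percentile(rates, "C"), 1))  # 66.7: C exceeds 2 of 3 facilities
```

A facility whose rate ranks high relative to its in-state peers would then be treated as a numeric warning sign meriting closer review during the survey.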
In addition, by using the quality indicators before the on-site visit to select a preliminary sample of residents to review, surveyors should be better prepared to identify potential care problems. In addition to quality oversight, some state Medicaid programs and Medicare use MDS data to adjust nursing home payments to reflect the expected resource needs of their residents. Such payment systems are commonly known as “case-mix” reimbursement systems. Because not all residents require the same amount of care, the rate paid for each resident is adjusted using a classification system that groups residents based on their expected costs of care. Facilities use MDS data to assign residents to case-mix categories or groups that are defined according to clinical condition, functional status, and expected use of services. In Medicare, these case-mix groups are known as resource utilization groups. Each case-mix group represents beneficiaries who have similar nursing and therapy needs. As of January 2001, 18 states had introduced such payment systems for their Medicaid programs. As directed by the Congress, HCFA in 1998 implemented a prospective payment system (PPS) for skilled nursing facilities (SNF)—nursing homes that are certified to serve Medicare beneficiaries. The SNF PPS also uses MDS data to adjust nursing home payments. MDS Review Activities Can Be On-Site or Off-Site States and CMS use the term “accuracy reviews” to describe efforts that help ensure MDS assessments accurately reflect residents’ conditions. Review activities can be performed on-site—that is, at the nursing home—or off-site. On-site reviews generally consist of documentation reviews to determine whether the resident’s medical record supports the MDS assessment completed by the facility. If the MDS assessment is recent, the review may also include direct observation of the resident and interviews with nursing home staff who have recently evaluated or treated the resident.
While documentation reviews may also be conducted outside of the nursing home, other off-site reviews of MDS data include examining trends across facilities. For example, off-site review activities could involve the examination of monthly reports showing the distribution of residents’ case-mix categories across different facilities in a state. Similarly, off-site reviews could also involve an examination of particular MDS elements, such as the distribution of ADLs within and across nursing homes to identify aberrant or inconsistent patterns that may indicate the need for further investigation. Off-site and on-site reviews may also be combined as a way of leveraging limited resources to conduct MDS accuracy activities. Only Eleven States Conduct Separate On-Site or Off-Site Reviews of MDS Accuracy Eleven states conduct separate MDS accuracy reviews apart from their standard nursing home survey process. Ten of these states’ reviews were in operation as of January 2001. An additional 7 states reported that they intend to initiate similar accuracy reviews. All 18 of these states either currently use an MDS-based Medicaid payment system or plan to implement such a system. The remaining 33 states have no plans to implement separate MDS review programs and currently rely on their periodic nursing home surveys for MDS oversight. In all but one of the states with separate MDS review programs operating as of January 2001, accuracy reviews entail periodic on-site visits to nursing homes. The reviews focus on whether a sample of MDS assessments completed by the facility is supported by residents’ medical records. If the MDS assessments reviewed are recent enough that residents are still in the facility and their health status has not changed, the on-site review may also be supplemented with interviews of nursing home staff familiar with the residents, as well as observations of the residents themselves, to validate the record review.
About half of these states also conduct off-site data analyses in which reviewers look for significant changes or outliers, such as facilities with unexplained large shifts in the distribution of residents across case-mix categories over a short period. Officials primarily attributed the errors found during their on-site reviews to differences in clinical interpretation and mistakes, such as a misunderstanding of MDS definitions. A few of these states have been able to show some recoupments of Medicaid payments since the implementation of their on-site review programs. Most States Do Not Have Separate MDS Review Programs Of the 50 states and the District of Columbia, only 11 conduct accuracy reviews of MDS data that are separate from the state’s nursing home survey process. (See table 1.) These 11 states provide care to approximately 22 percent of the nation’s nursing home residents and all but one have an MDS-based payment system (Virginia began conducting MDS accuracy reviews in April 2001 in anticipation of adopting such a payment system in 2002). Seven additional states plan to initiate separate MDS reviews—three currently have an MDS-based payment system and four are planning to implement such a payment system. Officials in the 10 states with separate, longer standing MDS review programs said that the primary reason for implementing reviews was to ensure the accuracy of the MDS data used in their payment systems. Several of these states also indicated that the use of MDS data in generating quality indicators was another important consideration. Vermont officials, in particular, emphasized the link to quality of care, noting that the state had created its own MDS-based quality indicators prior to HCFA’s requirement to use quality indicators in nursing home surveys.
A state official told us it was critical that the MDS data be accurate because Vermont was making this information available to the public as well as using it internally as a normal part of the nursing home survey process. To varying degrees, three major factors influenced the decision of 33 states not to establish separate MDS review programs. First, the majority—28 states—do not have MDS-based Medicaid payment systems. Second, some states cited the cost of conducting separate reviews. Kansas, for example, reported a lack of funding and staff resources as the reason for halting a brief period of on-site visits in 1996 as a follow-up to nursing home surveys. Arkansas similarly reported insufficient staff for conducting a separate review of MDS data. Finally, officials in about one-third of the states without separate MDS reviews volunteered that they had some assurance of the accuracy of MDS data either because of training programs for persons responsible for completing MDS assessments or because of the nursing home survey process. For example, Missouri operates a state-funded quality improvement project in which nurses with MDS training visit facilities to assist staff with the MDS process and use of quality indicator reports. North Carolina also reported that its quarterly training sessions provide MDS training to approximately 800 providers a year. Regarding standard surveys, Connecticut and Maryland reported that their nursing home survey teams reviewed MDS assessments to determine if they were completed correctly and if the assessment data matched surveyor observations of the resident. In Connecticut, surveyors may also review a sample of facility MDS assessments for possible errors whenever they identify aberrant or questionable data on the quality indicator reports.
Officials in the 10 states with separate, longer standing MDS review programs generally said that the survey process itself does not detect MDS accuracy issues as effectively as separate MDS review programs. Some noted that nursing home surveyors do not have time to thoroughly review MDS accuracy and often review a smaller sample size than MDS reviewers. The surveyors’ primary focus, they indicated, was on quality of care and resident outcomes—not accuracy of MDS data. For example, surveyors would look at whether the resident needed therapy and whether it was provided. In contrast, the MDS reviewer would calculate the total number of occupational, speech, and physical therapy minutes to ensure that the resident was placed in the appropriate case-mix category. Officials in Iowa similarly noted that surveyors do not usually cite MDS accuracy as a specific concern unless there are egregious MDS errors, again, because the focus of the survey process is on quality of care. States with Separate MDS Review Programs Emphasize On-Site Oversight, but Also Conduct Off-Site Monitoring Nine of the 10 states with separate, longer standing MDS accuracy review programs use on-site reviews to test the accuracy of MDS data, generally visiting all or a significant portion of facilities in the state at least annually, if not more frequently. (See app. I for a summary of state on-site review programs.) Due to a lack of staff, one state—West Virginia—limits its MDS reviews to off-site analysis of facility-specific monthly data. Most of these states have been operating their MDS review programs for 7 years or longer and developed them within a year of implementing an MDS-based payment system. Three of the nine states arrive at the facility unannounced while the other six provide advance notice ranging from 48 hours to 2 weeks. The sample of facility MDS assessments reviewed by each state varies considerably.
Assessment sample sizes generally range from 10 to 40 percent of a nursing home’s total residents but some states select a specific number of residents, not a percentage, and a few specifically target residents in particular case-mix categories. For example, Indiana selects a sample of 40 percent—or no less than 25 residents—across all major case-mix categories, while Ohio’s sample can be based on a particular case-mix category, such as residents classified as “clinically complex.” Iowa officials told us that its reviewers select at least 25 percent of a facility’s residents, with a minimum of 5 residents, while Pennsylvania chooses 15 residents from each facility, regardless of case-mix category or facility size. Some states expand the resident sample when differences between the MDS assessment and supporting documentation reach a certain threshold. For example, if the on-site review for the initial sample in Iowa finds that 25 percent or more of the MDS assessments have errors, a supplemental random sample is selected for review. While a few states limit their sample to Medicaid residents only, most select assessments to review from the entire nursing home’s population. On-site reviews generally involve a comparison of the documentation in the resident’s medical record to the MDS assessment prepared by the facility. Generally, the on-site process also allows reviewers to interview nursing home staff and to directly observe residents, permitting a better understanding of the documentation in a resident’s medical record and clarifying any discrepancies that may exist. Staff interviews and resident observations can enhance the reviewer’s understanding of the resident’s condition and allow a more thorough MDS review than one relying primarily on documentation. However, as the interval between the facility’s MDS assessment and the on-site review increases, staff interviews and resident observations become less reliable and more difficult to conduct.
For example, staff knowledge of a particular patient may fade over time, the patient’s health status may change, or the patient may be discharged from the facility. Pennsylvania officials, who reported reviewing assessments that were 6 to 12 months old, told us that the state’s MDS reviews tended to identify whether the nursing home had adequate documentation. Reviewing such old assessments tends to focus the review process on the adequacy of the documentation rather than on whether the MDS assessment was accurate. Four of the nine states review assessments between 30 and 90 days old, a process that likely increases the value of interviews and observation. The combination of interviews and observations can be valuable, but limiting reviews to only recent MDS assessments and providing homes advance notice may undermine the effectiveness of on-site reviews. Under such circumstances, facilities have an opportunity to focus on the accuracy of their recent assessments, particularly if the nursing home knows when their reviews will occur, instead of adopting facility-wide practices that increase the accuracy of all MDS assessments. Based on their on-site reviews, officials in the nine states identified seven areas as having a high potential for MDS errors, with two areas most often identified as being among the highest potential for error: (1) mood and behavior and (2) nursing rehabilitation and restorative care. (See fig. 1.) Assessments of resident mood and behavior are used to calculate quality indicators and, along with nursing rehabilitation and restorative care, are often important in determining nursing home payments. CMS indicated that several of the MDS elements cited in figure 1 were also identified by a CMS contractor as areas of concern. 
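Several of the sample-selection rules described earlier in this section share a percentage-of-census-with-a-floor pattern, and Iowa's rule adds a supplemental sample once an error-rate threshold is reached. A minimal sketch, using the parameters the states reported; the function names and the rounding choice are illustrative assumptions:

```python
# Sketch of the states' reported sampling rules: Indiana (40 percent
# with a floor of 25 residents), Iowa (25 percent with a floor of 5,
# plus a supplemental sample when 25 percent or more of reviewed
# assessments have errors). Everything beyond those parameters is
# illustrative.

def sample_size(census, pct, floor):
    """Number of MDS assessments to pull: a percentage with a minimum."""
    return max(round(census * pct), floor)

def iowa_needs_supplemental(reviewed, with_errors, threshold=0.25):
    """Iowa expands the sample when the error rate hits the threshold."""
    return with_errors / reviewed >= threshold

# Indiana: 40 percent, but no fewer than 25 residents
print(sample_size(100, 0.40, 25))  # 40
print(sample_size(40, 0.40, 25))   # floor applies: 25

# Iowa: 25 percent, minimum of 5 residents
print(sample_size(12, 0.25, 5))    # floor applies: 5
print(iowa_needs_supplemental(reviewed=20, with_errors=5))  # True (25 percent)
```

The floor keeps reviews meaningful at small facilities, while the percentage scales the workload at large ones, which matches the report's description of how the states balance coverage against limited review staff.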
Officials in most states with separate on-site review programs told us that errors discovered during their on-site reviews often resulted from differences in clinical interpretation or mistakes, such as a misunderstanding of MDS definitions by those responsible for completing MDS assessments. Officials in only four of the nine states were able to tell us whether the errors identified in their MDS reviews on average resulted in a case-mix category that was too high or too low. Two of these states reported roughly equal numbers of MDS errors that inappropriately placed a resident in either a higher or lower case-mix category; a third indicated that errors more often resulted in higher payments; and a fourth found that errors typically resulted in payments that were too low. None of the nine states track whether quality indicator data were affected by MDS errors. Two of the 10 states with MDS review programs were able to tell us the amount of Medicaid recoupments resulting from inaccurate MDS assessments. From state fiscal years 1994 through 1997, South Dakota officials reported that the state had recouped about $360,000 as a result of recalculating nursing home payments after MDS reviews. West Virginia received $1 million in 1999 related to MDS errors for physical therapy discovered during a 1995 on-site review at a nursing home. Officials in five additional states told us that they recalculate nursing home payments when MDS errors are found, but could not provide the amount recovered. Of the 10 states with longer standing MDS review programs, four use off-site analyses to supplement their on-site reviews, while one state relies on off-site analyses exclusively. Both Maine and Washington examine MDS data off-site to monitor changes by facility in the mix of residents across case-mix categories. Such changes may help identify aberrant or inconsistent patterns that may indicate the need for further investigation. 
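Off-site monitoring of this kind amounts to comparing each facility's case-mix distribution across reporting periods and flagging large shifts for follow-up. A minimal sketch, with hypothetical facility data and an assumed 10-percentage-point threshold (the report does not specify the thresholds states actually use):

```python
def flag_facilities(current, prior, threshold=0.10):
    """Flag facilities whose share of residents in any case-mix category
    shifted by more than `threshold` between reporting periods.
    Data shapes and the threshold value are hypothetical."""
    flagged = []
    for facility, mix in current.items():
        old = prior.get(facility, {})
        if any(abs(share - old.get(cat, 0.0)) > threshold
               for cat, share in mix.items()):
            flagged.append(facility)
    return flagged
```

A facility whose "clinically complex" share jumped from 15 to 30 percent would be flagged for further investigation, while one with a 2-point shift would not.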
Ohio, a state with approximately 1,000 facilities—more than any other state that conducts MDS reviews—analyzes data off-site to identify facilities with increased Medicaid payments and changes in case-mix categories to select the approximately 20 percent of facilities visited each year. West Virginia has eliminated its on-site reviews and now focuses solely on analyzing monthly reports for its 141 facilities—for example, significant changes in case-mix categories or ADLs across consecutive MDS assessments. In addition to informally sharing results of off-site reviews with the state nursing home surveyors, West Virginia is trying to formalize a process in which off-site reviews could trigger additional on-site or off-site documentation reviews.

States Attempt to Improve MDS Data Accuracy through On-Site Reviews, Training, and Other Remedies

Officials in the nine states with on-site review programs consistently cited three features of their review programs that strengthened the ability of nursing home staff to complete accurate MDS assessments and thus decrease errors: (1) the actual presence of reviewers, (2) provider education, and (3) remedies that include corrective action plans and financial penalties. On-site reviews, for example, underscore the state’s interest in MDS accuracy and provide an opportunity to train and coach those who are responsible for completing MDS assessments. Similarly, the errors discovered during on-site reviews guide the development of more formal training sessions that are offered by the state outside of the nursing home. Requiring nursing homes to prepare corrective action plans and imposing financial penalties signal the importance of MDS accuracy to facilities and are tools to improve the accuracy of the MDS data. As a result of these efforts, some states have been able to show a notable decrease in their overall error rates. 
Most of the nine states view on-site visits and training as interrelated elements that form the foundation of their MDS review programs. State officials said that nursing homes pay more attention to properly documenting and completing the MDS assessments because reviewers visit the facilities regularly. On-site visits also allow reviewers to discuss MDS documentation issues or requirements with staff, providing an opportunity for informal MDS training. For example, Indiana officials told us that 2 to 3 hours of education are a routine part of each facility’s MDS review. Noting the high staff turnover rates in nursing homes, many states reported that frequent training for the staff responsible for completing MDS assessments is critical. Officials in seven of the nine states with on-site reviews told us that high staff turnover was one of the top three factors contributing to MDS errors in their states. In addition, many of the reasons cited for MDS errors—such as a misunderstanding of MDS definitions and other mistakes—reinforce the need for training. States with on-site reviews use the process to guide provider education activities—both on-site and off-site. For example, during Pennsylvania’s annual MDS reviews of all nursing homes, state reviewers determine the types of training needed. According to state officials, the state uses the results of these reviews to shape and provide facility-specific training, if it is needed, within a month of the review and subsequently conducts a follow-up visit to see if the facility is improving in these areas. They indicated that all 685 homes visited during 2000, the first year of this approach, were provided with some type of training. To improve MDS accuracy, several states also provide voluntary training opportunities outside of the nursing home. 
Maine, Iowa, Indiana, and South Dakota, for example, provide MDS training regularly throughout the state, rotating the location of the training by region so that it is accessible to staff from all facilities. While states generally emphasized on-site reviews and training as the primary ways to improve the accuracy of the MDS data, some reported that they have also instituted certain remedies, such as corrective action plans and financial penalties. Indiana and Pennsylvania, for example, require facilities to submit a corrective action plan detailing how the facility will address errors identified during an on-site review. Two states—Maine and Indiana—impose financial penalties. Maine has instituted financial penalties for recurring serious errors, collecting approximately $390,000 since late 1995. Maine also requires facilities with any MDS errors that result in a case-mix category change to complete and submit a corrected MDS assessment for the resident. While Indiana imposes financial penalties, it does not view them as the primary tool for improving MDS accuracy. Rather, officials attributed a decrease in MDS errors to the education of providers and the on-site presence of reviewers. Other remedies cited by states include conducting more frequent on-site MDS reviews and referring suspected cases of fraud to their state’s Medicaid Fraud Control Unit. Five of the nine states that conduct on-site MDS reviews told us that their efforts have resulted in a notable decrease in MDS errors across all facilities since the implementation of their review programs. (See table 2.) South Dakota officials, for example, reported that the percentage of assessments with MDS errors across facilities had decreased from approximately 85 percent to 10 percent since the implementation of the state’s MDS review program in 1993. Similarly, Indiana reported a decrease in the statewide average error rate from 75 percent to 30 percent of assessments in 1 year’s time. 
Four states could not provide these data. In calculating these decreases, three of the five states—Indiana, Maine, and South Dakota—define MDS errors as an unsupported MDS assessment that caused the case-mix category to be inaccurate. Iowa’s definition, however, includes MDS elements that are not supported by medical record documentation, observation, or interviews, regardless of whether the MDS error changed the case-mix category. Similarly, while Pennsylvania does not limit errors to those that changed the case-mix category, the state defines errors as a subset of MDS elements that are not supported by the medical record.

CMS’ MDS Review Program Could Better Leverage Existing State and Federal Accuracy Activities

Following implementation of Medicare’s MDS-based payment system in 1998, HCFA began building the foundation for its own separate review program—distinct from state efforts—to help ensure the accuracy of MDS data. In the course of developing and testing accuracy review approaches, its contractor found widespread MDS errors that resulted in a change in Medicare payment categories for 67 percent of the resident assessments sampled. In September 2001, CMS awarded a new contract to implement a nationwide MDS review program over a 2- to 3-year period. Despite the benefits of on-site reviews, as demonstrated by states with separate review programs, the current plan involves conducting on-site reviews in fewer than 200 of the nation’s 17,000 nursing homes each year. In addition, the contractor’s combined on-site and off-site reviews to evaluate MDS accuracy will involve only about 1 percent of the approximately 14.7 million MDS assessments expected to be prepared in 2001. In contrast, states that conduct separate on-site MDS reviews typically visit all or a significant portion of their nursing homes and generally examine from 10 to 40 percent of assessments. 
While CMS’ approach may yield some broad sense of the accuracy of MDS assessments on an aggregate level, it may be insufficient to help ensure the accuracy of MDS assessments in most of the nation’s nursing homes. At present, it does not appear that CMS plans to leverage the considerable resources already devoted to state nursing home surveys and states’ separate MDS review programs that together entail a routine on-site presence in all nursing homes nationwide. Nor does it plan to more systematically evaluate the performance of state survey agencies regarding MDS accuracy through its own federal comparative surveys. Finally, CMS is not requiring nursing homes to provide documentation for the full MDS assessment, which could undermine the efficacy of its MDS reviews.

Testing of MDS Accuracy Approaches Identified Widespread Accuracy Problems

In September 1998, HCFA contracted with Abt Associates to develop and test various on-site and off-site approaches for verifying and improving the accuracy of MDS data. Two of the approaches resembled state on-site MDS reviews and the off-site documentation reviews performed by CMS contractors that review Medicare claims. Another approach used off-site data analysis to target facilities for on-site review. To determine the effectiveness of the approaches tested in identifying MDS inaccuracies, Abt compared the errors found under each approach to those found in its “reference standard”—independent assessments performed by MDS-trained nurses hired by Abt for approximately 600 residents in 30 facilities in three states. Abt found errors in every facility, with little variation in the percentage of assessments with errors across facilities. On average, the errors found affected case-mix categories in 67 percent of the sampled Medicare assessments. 
Abt concluded that the errors did not result in systematic overpayments or underpayments to facilities even though there were more errors that placed residents in too high as opposed to too low a case-mix category. Abt did not determine, however, the extent to which errors affected quality indicators. Due to the prevalence of errors, Abt recommended a review program that included periodically visiting all facilities during the program’s first several years. Recognizing the expense of visiting every facility, however, Abt also recommended eventually transitioning to the use of off-site mechanisms to target facilities and specific assessments for on-site review. Abt also made recommendations to address the underlying causes of MDS errors: simplifying the MDS assessment tool, clarifying certain MDS definitions (particularly for ADLs), and improving MDS training for facilities.

The Federal MDS Review Program Is Too Limited to Evaluate State-Level Accuracy Assurance Efforts

Building on the work of Abt Associates, in the summer of 2000, the agency began formulating its own distinct nationwide review program to address long-term MDS monitoring needs. The agency developed a request for proposal for MDS data assessment and verification activities and sought proposals from its 12 program safeguard contractors. On September 28, 2001, CMS awarded a 3-year contract for approximately $26 million to Computer Sciences Corporation. The contract calls for the initiation of on-site and off-site reviews by late spring 2002, but the full scope of MDS review activities will not be underway until the second year of the contract. (See table 3.) Despite this broad approach, the contractor is not specifically tasked with assessing the adequacy of each state’s MDS reviews. 
Instead, it is required to develop a strategy for coordinating its review activities with other state and federal oversight, such as the selection of facilities and the timing of visits, to avoid unnecessary overlap with routine nursing home surveys or states’ separate MDS review programs. This approach does not appear to build on the benefits of on-site visits that are already occurring as part of state review activities. Rather, the contract specifies independent federal on-site and off-site reviews of roughly 1 percent of the approximately 14.7 million MDS assessments expected to be prepared in 2001—80,000 during the first contract year and 130,000 per year thereafter. The contractor, however, tentatively recommended that the majority of reviews, about 90 percent, be conducted off-site. According to CMS, these off-site reviews could include a range of activities, such as the off-site targeting approaches developed by Abt or medical record reviews similar to those conducted by CMS contractors for purposes of reviewing Medicare claims. In addition, the contractor is expected to conduct a range of off-site data analyses that could include a large number of MDS assessments. The remaining 10 percent of MDS assessments—representing fewer than 200 of the nation’s 17,000 nursing homes—would be reviewed on-site each year. This limited on-site presence is inconsistent with Abt’s earlier recommendation regarding the benefits of on-site reviews in detecting accuracy problems, and with the view of almost all of the states with separate MDS review programs that an on-site presence at a significant number of their nursing homes is central to their review efforts. While CMS’ approach may yield some broad sense of the accuracy of MDS assessments on an aggregate level, it appears to be insufficient to provide confidence about the accuracy of MDS assessments in the vast majority of nursing homes nationwide. 
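The scale of the planned federal effort can be checked against the report's own figures (130,000 reviews per year in steady state, roughly 14.7 million assessments, and about 10 percent of reviews conducted on-site):

```python
total_assessments = 14_700_000   # approximate MDS assessments expected in 2001
annual_reviews = 130_000         # contractor's reviews per year after the first
on_site_share = 0.10             # roughly 10 percent of reviews done on-site

coverage = annual_reviews / total_assessments   # share of all assessments reviewed
on_site_reviews = annual_reviews * on_site_share

print(f"{coverage:.1%} of assessments reviewed")       # 0.9%
print(f"{on_site_reviews:,.0f} on-site reviews/year")  # 13,000
```

In other words, fewer than one in a hundred assessments would be reviewed at all, and only about 13,000 of them on-site, which is consistent with the report's estimate of fewer than 200 of the 17,000 homes receiving a visit each year.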
Given the substantial resources invested in on-site nursing home visits associated with standard surveys or states’ separate MDS review programs, CMS’ MDS review program could view states’ routine presence as the cornerstone of its program and instead focus its efforts on ensuring the adequacy of state reviews. CMS could build on its established federal monitoring survey process for nursing home oversight. The agency is required by statute to annually resurvey at least 5 percent of all nursing homes that participate in Medicare and Medicaid. One of the ways CMS accomplishes this requirement is by conducting nursing home comparative surveys to independently assess the states’ performance in their nursing home survey process. During a comparative survey, a federal team independently surveys a nursing home recently inspected by a state in order to compare and contrast the results. These federal comparative surveys have been found to be most effective when completed in close proximity to the state survey and involve the same sample of nursing home residents to the maximum extent possible. Abt also attempted to review recently completed MDS assessments. Finally, a potential issue that could undermine the efficacy of the federal MDS accuracy reviews involves the level of documentation required to support an MDS assessment. CMS requires specific documentation for some MDS elements, but officials said that the MDS itself—which can simply consist of checking off boxes or selecting multiple choice answers on the assessment form—generally constitutes support for the assessment without any additional documentation. CMS officials consider the MDS assessment form to have equal weight with the other components of the medical record, such as physician notes and documentation of services provided. As a result, CMS asserts that the assessment must be consistent with, but need not duplicate, the medical record. 
In contrast, most of the nine states with separate on-site review programs require that support for each MDS element that they review be independently documented in the medical record. State officials told us that certain MDS elements, such as ADLs, are important to thoroughly document because they require observation of many activities by different nursing home staff over several days. As a result, some of these states require the use of separate flow charts or tables to better document ADLs. Similarly, some states require documentation for short-term memory loss rather than accepting a nursing home’s assertion that a resident has this condition. CMS’ training manual describes several appropriate tests for identifying memory loss, such as having a resident describe a recent event. In one of its December 2000 reports, the HHS OIG recommended that nursing homes be required to establish an “audit trail” to support certain MDS elements. HCFA disagreed, noting that it does not expect all information in the MDS to be duplicated elsewhere in the medical record. However, given the uses of MDS data, especially in adjusting nursing home payments and producing quality indicators, documenting the basis for the MDS assessments in the medical record is critical to assessing their accuracy.

Conclusions

In complying with federal nursing home participation and quality requirements, about 17,000 nursing homes were expected to produce almost 15 million MDS assessments during 2001 on behalf of their residents. This substantial investment of nursing home staff time contributes to multiple functions, including establishing patient care plans, assisting with quality oversight, and setting nursing home payments that account for variation in resident care needs. 
While some states, particularly those with MDS-based Medicaid payment systems, stated that ensuring MDS accuracy requires establishing a separate MDS review program, many others rely on standard nursing home surveys to assess the data’s accuracy. Flexibility in designing accuracy review programs that fit specific state needs, however, should not preclude achieving the important goal of ensuring accountability across state programs. It is CMS’ responsibility to consistently ensure that states are fulfilling statutory requirements to accurately assess and provide for the care needs of nursing home residents. The level of federal financial support for state MDS accuracy activities is already substantial. The federal government pays up to 75 percent of the cost of separate state MDS review activities and in fiscal year 2001 contributed $278 million toward the cost of the state nursing home survey process, which is intended in part to review MDS accuracy. Instead of establishing a distinct but limited federal review program, reorienting its review program to complement ongoing state MDS accuracy efforts could prove to be a more efficient and effective means for CMS to achieve its stated goals. Such a shift in focus should include (1) taking full advantage of the periodic on-site visits already conducted at every nursing home nationwide through the routine state survey process, (2) ensuring that the federal MDS review process is designed and sufficient to consistently assess the performance of all states’ reviews for MDS accuracy, and (3) providing additional guidance, training, and other technical assistance to states as needed to facilitate their efforts. 
With its established federal monitoring system for nursing home surveys—especially the comparative survey process—that helps assess state performance in conducting the nursing home survey process, CMS has a ready mechanism in place that it can use to systematically assess state performance for this important task. Finally, to help improve the effectiveness of MDS review activities, CMS should take steps to ensure that each MDS assessment is adequately supported in the medical record.

Recommendations for Executive Action

With the goal of complementing and leveraging the considerable federal and state resources already devoted to nursing home surveys and to separate MDS accuracy review programs, we recommend that the administrator of CMS review the adequacy of current state efforts to ensure the accuracy of MDS data, and provide, where necessary, additional guidance, training, and technical assistance; monitor the adequacy of state MDS accuracy activities on an ongoing basis, such as through the use of the established federal comparative survey process; and provide guidance to state agencies and nursing homes that sufficient evidentiary documentation to support the full MDS assessment be included in residents’ medical records.

Agency and State Comments and Our Evaluation

We provided a draft of this report to CMS and the 10 states with separate MDS accuracy programs for their review and comment. (See app. II for CMS’ comments.) CMS agreed with the importance of assessing and monitoring the adequacy of state MDS accuracy efforts. CMS also recognized that the MDS affects reimbursement and care planning and that it is essential that the assessment data reflect the resident’s health status so that the resident may receive the appropriate quality care and that providers are appropriately reimbursed. However, CMS’ comments did not indicate that it planned to implement our recommendations and reorient its MDS review program. 
Rather, CMS’ comments suggested that its current efforts provide adequate oversight of state activities and complement state efforts. While CMS stated that it currently evaluates, assesses, and monitors the accuracy of the MDS through the nursing home survey process, it also acknowledged the wide variation in the adequacy of current state accuracy review efforts. Our work in the 10 states with separate MDS review programs raised serious questions about the thoroughness and adequacy of the nursing home survey process for reviewing MDS accuracy. Officials in many of these states said that the survey process itself does not detect MDS accuracy issues as effectively as separate MDS review programs. Surveyors, we were told, do not have time to thoroughly review MDS accuracy and their focus is on quality of care and resident outcomes, not accuracy of MDS data. In response to our recommendations on assessing and monitoring the adequacy of each state’s MDS reviews, CMS commented that it would consider adding a new standard to the state performance expectations that the agency initiated in October 2000. CMS indicated that the state agency performance review program would result in a more comprehensive assessment of state activities related to MDS accuracy than could be obtained through the comparative survey process. CMS also outlined planned analytic activities—such as a review of existing state and private sector MDS review methodologies and instruments, ongoing communications with states to share the knowledge gained, and comprehensive analyses of MDS data to identify systemic accuracy problems within states as well as across states—that it believes will help to evaluate state performance. We agree that some of CMS’ proposed analytic activities could provide useful feedback to states on problem areas at the provider, state, region, and national levels. 
Similarly, the addition of MDS accuracy activities to its state performance standards for nursing home surveys, which CMS is considering, has merit. While CMS plans to consider adding a new standard to its state agency performance review program, the agency has a mechanism in place—the comparative survey process—that it could readily use to systematically assess state performance. However, CMS apparently does not intend to do so. Based on our discussions with agency officials, it does not appear that CMS’ approach will yield a consistent evaluation of each state’s performance. We continue to believe that assessment and routine monitoring of each state’s efforts should be the cornerstone of CMS’ review program. As we previously noted, the agency’s proposed on-site and off-site reviews of MDS assessments are too limited to systematically assess MDS accuracy in each state and would consume resources that could be devoted to complementing and overseeing ongoing state activities. A comprehensive review of the adequacy of state MDS accuracy activities, particularly in those states without a separate review program, is essential to establish a baseline and to allow CMS to more efficiently target additional guidance, training, or technical assistance that it acknowledged is necessary. CMS did not agree with our recommendation that it should provide guidance to states regarding adequate documentation in the medical record for each MDS assessment. CMS stated that requiring documentation of all MDS items places an unnecessary burden on facilities. Skilled reviewers, it stated, should be able to assess the accuracy of completed MDS assessments through a combination of medical record review, observation, and interviews. CMS further stated that requiring duplicative documentation might result in documentation that is manufactured and of questionable accuracy. 
Of course, the potential for manufactured data could also be an issue with the MDS, when supporting documentation is absent or limited. Without adequate documentation, it is unclear whether the nursing home staff sufficiently observed the resident to determine his or her care needs or merely checked off a box on the assessment form. We continue to believe, as do most of the states with separate MDS review programs, that requiring documentation for the full MDS assessment is necessary to ensure the accuracy of MDS data. In our view, however, this documentation need not be duplicative of that which is already in the medical record but rather demonstrative of the basis for the higher-level summary judgments about a resident’s condition. Some states have already developed tools to accomplish this, and in commenting on a draft of this report, two states said that CMS should establish documentation requirements for responses on the MDS. In addition, the discrepancies cited by the HHS OIG in its studies stemmed from inconsistencies between MDS assessments and documentation in residents’ medical records. The OIG acknowledged that the results of its analyses were limited by the information available in the medical record—for example, when a facility MDS assessment was based on resident observation, the facility may not have documented these observations in the medical record. The importance of adequate documentation is further reinforced by the fact that using interviews and observation to validate MDS assessments may often not be possible, particularly for residents who have been discharged from the nursing home before an MDS accuracy review. Given the importance of MDS data in adjusting nursing home payments and guiding resident care, documenting the basis for the MDS assessment—in a way that can be independently validated—is critical to achieving its intended purposes. CMS provided additional clarifying information that we incorporated as appropriate. 
In addition, the states that commented on the draft report generally concurred with our findings and provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days after its date. At that time, we will send copies to the administrator of CMS; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-7114 or Walter Ochinko at (202) 512-7157. Major contributors to this report include Carol Carter, Laura Sutton Elsberg, Leslie Gordon, and Sandra Gove.

Appendix I: Summary of State On-Site MDS Reviews As of January 2001

[Table: for each state, shows the year its MDS-based payment system and its MDS reviews began (ranging from 1988 to 2000), the frequency of on-site reviews (generally all facilities, most annually), the sample size reviewed (for example, a minimum of 10 assessments per facility, or at least 20 or 25 percent of a facility's residents), and the average time lapse between the facility MDS and the state review (from the most recent assessment up to 6-12 months).]

The types of MDS errors that commonly reoccur relate to misapplication of MDS definitions, and may in large part be due to facility staff turnover. In commenting on a draft of this report, officials told us that these errors are consistent with those found in other states with MDS-based payment systems. 
State plans to publish the results of MDS accuracy reviews on a Web page to prevent simple but recurring errors.

Virginia is not included because of the newness of its MDS review program (it began operating in April 2001). We have included the nine other states with longer standing on-site review programs.

This column reflects the frequency of initial reviews for each facility. Some states conduct follow-up reviews more frequently for facilities where problems have been identified.

We asked states to select from the following categories: more important, equally important, and less important.

Indiana officials added the following language to characterize MDS errors: An error occurs when the audit findings are different from the facility’s transmitted MDS data and those differences result in a different case-mix category.

Appendix II: Comments from the Centers for Medicare and Medicaid Services
Nursing homes that participate in Medicare and Medicaid must periodically assess the needs of residents in order to develop an appropriate plan of care. Such resident assessments are known as the minimum data set (MDS). According to officials in the 10 states with MDS accuracy review programs in operation as of January 2001, these programs were established to set Medicaid payments and identify quality of care problems. Nine of the 10 states conduct periodic on-site reviews in all or a significant portion of their nursing homes to assess the accuracy of the MDS data. These reviews sample a home's MDS assessments to determine whether the basis for the assessments is adequately documented in residents' medical records. These reviews often include interviews of nursing home personnel familiar with residents and observations of the residents themselves. States with separate MDS review programs identified various approaches to improve MDS accuracy. State officials highlighted the on-site review process itself and provider education activities as their primary approaches. State officials also reported such remedies as requiring nursing homes to prepare a corrective action plan or imposing financial penalties on nursing homes when serious or extensive errors in MDS data are found. Following the 1998 implementation of Medicare's MDS-based payment system, the Health Care Financing Administration began its own review program to ensure the accuracy of MDS data.
Background

Recognizing their mutual interests in the Great Lakes and other boundary waters, the United States and Canada signed the Boundary Waters Treaty in 1909, giving both countries equal rights to use the waterways that cross the international border. Accordingly, the treaty established the International Joint Commission (IJC), composed of three commissioners from each country, to help the two governments resolve and prevent disputes concerning boundary waters. With increased concern over the contamination of the Great Lakes, the two countries signed the first international Great Lakes Water Quality Agreement in 1972 to improve environmental conditions in the lakes. The agreement focused on controlling phosphorus as a principal means of dealing with eutrophication in the lakes. The parties signed a new agreement in 1978 that called for increased control of toxic substances and the restoration of water quality throughout the Great Lakes Basin. Subsequent amendments were made to the agreement in 1983 and 1987. The 1987 amendments added several annexes that focused on specific environmental concerns, such as contaminated sediment. The 1978 agreement as amended contains 17 annexes that define in detail the specific programs and activities that the two governments agreed upon and committed to implement. Although most of the annexes specify pollution prevention strategies, Annex 2 calls for the preparation of RAPs to address the restoration and protection of beneficial uses in specific contaminated areas designated as areas of concern and in the other open waters of the Great Lakes. Such areas may include areas along the Great Lakes' shoreline and areas that drain into the lakes that states and provinces identified as contaminated areas requiring cleanup.
The agreement binds the United States and Canada to cooperate with state and provincial governments to designate such areas of concern, with the IJC reviewing progress by each government in addressing actions to restore water quality in the lakes. The agreement as amended also directs that the public be consulted in the RAP process and that each RAP (1) define the environmental problems and the causes of these problems, (2) provide an evaluation of remedial measures, (3) select remedial measures, (4) provide an implementation schedule, (5) identify the organizations or individuals responsible for implementation, (6) include a process for evaluating remedial implementation and effectiveness, and (7) provide a description of the monitoring used to track effectiveness and confirm that the areas are restored. In defining the environmental problems, RAPs determine the applicability of 14 adverse environmental conditions to the area. Such impairments include beach closings, tainting of fish and wildlife flavor, and bird or animal deformities or reproduction problems. In addition, the Water Quality Act of 1987 amended the Clean Water Act to provide that EPA should take the lead in coordinating with other federal agencies and state and local authorities to meet the goals in the agreement. The act also established GLNPO within EPA to fulfill the United States' responsibilities under the agreement and to coordinate EPA's actions both at headquarters and in the affected regional offices. The Great Lakes Critical Programs Act of 1990 further amended the Clean Water Act, defining GLNPO's role and requiring that all RAPs be submitted to the office and that each plan be submitted to the IJC for review and comment. The 1990 act designated states as the primary parties for developing and implementing the plans, although ensuring successful completion of the plans remains the responsibility of the United States and EPA under the agreement and the Clean Water Act.
When Coastal Environmental Management (CEM) funding first became available in 1992, EPA officials decided to transfer oversight of the RAP process from GLNPO to the Water Divisions in EPA Regions II, III, and V, which border the Great Lakes, because those divisions administered other water program funding. For the past several years, we and others have reported on the slow progress of Great Lakes cleanup activities, making particular reference to the fact that neither GLNPO nor any other EPA office had devoted the necessary responsibility, authority, and resources to effectively coordinate and oversee cleanup efforts in the Great Lakes Basin. In 1990, we reported that the development of the RAPs and Lakewide Management Plans (LaMPs) called for in the agreement had fallen far behind schedule and recommended that EPA better coordinate GLNPO and EPA's headquarters offices to improve the process. Likewise, EPA's Office of Inspector General (OIG) reported in 1999 that EPA officials were not as effective as they could be in working with state and local officials on RAPs and recommended that one official coordinate these activities. The IJC, in its most recent biennial report, identified the RAP process as an area needing improvement and reported that the process for preparing RAPs and LaMPs was no longer being followed, in some cases resulting in an ad hoc modification of the annex. The IJC also reported that information on RAP implementation is not readily available in a standardized, consolidated format. Overall, the IJC concluded that although some progress had been made in the Great Lakes, the governments had not committed adequate funding or taken decisive actions to restore and protect the lakes.
Citing the public's right to know (and in an effort to get the program back on track), the IJC recommended a results-oriented approach, suggesting that the governments of the United States and Canada prepare one consolidated progress report that lists accomplishments, expenditures, what remains to be done, and the amount of funding and time needed to restore the contaminated areas to beneficial use.

Cleanup Progress Has Been Limited in Many Contaminated Areas

Progress in cleaning up the Great Lakes and restoring the contaminated areas to their beneficial uses has fallen behind where the parties hoped it would be. As of April 2002, most of the RAPs for which the United States was responsible were in the second stage, in which remedial and regulatory measures are selected; none had completed the third stage, which indicates completion of cleanup. (See table 1.) No area of concern in the United States has had its designation removed—that is, been delisted—although the Great Lakes Strategy 2002 plan, which was developed by representatives of federal, state, and tribal governments in the Great Lakes area, lists as one of its objectives the removal of 3 areas from the list by 2005 and 10 by 2010. The RAP process envisioned in the agreement is not being consistently used as a model for the cleanup activities occurring at the areas. While cleanup activities have occurred in many areas, such activities have generally resulted from other environmental programs or initiatives. The RAP process has essentially been abandoned for some areas and modified for others, and for a limited number of areas the process is being followed to address the environmental impairments. According to state officials, a major reason that the RAP process is not being followed is the general lack of funding, including funding from EPA. Whether the process is being followed at an area often depends in part on state involvement in the process and whether there is local interest.
As a result, implementation of the agreement is uneven across the areas, and in areas where the process has been abandoned, the initial investment in the process may have been largely wasted. Each of the eight Great Lakes states—Illinois, Indiana, Michigan, Minnesota, New York, Ohio, Pennsylvania, and Wisconsin—has approached the RAP process in a somewhat different manner since EPA reduced its funding, but in general the resources they have devoted to the process have diminished in the past 10 years, according to state officials. The state of Michigan, which contains 14 areas, completed the first stage of the RAP process—defining the environmental problems—in 1987 and 1988. In the preparation stage, the state funded a group of state coordinators, who spent part or all of their time on RAPs. Today, the coordinators spend only a small fraction of their time on RAPs and serve mainly as an area's informational point of contact. In addition, the state decided that it would no longer follow the three-stage process set forth in the agreement. Responsibility for the Michigan RAP process rests primarily with local groups known as public advisory councils, and while none of these groups has abandoned its work, state officials indicated that two groups are on the verge of quitting and that others have significantly decreased their activities. The officials further stated that, while RAPs may be a catalyst, they are not driving the implementation of the areas' cleanup activities. Instead, officials noted, other federal programs, such as Superfund, and state and nonprofit programs provide funding for cleanup and restoration activities. An organization representing the public advisory councils recently recommended that the state play a more aggressive role in supporting their efforts by providing funding and technical support. The state of New York, which has six contaminated areas, employs a part-time coordinator for each area.
According to state officials, overall activity in the RAP process has decreased over the years, but the state retains oversight of and commitment to the process. However, the RAP process is not the impetus for cleanup activities at the areas. Instead, other programs, such as EPA's program under the Resource Conservation and Recovery Act, have been used to clean up contaminated areas. In Wisconsin, which has five contaminated areas, work on the RAP process for the areas stopped after EPA decreased funding for RAP activities. As with other states, cleanup work continues at the areas through other programs, although the state completes projects consistent with a RAP only when it has the time and funds to do so, according to a state official. The state does not monitor RAP progress, and community groups are no longer actively involved in the process. In Ohio, which has four contaminated areas, the RAP process evolved differently in each area. For example, a structured process exists to address the environmental impairments in one area, but the process is less structured in two other areas and significantly modified in another, according to a state official. Community organizations are involved in three of the four areas. The state has also modified the three-stage process specified in the agreement, saying that the RAPs could never be used to clean up an area because they are not implementation documents, according to the official. In Minnesota, Illinois, Pennsylvania, and Indiana, which have one contaminated area each, any work underway in the areas is largely the result of other programmatic activity, such as the removal of contaminated sediment in Waukegan Harbor, Illinois, as part of the Superfund program. There is local involvement in the RAP process in the areas in Illinois, Pennsylvania, and Indiana. In Minnesota, a nonprofit group sponsors environmental projects in the region where the area is located, but it is not directly involved in the RAP process.
EPA and others often present environmental cleanup activities that relate to the goals of the RAP process as evidence that progress is being made at the areas, but these activities often relate to the goals of other programs, such as Superfund. Such reporting makes it difficult to determine what progress is being made in eliminating the impairments identified in the individual RAPs. In this connection, the members of the IJC responsible for reviewing the progress of the areas have reported their frustration in assessing RAP progress because EPA has not provided meaningful information to them.

EPA Is Not Fulfilling the Nation's Responsibility to Ensure the Cleanup of Contaminated Areas

EPA is not effectively fulfilling the nation's responsibilities to ensure that RAPs are developed and implemented in the contaminated areas. Several EPA actions, such as diffusing RAP responsibility within the agency, reducing federal funding and staff support for the RAP process, and shifting the agency's attention to other cleanup priorities in the Great Lakes Basin, have all contributed to the uneven progress in RAP development and implementation. For example, in 1992, EPA transferred the responsibility for overseeing the RAP process from GLNPO to its Water Divisions in Regions II, III, and V. GLNPO retained responsibility for certain RAP-related activities, such as preparing progress reports and funding research that affected the contaminated areas. The Water Divisions provided initial support and oversight for the RAP process, but following several sequential cutbacks in process-related state funding and staff, their capacity to oversee the RAP process was diminished to the point where EPA could no longer ensure the ultimate restoration of the contaminated areas. As support for the RAP process waned, EPA shifted its attention to other environmental problems in the Great Lakes, such as completing plans to address lakewide environmental problems.
Although important, these activities did not supplant the need for RAPs to address the contaminated areas.

Oversight Responsibility Within EPA for Contaminated Areas Is Unclear

Responsibility for oversight of the RAP process within EPA has changed over time, and today no office claims that responsibility. Amendments to the Clean Water Act in 1987 named EPA as the lead agency and charged GLNPO with coordinating EPA's actions aimed at improving the water quality of the Great Lakes. The act was amended in 1990 to, among other things, require GLNPO to ensure the submission of RAPs for each area of concern. The EPA administrator is responsible under the act for ensuring that GLNPO specifically delineate the duties, responsibilities, time commitments, and resource requirements with respect to Great Lakes activities when entering into agreements with other organizational elements within EPA. Shortly after the 1990 amendments were enacted, EPA officials transferred oversight of the RAP process from GLNPO to the Water Divisions in Regions II, III, and V, which border the Great Lakes. While this decision was not formally documented, an EPA official familiar with the decision stated that EPA headquarters considered GLNPO's primary focus to be on research and basin-wide activities. Furthermore, the official did not think that, as an office, GLNPO had the organizational mindset or capacity to oversee the RAP process. According to GLNPO officials, EPA believed the Water Divisions were more familiar with funding and managing similar programs. GLNPO, however, continued to track the status of RAPs and provide technical assistance and grant funds for projects associated with RAPs. In 1995, EPA's Region V office reorganized and created teams responsible for the Great Lakes, including their contaminated areas. These teams are focusing on developing and updating the LaMPs for each lake. The directors of GLNPO and the Region V Water Division share responsibility for the teams.
In addition to the CEM funds provided for RAPs by the Water Divisions, GLNPO's base budget has averaged about $14.5 million annually since 1993. During that same period, GLNPO awarded about $3.2 million annually to states, tribes, local organizations, and academic institutions to fund Great Lakes activities related to the areas, such as sediment research and pollution prevention. In a September 1999 report on EPA's Great Lakes Program, the EPA OIG recommended that EPA's Region V administrator clarify the role of GLNPO as it relates to RAPs and LaMPs. The administrator agreed with this recommendation and stated that GLNPO's roles and responsibilities would be addressed during the development and implementation of a Great Lakes strategy. At that time, regional officials expected this strategy to be completed by April 2000. EPA released its Great Lakes strategy on April 2, 2002; however, the strategy did not clarify GLNPO's roles and responsibilities for RAPs, nor did it include provisions for specific funding to carry out the strategy. GLNPO officials stated that they decided not to include this clarification in the strategy because it would have required more specifics than could be included in the document. Still, as of April 2002, the agency had not clarified GLNPO's role in any other document. GLNPO officials have stated that state and local governments are primarily responsible for implementing RAPs through their local pollution control programs, except when federal programs and authorities, such as Superfund, are in the lead for a particular effort. Further, other EPA officials have noted that the financial assistance provided to states for developing RAPs was intended only as seed money and that the states were expected to continue funding the process. Nevertheless, state and other EPA officials, including GLNPO officials, maintain that the federal government is ultimately responsible for the RAPs and for cleaning up the areas.
According to the director of the Water Division in Region V, there needs to be a clear delineation of oversight responsibility for RAPs, which are, in the end, a federal responsibility.

EPA Cut Funding and Staffing for Program-Related Activities

Over the past 10 years, EPA has taken several steps that have reduced its ability to sustain the RAP process, such as reducing the amounts of RAP-related funding allocated to the states and reducing the number of agency staff assigned to oversee RAP activities. To assist states in preparing RAPs for the contaminated areas, EPA provided funding to the states from the CEM program. States used the funding to hire staff to focus on the planning process and organize community involvement to develop the RAPs. The funding was allocated to the three EPA regions and then provided to the states. EPA decreased its regional CEM funding from $9.2 million in fiscal year 1992 to $2.5 million in fiscal year 2002. (See figure 1.) Approximately 75 percent of the CEM funding was provided to Region V in fiscal years 1992 through 2002. The director of the Water Division for EPA Region V stated that when the CEM funding first became available for work on both RAPs and LaMPs, seven or eight staff positions were provided for each of the six states in the region. The decrease in funding resulted in reductions in the staff committed to RAPs in the three states that we visited—Ohio, Michigan, and Wisconsin. For example, in Wisconsin, as the funding for RAPs and other Great Lakes activities was reduced, the state reduced its staff working on RAPs and LaMPs from nine full-time positions to one full-time and one part-time position. As a result, the state could no longer provide support for the local RAP committees or updates for the RAPs and stopped doing remedial action work at the contaminated areas unless it related to some other program, such as Superfund. EPA also reduced its own staffing levels for the RAPs.
The agency had funded RAP liaison positions to facilitate and coordinate work on RAPs. In EPA's Region V, which encompasses most of the areas of concern, there were 21 RAP liaisons in 1999, with at least one assigned to each area. As of 2001, this staffing had been reduced to two part-time liaisons and one full-time liaison. An EPA official responsible for the liaisons stated that work on RAPs was no longer a priority and that priorities had shifted to LaMPs. In fiscal year 2002, one person was assigned to work full-time on the RAP for the Detroit River area, but neither Region V nor EPA headquarters had any staff responsible for monitoring RAP progress. GLNPO has provided grant funding to the Great Lakes Commission, a binational agency promoting information sharing among Great Lakes states, to update information on the contaminated areas and the RAPs on GLNPO's Web site. The information provides an overview of the status of RAPs, with updated information provided by state or local officials. The information, however, does not present an analysis of the progress in cleaning up areas or time frames for expected completion.

EPA Has Shifted Its Focus to Other Great Lakes Activities

EPA has reduced support for the RAP process and redirected its efforts to several other Great Lakes initiatives, many of which are required in the agreement and either directly or indirectly affect the areas. Specifically, the Water Divisions have focused resources on the development of LaMPs. LaMPs address overall concerns in the open lake waters, such as reducing loadings of critical pollutants, but they do not replace the RAPs, which are intended to clean up the shoreline, where most of the contamination occurs. GLNPO has been involved in several other initiatives, including coordinating the development of a Great Lakes strategy. The strategy was developed by the U.S.
Policy Committee, which is composed of representatives from federal, state, and tribal organizations responsible for the actions specified in the agreement. The strategy sets forth certain goals, objectives, and actions the parties agree to address, including the following:

- The reduction of toxic substances in the Great Lakes Basin ecosystem. (The reduction of mercury and dioxin emissions from medical waste incinerators was one objective under this goal. A key action for this goal is that Minnesota will achieve a 70 percent reduction of its 1990 mercury emissions by 2005.)

- The development of environmental indicators for the Great Lakes through a series of State of the Lakes Ecosystem Conferences (SOLEC), at which indicators are discussed and agreed upon. These biennial conferences, jointly sponsored by GLNPO and Environment Canada, bring together representatives from federal, state, and provincial organizations and the general public. The latest conference, held in October 2001, approved 33 of the 80 indicators proposed to assess conditions in the Great Lakes.

- Maintaining a ship for research and monitoring on the Great Lakes and providing another vessel for sampling contaminated sediment.

The strategy also addresses cleaning up areas through RAPs and sets forth objectives to clean up and delist 3 areas by 2005 and 10 by 2010 and to accelerate sediment remediation efforts, leading to the cleanup of all sites by 2025. In addition, the strategy calls for delisting guidelines for the areas, which were completed in December 2001. The guidelines include tasks such as requiring monitoring data showing that restoration goals have been achieved and addressing impairments caused by local sources within the areas. While the strategy sets forth numerous environmental objectives, state environmental officials have questioned how the objectives will be achieved without additional funding.
Conclusions

The process now being used to develop and implement RAPs for many of the contaminated areas in the Great Lakes Basin has deviated from the process outlined in the agreement between the United States and Canada. Momentum for RAP activity has waned since EPA diffused the responsibility for ensuring RAP progress among its various offices, began reducing its staff and process-related funding to the states, and shifted its priorities to completing other activities in the Great Lakes Basin. As a result, states and local communities have had to seek funding from other federal programs or other sources in order to continue their cleanup activities. Although EPA's initial investment in the process yielded some results in terms of planning documents and public involvement, EPA is not in a position to provide assurance that such involvement will continue in the future or that the RAPs will be implemented. Without a clear delineation of oversight responsibilities within EPA for RAP implementation, all of the preliminary efforts and expenditures may have been largely wasted. Absent EPA's support, involvement, and consistent oversight, states and local communities will have difficulty keeping the process moving forward.

Recommendations for Executive Action

To help EPA more effectively oversee the RAP process and meet the United States' commitment under the Great Lakes Water Quality Agreement, we recommend that the EPA administrator clarify which office within EPA is responsible for ensuring RAP implementation and identify the actions, time periods, and resources needed to help EPA fulfill its RAP oversight responsibilities.

Agency Comments

We provided EPA with a draft of this report for its review and comment. The agency generally agreed with the findings and recommendations in the report. EPA maintained that significant progress was being made at the areas of concern, with most RAPs having completed Stage 2 and one having completed Stage 3.
However, we and the IJC do not believe that this represents significant progress, and no area of concern within the United States has been delisted. EPA also stated that the RAP process does not fairly represent the environmental improvements that are being made at the areas of concern. We recognize that some cleanup activities being taken within the areas of concern relate to other program requirements, but we maintain that the RAP process is still the primary cleanup vehicle. The agency also stated that it has been actively involved in ensuring that RAPs are developed and that it is reviewing the RAP process to create a more effective program. While this may have been true initially, EPA significantly reduced this support and currently provides only limited support for the process. We commend EPA for developing delisting principles and guidelines, but this effort does not directly address the need to improve the overall effectiveness of the RAP process. EPA agreed with our recommendation to clarify which office within EPA is responsible for ensuring RAP implementation and stated that it will seek to clarify these responsibilities. As to our recommendation to identify the actions, time periods, and resources needed to fulfill its RAP oversight responsibilities, EPA commented that this would be a difficult task because of the wide spectrum and scale of environmental problems within the areas of concern and because of other priorities and responsibilities within EPA. We recognize that this task may be difficult, but it is critical if EPA is to fulfill its oversight responsibility. The full text of EPA's comments is included as appendix II. We conducted our review from September 2001 through April 2002 in accordance with generally accepted government auditing standards. (See app. I for a detailed description of our scope and methodology.)
As arranged with your offices, we plan no further distribution of this report until 30 days after the date of this letter unless you publicly announce its contents earlier. At that time, we will send copies to other appropriate congressional committees, the EPA administrator, and the International Joint Commission. We will also make copies available to others upon request. Should you or your staff need further information, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix III.

Appendix I: Scope and Methodology

To assess what progress had been made in developing and implementing cleanup plans for the contaminated areas around the Great Lakes, we reviewed the Great Lakes Water Quality Agreement as amended in 1987, which set forth the United States' obligation to cooperate with state governments to ensure the cleanup of the contaminated areas and described the process for developing and implementing the cleanup plans. We also used Internet Web site information that described the cleanup status at the contaminated areas of concern. In addition, we visited areas of concern (areas) in the Milwaukee Estuary in Wisconsin and in Ashtabula, Ohio, where we discussed cleanup efforts, implementation plans, and assistance provided by federal, state, and local agencies. Further, we gathered and analyzed information obtained through interviews with officials from the International Joint Commission (IJC), the Great Lakes Commission, the Northeast-Midwest Institute, EPA headquarters, EPA's Region V Office of Water, the Great Lakes National Program Office (GLNPO), the U.S. Army Corps of Engineers, the Wisconsin Department of Natural Resources, the Ohio Environmental Protection Agency, and local community advisory groups responsible for cleanup activities at the selected areas.
To determine how the cleanup plans were being used at other areas, we visited the Michigan Department of Environmental Quality, which manages the greatest number of contaminated areas (14), and solicited telephone and written comments from each of the other five Great Lakes states concerning their cleanup activities and the remedial action plan process. To further assess EPA's efforts to provide oversight for the contaminated area cleanup process, we reviewed EPA's legislative responsibilities for providing oversight under the Clean Water Act and discussed with EPA, state, and other federal agency officials EPA's success in fulfilling these responsibilities.

Appendix II: Comments from the Environmental Protection Agency

Appendix III: GAO Contacts and Staff Acknowledgments

Staff Acknowledgments

Key contributors to this report were Willie E. Bailey, Jonathan S. McMurray, Rosemary Torres-Lerma, Stephanie Luehr, and Karen Keegan.
To protect the Great Lakes and to address common water quality problems, the United States and Canada entered into the bilateral Great Lakes Water Quality Agreement in 1972. The agreement has been amended several times, most recently in 1987. That year, the two countries agreed to cooperate with state and provincial governments to develop and implement remedial action plans (RAPs) for designated areas in the Great Lakes Basin--areas contaminated, for example, by toxic substances. The Environmental Protection Agency (EPA) leads the effort to meet the goals of the Great Lakes Water Quality Agreement, which include RAP development and implementation. As of April 2002, all of the 26 contaminated areas in the Great Lakes Basin that the United States is responsible for have completed the first stage of the RAP process; however, only half have completed the second stage. Even though EPA has been charged with leading the effort to meet the goals of the agreement, it has not clearly delineated responsibility for oversight of RAPs within the agency, and, citing resource constraints and the need to tend to other Great Lakes priorities, reduced its staff and the amount of funding allocated to states for the purpose of RAP development and implementation.
Background

Grants Awarded by CMHS

CMHS awards two types of grants—formula grants and discretionary grants—to support mental health programs. According to CMHS, in fiscal year 2013, about 61 percent of CMHS's $822 million in grant funding was awarded through formula grants and about 39 percent was awarded through discretionary grants. (See fig. 1.) Formula grants are awarded to eligible grantees that meet specified criteria outlined in formulas prescribed by statute or regulation. These formulas may consider factors such as the population at risk and the cost of services. These grants are generally awarded to states and territories, which distribute funds to various governmental or nongovernmental entities, such as a state mental health agency or a community mental health center. In fiscal year 2013, CMHS awarded about $501 million under three formula grant programs:

1. the MHBG program, which supports adults with serious mental illness (SMI) or children with serious emotional disturbances (SED);

2. the PAIMI program, which supports protection and advocacy systems designed to ensure the rights of individuals with mental illness who are at risk for abuse and neglect; and

3. the Projects for Assistance in Transition from Homelessness program, which supports outreach, mental health, and other support services to homeless people with SMI.

Discretionary grants are generally awarded on a competitive basis for specified projects that meet statutory eligibility and program requirements. Discretionary grants allow CMHS to allocate funding to a particular issue, such as suicide prevention, or to areas and populations with the greatest need. CMHS discretionary grants may be awarded to state, local, territorial, and tribal governments; institutions of higher education; other nonprofit organizations (including tribal organizations); and hospitals.
These grant applications are solicited through requests for applications (RFAs) that specify the purpose of the grant, eligibility requirements, and grantee reporting requirements throughout the grant’s project period. The duration of a grant’s project period and the amount of funding available each year differ by grant program based on program requirements or statutory requirements. In fiscal year 2013, CMHS awarded 589 discretionary grants totaling about $321 million. The smallest grant award was about $30,000 and the largest grant award was about $6 million. Grant Review and Award Process CMHS uses separate processes for reviewing and awarding grants for the formula and discretionary grant programs from which we drew our selection; figures 2 and 3 illustrate these two processes. Evidence-Based Practices After CMHS awards grants, grantees can use a variety of different approaches to treat individuals with mental illness, including evidence-based practices. Since 1997, SAMHSA has sponsored a National Registry of Evidence-Based Programs and Practices, which is a searchable online registry of programs and services that are considered by SAMHSA to be evidence-based. According to SAMHSA, the purpose of this system is to help the public, including grantees that have been awarded grants by CMHS, learn more about available evidence-based practices. CMHS’s Criteria for Awarding Grants to Grantees Varied by Program, but CMHS Did Not Document Its Application of Criteria for About a Third of the Grantees We Reviewed The criteria CMHS established for awarding grants to grantees for the MHBG, PAIMI, and selected discretionary grant programs varied by program. CMHS did not document its application of criteria when awarding grants to grantees for about a third of the grantees we reviewed.
CMHS’s Criteria for Awarding Grants to Grantees for the MHBG, PAIMI, and Selected Discretionary Grant Programs Varied by Program The criteria CMHS established for awarding grants to grantees for the MHBG, PAIMI, and selected discretionary grant programs varied by program. These criteria identify the requirements grantees must meet in order to receive a grant, including any requirements identified in statute, regulation, or the terms and conditions for the grant program, such as those outlined in RFAs. For example, one of the five programs that awarded grants to grantees we reviewed—Project LAUNCH—required its grantees to state in their applications that they will use evidence-based practices to treat individuals with mental illness when such practices are available, while the others did not. See table 2 for examples of CMHS’s criteria for awarding grants to grantees. CMHS Did Not Document Its Application of Criteria for About a Third of the Grantees We Reviewed and CMHS Lacked Program-Specific Guidance for How to Document This Information During the 2-year period covered by our review, CMHS did not document its application of the criteria it used to award grants to 6 of the 16 grantees we reviewed. Specifically, we found instances across the 2 years in which CMHS did not clearly document the application of its criteria for 4 MHBG grantees and 2 PAIMI grantees. We found that CMHS documented the criteria it applied when awarding grants to discretionary grantees. The grants manual states that CMHS must maintain appropriate file documentation to support decisions in the financial assistance process, including funding decisions, and that transactions and significant events are to be clearly documented to help management with decision-making and to help ensure operations are carried out as intended. In addition, Standards for Internal Control in the Federal Government state that all transactions and other significant events need to be clearly documented.
CMHS officials said that they follow the grants manual but that they do not have written guidance specific to the MHBG and PAIMI programs that would assist project officers in using the tools that CMHS has developed, such as checklists, to document their application of criteria. Examples of instances in which CMHS did not document its application of criteria when awarding grants to the grantees we reviewed include the following: MHBG. For fiscal year 2012, CMHS officials did not clearly document the application of most criteria for any of the four MHBG grantees we reviewed; however, officials did document how they applied most of the criteria for fiscal year 2013. Because fiscal year 2013 was the second year of the grant project period and corresponded to an application for continued funding, CMHS was required to apply fewer criteria than for the initial application for fiscal year 2012. Specifically, CMHS officials use checklists to help them determine whether grantees meet the agency’s criteria when deciding whether to award grants to grantees. Grantees are expected to provide sufficient information in their grant applications to demonstrate that they have satisfied the criteria. However, for all four of the grantees we reviewed, the checklists for the application covering fiscal year 2012 data listed “N/A” or had missing responses for most of the items in the checklist—the majority of which were related to criteria identified in statute. One grantee’s checklist we reviewed listed “N/A” or had missing responses for nearly all of the items related to statutory requirements. For example, all items related to the five criteria that must be addressed by the required plan submitted as part of the application were listed as “N/A,” and none of the items related to the required state priorities were completed by the project officer.
One project officer told us that if a required item on the checklist is listed as “N/A,” it is because the project officer did not have the specified information from the grantee and would therefore request the information. However, we did not see evidence that the project officer updated the checklists after receiving additional information. In another case, officials told us that there was a technical problem with the system that electronically stores the checklists and that the responses were automatically populated with “N/A” for these items. While we found that none of the checklists for the MHBG applications covering fiscal year 2012 data clearly documented how CMHS applied criteria when reviewing applications, we found that the checklists covering fiscal year 2013 data for the four grantees clearly documented the use of most criteria. However, officials explained that the 2013 application and the corresponding checklist were less extensive because it was an application for continued funding; project officers reviewed 19 criteria for the 2013 checklist compared to 129 for 2012. PAIMI. For fiscal year 2012, CMHS officials documented the application of criteria for three of the four PAIMI grantees we reviewed. However, we were unable to determine whether CMHS applied the criteria for the fourth grantee because CMHS officials were not able to provide the checklist they used to review this grantee. For fiscal year 2013, we found that accurate documentation of the application of criteria was not available for two of the four grantees we reviewed. Similar to the MHBG program, officials use checklists to help them determine whether grantees met CMHS’s criteria, such as whether the grantee demonstrated that it had identified priorities and objectives that were relevant to the PAIMI program.
All four of the grantees’ checklists for the fiscal year 2013 applications contained documentation that indicated that the project officer found that the objectives identified by the grantee were not relevant to the PAIMI program. However, CMHS officials said that this finding was an error for two of the grantees, but was correct for the other two grantees. The officials told us that, as of October 2014, they had not conducted any follow-up with the two grantees to request additional information regarding the noted concerns. Officials told us that they would only follow up with the grantees on the noted concerns if they conducted a site visit. We found that there were a variety of reasons why CMHS did not adequately document the application of criteria when awarding grants to grantees. One reason was that officials told us they lacked program-specific guidance to document the application of criteria for some programs. CMHS has developed tools for project officers to use for their application of criteria, but project officers sometimes had different ideas about how to use those tools and what to do if a grantee could not provide needed information. For example, officials told us that for the MHBG program there was no written guidance, during the period covered by our review, that described how project officers were to complete the application review checklists to help ensure grantees met the grant criteria. Officials developed program-specific guidance for project officers in fiscal year 2014 for the review of MHBG applications, but we did not assess whether this guidance fully addressed the issues we identified. CMHS officials told us that they also provide training to project officers and that they believe the process is clear; however, we found that project officers had different understandings of how to complete the checklists that are intended to document their review of criteria.
Officials also noted a technical problem in the system used to document the application review checklists in fiscal year 2012. For the PAIMI program, officials told us that there was no formal process to address instances in which a grantee did not meet criteria, including how to follow up with the grantee, for the period we reviewed. Project officers told us that they would sometimes note concerns in the checklist or on the applications, but these notes were for their own benefit. Further, they told us that they generally do not follow up with the grantee unless they have a site visit. Without documentation that clearly identifies how criteria have been applied when awarding grants to grantees, CMHS cannot ensure that it is applying criteria consistently and appropriately. CMHS Stated That It Uses Various Types of Information for Oversight, but the Documentation of This Information Was Often Missing or Not Readily Available CMHS officials said they use various types of information to oversee grantees awarded grants through the MHBG, PAIMI, and selected discretionary grant programs. However, we found at least one instance during the period covered by our review in which documentation of this information was either missing or not readily available for each grantee we reviewed. CMHS Officials Said They Use Various Types of Information, Such as Financial Information and Progress Reports, to Oversee Grantees CMHS officials said they use various types of information to oversee grantees awarded grants through the MHBG, PAIMI, and selected discretionary grant programs. The grants manual, which CMHS officials told us they use as guidance for grantee oversight, describes the various types of information that should be used for overseeing grantees.
This includes information grantees are required to report during and at the conclusion of the grant, such as financial information and progress reports, and other information the agency uses for grants management, such as documents noting project officer’s approval of information submitted by the grantee. According to the grants manual, this information is intended to hold the grantee accountable to the programmatic and financial conditions of the award and provide a means for ongoing monitoring of grantee performance. Examples of the types of information and related documentation CMHS officials stated they use to oversee grantees we reviewed for fiscal years 2012 and 2013 are outlined in table 3 below. Documentation of Some Information CMHS Officials Said They Used to Oversee Grantees Was Either Missing or Not Readily Available for Each of the Grantees We Reviewed and CMHS Lacked Program-Specific Guidance For each of the 16 grantees we reviewed, we found at least one instance during the period covered by our review in which the documentation CMHS officials said they used to oversee grantees was missing or not readily available—meaning it was either missing entirely, stored outside of the systems CMHS designated for storing the information, or was not readily available to all officials involved in the oversight of grant documentation. Because this information was missing or not readily available, we were unable to identify all of the information project officers used to oversee these grants. The grants manual states that CMHS must create and maintain files that allow a third party, such as an auditor, to “follow the paper trail” beginning with program initiation through closeout of individual awards, including decisions made and actions taken in between. As previously noted, examples of the types of information that should be documented include all information required by the conditions of the award, such as financial and performance reports. 
According to the grants manual, this documentation should provide a complete record of the history of an award and serve as a means to support funding decisions and provide ongoing monitoring of grantee performance. In addition, Standards for Internal Control in the Federal Government state that all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination and properly managed and maintained. CMHS officials said that they follow the grants manual. However, where we found documentation missing or not readily available, CMHS often lacked program-specific guidance to assist project officers in documenting their oversight of grantees. Table 4 below provides information on the types of issues identified for the grant documentation for grantees we reviewed from each of the three grant programs. Specific examples of grant documentation that was missing or not readily available for grantees we reviewed from each of the three grant programs include the following: MHBG. Financial information grantees are required to report was missing for two of the four MHBG grantees we reviewed because, according to CMHS officials, the grantees did not submit it. We found that one grantee had not submitted final financial expenditures for fiscal years 2012 or 2013. The other grantee had not yet submitted final expenditures for fiscal year 2013. CMHS officials provided documentation that indicated that these grantees subsequently submitted this information after our review was complete. In addition, we found that the same two MHBG grantees had not submitted required information to show that they have maintained a required level of state mental health expenditures. 
Further, we found that there was no documentation from either of these grantees or from CMHS officials that indicated that a waiver had been approved to relieve the grantee of this requirement or that CMHS had determined that the grantee had materially complied despite not maintaining the required level of expenditures. This documentation is required in instances in which the grantee has not complied with the spending level. These grantees submitted other required financial information after our review was complete and CMHS officials told us that as a result this documentation was no longer needed. In addition, for all four grantees we reviewed, documentation of the final approval by the branch chief for the application covering fiscal year 2012 was not readily available. Specifically, the approval in the application review system was marked pending. According to officials, the application did not appear to have been approved because there were technological issues with the system CMHS maintains to record approvals. As a result, the branch chief provided final approval verbally for these grantees but did not document it. PAIMI. CMHS could not produce documentation of its review of the required annual program performance reports covering fiscal year 2012 data for any of the four PAIMI grantees we reviewed. Project officers told us that they typically document their review of the reports by writing notes in the margin of a paper copy of the annual report; however, we did not see evidence of any notes in the paper copy reports we reviewed or any other documentation of their review. According to CMHS officials, CMHS developed a checklist in 2013 to help project officers document their review of future annual reports. We found that CMHS had completed these checklists for all four grantees we reviewed. However, one checklist indicated that the priorities and goals were poorly written and not all of the objectives listed were relevant to the PAIMI program. 
Officials told us that they did not have any other documentation related to these issues, including any communication with the grantee. In addition, each year officials use data submitted by grantees to calculate aggregate performance measure data across all PAIMI grantees. Officials explained that this information is hand-tallied on paper worksheets. We found that officials did not keep records of the hand-tallied worksheets they used to calculate aggregate performance measure data for fiscal year 2012. Officials told us that they saw no need to retain documentation of these calculations once the data were published in the annual report. Officials also told us that although senior officials review the worksheets for consistency and completeness, they did not maintain any documentation of this review. Discretionary. For seven of the eight discretionary grantees we reviewed, some performance measure data had not been approved by the project officer in CMHS’s performance measure data tracking system. CMHS officials said that project officers are expected to record their approval of performance measure data in CMHS’s performance measure data tracking system. For the eighth discretionary grantee we reviewed, the grantee had not submitted any performance measure data since the grant was awarded in July 2013. According to CMHS officials, the performance measure data did not appear to have been approved because CMHS’s performance measure tracking system locked, meaning that it did not allow the project officer to enter his or her approval, to enforce project officer deadlines. In some of these cases, project officers used other methods to document their approval of performance measure data, such as hand-written notes. 
However, according to officials, these notes are generally stored at project officers’ workstations and cannot be easily accessed by other CMHS officials, making it difficult for a third party to follow the trail of project officer oversight and for other officials to access the documents if the project officer is unavailable. In other cases, there was no documentation that the project officer approved this performance measure data. We found that there were a variety of reasons why documentation used to oversee grantees was missing or not readily available. For example, officials told us that in some cases they lack program-specific guidance for the processes officials use to document their oversight of MHBG and PAIMI grantees. For the MHBG program, officials told us that they did not have program-specific guidance for the period of our review that indicated how project officers and the branch chief are to document their approval of the application if there are technological problems with the application review system. For the PAIMI program, officials told us that they have some program-specific guidance; however, officials told us that there is no written guidance with instructions for how officials are to calculate the PAIMI aggregate performance measure data from the data submitted by grantees. Further, there is no written guidance that describes how this information should be maintained. For its discretionary grant programs, CMHS does have program-specific guidance that indicates how project officers should review some performance measure data submitted by grantees; however, this guidance does not indicate how project officers should approve performance measure data after the system locks. CMHS officials said that some grantees have difficulty meeting the requirements of their grants because they serve high-need populations.
However, most of the problems we identified were related to documentation that is to be completed by CMHS officials for grants management and not due to issues with the grantees. Without readily available documentation of the information used to oversee grantees, CMHS runs the risk that it does not have the complete and accurate information needed to provide sufficient oversight of its grant programs. CMHS officials said that SAMHSA began efforts in fiscal year 2015 to update existing guidance and develop additional guidance for its grant programs, including those administered by CMHS. However, since these efforts are still in early stages, it is too soon to determine whether they will resolve the issues we identified. CMHS Takes a Variety of Steps When Reviewing Performance Measure Data to Demonstrate How Its Grant Programs Further the Achievement of SAMHSA’s Goals CMHS officials told us that they take a variety of steps when reviewing grantees’ performance measure data to demonstrate how CMHS’s grant programs furthered the achievement of SAMHSA’s goals, which are identified through the strategic initiatives contained within SAMHSA’s strategic plan. CMHS collects performance measure data from grantees as a way to assess grant program performance. CMHS officials said that their performance measure data generally indicate that CMHS has made progress in achieving SAMHSA’s goals. The data collected are based on performance measures CMHS developed in response to the Government Performance and Results Act of 1993 (GPRA), as amended by the GPRA Modernization Act of 2010. Examples of these measures include the number of evidence-based practices implemented, the number of children screened for mental health or related interventions, and the number of people served by the public mental health system. Grantees provide data to CMHS officials in response to these measures periodically, based on the requirements designated by each grant program.
For example, the discretionary grantees we reviewed were required to report performance measure data either biannually or on a quarterly basis, while the MHBG and PAIMI grantees we reviewed were required to report performance measure data on an annual basis. (See app. I for more information on the performance measures for the grantees we reviewed.) CMHS officials stated that they take a variety of steps when reviewing performance measure data to demonstrate how CMHS grant programs further the achievement of SAMHSA’s goals. First, CMHS officials stated that they review performance measure data reported by grantees for each grant program on an annual basis as part of their budget planning and formulation process. Specifically, CMHS produces summaries by grant program that are included as part of its budget justification. Officials stated that these summaries include tables with performance measure data that demonstrate how the grant programs further the achievement of SAMHSA’s goals. For example, SAMHSA’s fiscal year 2015 budget justification provides information on the number of people served by the public mental health system for the MHBG program who lived in private residences during fiscal year 2012, which CMHS indicated is related to SAMHSA’s goal to ensure that permanent housing and supportive services are available for individuals in recovery from mental health and substance use disorders. Second, some grant programs produced additional reports that demonstrated how these grant programs furthered the achievement of SAMHSA’s goals. For example, CMHS produces a summary report of performance measure data across discretionary grant programs on an annual basis. The report provides graphs with performance measure data across several discretionary grant programs for a given year, and describes how this performance measure data furthers the achievement of SAMHSA’s goals. 
Third, CMHS officials stated that performance measure data is reviewed by officials assigned to each strategic initiative who report to the administrator of SAMHSA on an ongoing basis to demonstrate the progress of the agency in furthering the achievement of its goals. For example, through this effort, officials stated that they reviewed performance measure data to demonstrate the agency’s progress in achieving the goals within the recovery support strategic initiative and presented these reports to the administrator of SAMHSA. In addition to these steps, in January 2015, CMHS began its implementation of the Common Data Platform for its discretionary grant programs. This platform is an electronic system that allows officials to generate reports on performance measure data collected from grantees awarded grants through programs across SAMHSA’s centers, including CMHS. According to SAMHSA, analyzing performance measure data across SAMHSA’s centers can assist the agency in evaluating the overall effectiveness of its grant programs and in ensuring that each program furthers the achievement of SAMHSA’s goals. For example, SAMHSA documentation indicates that the Common Data Platform will help officials identify the number of people served by SAMHSA grantees in a particular state and compare that number to previous years’ data as a way of measuring the impact of SAMHSA’s grant programs. While CMHS has begun its implementation of the Common Data Platform for discretionary grant programs, officials stated that CMHS will extend the platform in 2017 to include performance measure data collected from the MHBG and PAIMI programs. Conclusions CMHS’s grant programs support services for individuals with mental illness, which is widespread in the United States. Among the grantees we reviewed, we identified concerns with CMHS’s documentation of its application of criteria when awarding grants to grantees and of the information it used to oversee grantees. 
We found that there were several reasons why documentation was missing or not readily available for the grantees we reviewed. These reasons included a lack of program-specific guidance for the tools and processes that CMHS officials have developed to document the oversight of grantees. While CMHS has developed tools and processes for its staff to use to document key elements of grants oversight, CMHS staff did not always understand how to use them. CMHS has developed some program-specific guidance to help officials oversee grantees, and officials stated that SAMHSA began efforts in fiscal year 2015 to update existing guidance and develop additional guidance for CMHS’s grant programs; however, because these efforts are still in early stages, it is too soon to determine whether they will address the issues we found. CMHS officials said that some grantees have difficulty meeting the requirements of their grants because they serve high-need populations. However, most of the problems we identified were related to documentation that is to be completed by CMHS officials for grants management and not due to issues with the grantees. Both the grants manual, which CMHS officials said they follow to guide their grant oversight efforts, and Standards for Internal Control in the Federal Government, which apply to all government transactions, state that all transactions and other significant events need to be clearly documented, and that the documentation should be readily available. Without complete documentation of key elements of the oversight of its grant programs, CMHS does not have reasonable assurance that it is overseeing its grant programs effectively to achieve SAMHSA’s goals.
Recommendation for Executive Action To assure the consistent documentation of the application of criteria to award grants to grantees and of the information used for oversight, the Administrator of SAMHSA should direct CMHS to take steps, such as developing additional program-specific guidance, to ensure that it consistently and completely documents both the application of criteria when awarding grants to grantees and its ongoing oversight of grantees once grants are awarded. Agency Comments We provided a draft of this report to HHS for comment. HHS provided written comments, which are reprinted in appendix II. HHS concurred with our recommendation and stated that the administrator of SAMHSA directed the agency, including CMHS, to initiate efforts to ensure that it consistently and completely documents both the application of criteria when awarding grants to grantees and its ongoing oversight of grantees once grants are awarded. HHS also provided examples of efforts SAMHSA is undertaking to improve the management of its grant programs, including revising and updating guidance used for grants management. However, because several of the efforts are still in development, it is too early to determine whether these efforts will address the issues we identified. In addition, HHS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report.
Other major contributors to this report are listed in appendix III. Appendix I: Examples of Performance Measures for the Center for Mental Health Services Grantees We Reviewed This appendix provides additional information on the performance measures that the Center for Mental Health Services (CMHS) developed for the grantees we reviewed from five CMHS grant programs— Community Mental Health Services Block Grant (MHBG), Protection and Advocacy for Individuals with Mental Illness (PAIMI), Systems of Care Expansion Implementation Cooperative Agreements, Statewide Consumer Network, and Cooperative Agreements for Linking Actions for Unmet Needs in Children’s Health (Project LAUNCH). CMHS collects performance measure data from grantees based on these measures periodically as a way to assess grant program performance. CMHS developed its performance measures in response to the Government Performance and Results Act of 1993 (GPRA), as amended. See table 5 for examples of performance measures for the grantees we reviewed. Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tom Conahan, Assistant Director; Cathy Hamann; Amy Leone; Dan Ries; Rebecca Rust Williamson; and Jennifer Whitworth made key contributions to this report.
In 2013, SAMHSA estimated that 43.8 million adults in the United States—or 18.5 percent—suffered from a mental illness. SAMHSA, an agency within HHS, has various programs that aim to reduce the impact of mental illness through CMHS grants awarded to grantees that include states, territories, and nonprofit organizations. GAO was asked to provide information on CMHS's oversight of mental health grant programs. This report identifies (1) CMHS's criteria for awarding grants to grantees, and how CMHS documents the application of these criteria; (2) the types of information CMHS uses to oversee its grantees; and (3) the steps CMHS takes to demonstrate how its grant programs further the achievement of SAMHSA's goals. GAO reviewed information related to CMHS grants management; reviewed grant documentation from fiscal years 2012 and 2013 for a nongeneralizable selection of 16 grantees within 5 grant programs: the MHBG, PAIMI, and 3 discretionary grant programs that GAO selected based on factors such as size of award and type of grantee; and interviewed SAMHSA officials. The Substance Abuse and Mental Health Services Administration's (SAMHSA) Center for Mental Health Services (CMHS) established criteria for the five grant programs covered by GAO's review that varied by program, but GAO found that CMHS did not document its application of criteria for about a third of the 16 grantees GAO reviewed. An example of how criteria varied by program is that one of the five grant programs required its grantees to state that they will use evidence-based practices to treat individuals with mental illness while the others did not. In addition, CMHS did not document its application of the criteria it used to award grants to 6 of the 16 grantees GAO reviewed. For example, for fiscal year 2012, CMHS did not clearly document the application of most criteria for any of the four Community Mental Health Services Block Grant (MHBG) grantees GAO reviewed.
The Department of Health and Human Services' (HHS) grants manual, which CMHS officials told GAO they follow, states that CMHS must maintain appropriate documentation to support funding decisions. GAO found a variety of reasons why CMHS did not adequately document the application of criteria, including a lack of program-specific guidance. CMHS officials said they use various types of information to oversee grantees, but the documentation of this information was often missing or not readily available during the period GAO reviewed. For each grantee GAO reviewed, there was at least one instance in which the documentation used to oversee grantees was either missing or not readily available—that is, it was missing entirely, stored outside of the systems designated for storing the information, or not accessible to all officials involved in the oversight of grant documentation. For example, GAO found that CMHS could not produce documentation of its review of required annual program performance reports covering fiscal year 2012 data for any of the four Protection and Advocacy for Individuals with Mental Illness (PAIMI) grantees GAO reviewed. The grants manual states that CMHS must create and maintain files that allow a third party, such as an auditor, to “follow the paper trail” from program initiation through closeout of individual awards. GAO found a variety of reasons why grant documentation was missing or not readily available, including a lack of program-specific guidance. Without proper documentation of information used to oversee grantees that is readily available, CMHS runs the risk that it does not have complete and accurate information needed to provide sufficient oversight of its grant programs. CMHS officials told GAO that they take a variety of steps when reviewing grantees' performance measure data to demonstrate how CMHS's grant programs furthered the achievement of SAMHSA's goals. 
For example, CMHS produces summaries by grant program that are included as part of its budget justification. In addition, CMHS is working to ensure that the performance measure data it collects can be analyzed with performance measure data collected from other grantees awarded through programs across SAMHSA. According to SAMHSA, this analysis can be helpful when demonstrating how CMHS's grant programs further the achievement of SAMHSA's goals.
Risk of Year 2000 Disruption to the Public Is High

The public faces a high risk that critical services provided by the government and the private sector could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, flights grounded, power lost, and national defense affected. Moreover, America’s infrastructures are a complex array of public and private enterprises with many interdependencies at all levels. These many interdependencies among governments and within key economic sectors could cause a single failure to have adverse repercussions. Key economic sectors that could be seriously affected if their systems are not Year 2000 compliant include information and telecommunications; banking and finance; health, safety, and emergency services; transportation; power and water; and manufacturing and small business. The information and telecommunications sector is especially important. In testimony in June, we reported that the Year 2000 readiness of the telecommunications sector is one of the most crucial concerns to our nation because telecommunications are critical to the operations of nearly every public-sector and private-sector organization. For example, the information and telecommunications sector (1) enables the electronic transfer of funds, the distribution of electrical power, and the control of gas and oil pipeline systems, (2) is essential to the service economy, manufacturing, and efficient delivery of raw materials and finished goods, and (3) is basic to responsive emergency services. Reliable telecommunications services are made possible by a complex web of highly interconnected networks supported by national and local carriers and service providers, equipment manufacturers and suppliers, and customers. In addition to the risks associated with the nation’s key economic sectors, one of the largest, and largely unknown, risks relates to the global nature of the problem. 
With the advent of electronic communication and international commerce, the United States and the rest of the world have become critically dependent on computers. However, there are indications of Year 2000 readiness problems in the international arena. For example, in a June 1998 informal World Bank survey of foreign readiness, only 18 of 127 countries (14 percent) had a national Year 2000 program, 28 countries (22 percent) reported working on the problem, and 16 countries (13 percent) reported only awareness of the problem. No conclusive data were received from the remaining 65 countries surveyed (51 percent). The following are examples of some of the major disruptions the public and private sectors could experience if the Year 2000 problem is not corrected. Unless the Federal Aviation Administration (FAA) takes much more decisive action, there could be grounded or delayed flights, degraded safety, customer inconvenience, and increased airline costs. Aircraft and other military equipment could be grounded because the computer systems used to schedule maintenance and track supplies may not work. Further, the Department of Defense (DOD) could incur shortages of vital items needed to sustain military operations and readiness. Medical devices and scientific laboratory equipment may experience problems beginning January 1, 2000, if the computer systems, software applications, or embedded chips used in these devices contain two-digit fields for year representation. According to the Basle Committee on Banking Supervision—an international committee of banking supervisory authorities—failure to address the Year 2000 issue would cause banking institutions to experience operational problems or even bankruptcy. 
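The two-digit year fields mentioned above fail in a simple, mechanical way: arithmetic that is correct within a single century produces nonsense when the interval spans the rollover. A minimal sketch (the function names are hypothetical, for illustration only):

```python
def years_elapsed_two_digit(start_yy, end_yy):
    # Two-digit storage: 1999 is held as 99 and 2000 as 00, so an
    # interval spanning the century rollover computes as negative.
    return end_yy - start_yy

def years_elapsed_four_digit(start_yyyy, end_yyyy):
    # Four-digit storage computes the same interval correctly.
    return end_yyyy - start_yyyy

# A device last serviced in 1999 and checked on January 1, 2000:
print(years_elapsed_two_digit(99, 0))       # -99: the service date appears to be 99 years in the future
print(years_elapsed_four_digit(1999, 2000)) # 1: the correct elapsed time
```

A scheduling system relying on the two-digit result could conclude that maintenance performed in 1999 is not due for another 99 years, which is the kind of embedded-logic failure the medical device and maintenance-scheduling examples describe.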
Recognizing the seriousness of the Year 2000 problem, on February 4, 1998, the President signed an executive order that established the President’s Council on Year 2000 Conversion led by an Assistant to the President and composed of one representative from each of the executive departments and from other federal agencies as may be determined by the Chair. The Chair of the Council was tasked with the following Year 2000 roles: (1) overseeing the activities of agencies, (2) acting as chief spokesperson in national and international forums, (3) providing policy coordination of executive branch activities with state, local, and tribal governments, and (4) promoting appropriate federal roles with respect to private-sector activities.

Much Work Remains to Correct the Federal Government’s Year 2000 Problem

Addressing the Year 2000 problem in time will be a tremendous challenge for the federal government. Many of the federal government’s computer systems were originally designed and developed 20 to 25 years ago, are poorly documented, and use a wide variety of computer languages, many of which are obsolete. Some applications include thousands, tens of thousands, or even millions of lines of code, each of which must be examined for date-format problems. The federal government also depends on the telecommunications infrastructure to deliver a wide range of services. For example, the route of an electronic Medicare payment may traverse several networks—those operated by the Department of Health and Human Services, the Department of the Treasury’s computer systems and networks, and the Federal Reserve’s Fedwire electronic funds transfer system. In addition, the year 2000 could cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years and contain embedded computer systems to control, monitor, or assist in operations. 
For example, building security systems, elevators, and air conditioning and heating equipment could malfunction or cease to operate. Agencies cannot afford to neglect any of these issues. If they do, the impact of Year 2000 failures could be widespread, costly, and potentially disruptive to vital government operations worldwide. Nevertheless, overall, the government’s 24 major departments and agencies are making slow progress in fixing their systems. In May 1997, the Office of Management and Budget (OMB) reported that about 21 percent of the mission-critical systems (1,598 of 7,649) for these departments and agencies were Year 2000 compliant. A year later, in May 1998, these departments and agencies reported that 2,914 of the 7,336 mission-critical systems in their current inventories, or about 40 percent, were compliant. Unless progress improves dramatically, a substantial number of mission-critical systems will not be compliant in time. In addition to slow governmentwide progress in fixing systems, our reviews of federal agency Year 2000 programs have found uneven progress. Some agencies are significantly behind schedule and are at high risk that they will not fix their systems in time. Other agencies have made progress, although risks continue and a great deal of work remains. The following are examples of the results of some of our recent reviews. Earlier this month, we testified about FAA’s progress in implementing a series of recommendations we had made earlier this year to assist FAA in completing overdue awareness and assessment activities. These recommendations included assessing how the major FAA components and the aviation industry would be affected if Year 2000 problems were not corrected in time and completing inventories of all information systems, including data interfaces. Officials at both FAA and the Department of Transportation agreed with these recommendations, and the agency has made progress in implementing them. 
In our August testimony, we reported that FAA had made progress in managing its Year 2000 problem and had completed critical steps in defining which systems needed to be corrected and how to accomplish this. However, with less than 17 months to go, FAA must still correct, test, and implement many of its mission-critical systems. It is doubtful that FAA can adequately do all of this in the time remaining. Accordingly, FAA must determine how to ensure continuity of critical operations in the likely event of some systems’ failures. In October 1997, we reported that while SSA had made significant progress in assessing and renovating mission-critical mainframe software, certain areas of risk in its Year 2000 program remained. Accordingly, we made several recommendations to address these risk areas, which included the Year 2000 compliance of the systems used by the 54 state Disability Determination Services that help administer the disability programs. SSA agreed with these recommendations and, in July 1998, we reported that actions to implement these recommendations had either been taken or were underway. Further, we found that SSA has maintained its place as a federal leader in addressing Year 2000 issues and has made significant progress in achieving systems compliance. However, essential tasks remain. For example, many of the states’ Disability Determination Service systems still had to be renovated, tested, and deemed Year 2000 compliant. Our work has shown that much likewise remains to be done in DOD and the military services. For example, our recent report on the Navy found that while positive actions have been taken, remediation progress had been slow and the Navy was behind schedule in completing the early phases of its Year 2000 program. Further, the Navy had not been effectively overseeing and managing its Year 2000 efforts and lacked complete and reliable information on its systems and on the status and cost of its remediation activities. 
We have recommended improvements to DOD’s and the military services’ Year 2000 programs with which they have concurred. In addition to these examples, our reviews have shown that many agencies had not adequately acted to establish priorities, solidify data exchange agreements, or develop contingency plans. Likewise, more attention needs to be devoted to (1) ensuring that the government has a complete and accurate picture of Year 2000 progress, (2) setting governmentwide priorities, (3) ensuring that the government’s critical core business processes are adequately tested, (4) recruiting and retaining information technology personnel with the appropriate skills for Year 2000-related work, and (5) assessing the nation’s Year 2000 risks, including those posed by key economic sectors. I would like to highlight some of these vulnerabilities, and our recommendations made in April 1998 for addressing them. First, governmentwide priorities in fixing systems have not yet been established. These governmentwide priorities need to be based on such criteria as the potential for adverse health and safety effects, adverse financial effects on American citizens, detrimental effects on national security, and adverse economic consequences. Further, while individual agencies have been identifying mission-critical systems, this has not always been done on the basis of a determination of the agency’s most critical operations. If priorities are not clearly set, the government may well end up wasting limited time and resources in fixing systems that have little bearing on the most vital government operations. Other entities have recognized the need to set priorities. For example, Canada has established 48 national priorities covering areas such as national defense, food production, safety, and income security. Second, business continuity and contingency planning across the government has been inadequate. 
In their May 1998 quarterly reports to OMB, only four agencies reported that they had drafted contingency plans for their core business processes. Without such plans, when unpredicted failures occur, agencies will not have well-defined responses and may not have enough time to develop and test alternatives. Federal agencies depend on data provided by their business partners as well as services provided by the public infrastructure (e.g., power, water, transportation, and voice and data telecommunications). One weak link anywhere in the chain of critical dependencies can cause major disruptions to business operations. Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. Our recently issued guidance aims to help agencies ensure such continuity of operations through contingency planning. Third, OMB’s assessment of the current status of federal Year 2000 progress is predominantly based on agency reports that have not been consistently reviewed or verified. Without independent reviews, OMB and the President’s Council on Year 2000 Conversion have little assurance that they are receiving accurate information. In fact, we have found cases in which agencies’ systems compliance status as reported to OMB has been inaccurate. For example, the DOD Inspector General estimated that almost three quarters of DOD’s mission-critical systems reported as compliant in November 1997 had not been certified as compliant by DOD components. In May 1998, the Department of Agriculture (USDA) reported 15 systems as compliant, even though these were replacement systems that were still under development or were planned for development. (The department plans to remove these systems from compliant status in its next quarterly report.) Fourth, end-to-end testing responsibilities have not yet been defined. 
To ensure that their mission-critical systems can reliably exchange data with other systems and that they are protected from errors that can be introduced by external systems, agencies must perform end-to-end testing for their critical core business processes. The purpose of end-to-end testing is to verify that a defined set of interrelated systems, which collectively support an organizational core business area or function, will work as intended in an operational environment. In the case of the year 2000, many systems in the end-to-end chain will have been modified or replaced. As a result, the scope and complexity of testing—and its importance—is dramatically increased, as is the difficulty of isolating, identifying, and correcting problems. Consequently, agencies must work early and continually with their data exchange partners to plan and execute effective end-to-end tests. So far, lead agencies have not been designated to take responsibility for ensuring that end-to-end testing of processes and supporting systems is performed across boundaries, and that independent verification and validation of such testing is ensured. We have set forth a structured approach to testing in our recently released exposure draft. In our April 1998 report on governmentwide Year 2000 progress, we made a number of recommendations to the Chair of the President’s Council on Year 2000 Conversion aimed at addressing these problems. These included (1) establishing governmentwide priorities and ensuring that agencies set their priorities accordingly, (2) developing a comprehensive picture of the nation’s Year 2000 readiness, (3) requiring agencies to develop contingency plans for all critical core business processes, (4) requiring agencies to develop an independent verification strategy to involve inspectors general or other independent organizations in reviewing Year 2000 progress, and (5) designating lead agencies responsible for ensuring that end-to-end operational testing of processes and supporting systems is performed. 
We are encouraged by actions the Council is taking in response to some of our recommendations. For example, OMB and the Chief Information Officers Council adopted our guide providing information on business continuity and contingency planning issues common to most large enterprises as a model for federal agencies. However, as we recently testified before this Subcommittee, some actions have not been initiated—principally with respect to setting national priorities and end-to-end testing.

State and Local Governments Face Significant Year 2000 Risks

State and local governments also face a major risk of Year 2000-induced failures to the many vital services—such as benefits payments, transportation, and public safety—that they provide. For example, (1) food stamps and other types of payments may not be made or could be made for an incorrect amount, (2) date-dependent signal timing patterns could be incorrectly implemented at highway intersections, and safety severely compromised, if traffic signal systems run by state and local governments do not process four-digit years correctly, and (3) criminal records (i.e., prisoner release or parole eligibility determinations) may be adversely affected by the Year 2000 problem. Recent surveys of state Year 2000 efforts have indicated that much remains to be completed. For example, a July 1998 survey of state Year 2000 readiness conducted by the National Association of State Information Resource Executives, Inc., found that only about one-third of the states reported that 50 percent or more of their critical systems had been completely assessed, remediated, and tested. In a June 1998 survey conducted by USDA’s Food and Nutrition Service, only 3 and 14 states, respectively, reported that the software, hardware, and telecommunications that support the Food Stamp Program, and the Women, Infants, and Children program, were Year 2000 compliant. 
Although all but one of the states reported that they would be Year 2000 compliant by January 1, 2000, many of the states reported that their systems are not due to be compliant until after March 1999 (the federal government’s Year 2000 implementation goal). Indeed, 4 and 5 states, respectively, reported that the software, hardware, and telecommunications supporting the Food Stamp Program, and the Women, Infants, and Children program would not be Year 2000 compliant until the last quarter of calendar year 1999, which puts them at high risk of failure due to the need for extensive testing. State audit organizations have identified other significant Year 2000 concerns. For example, (1) Illinois’ Office of the Auditor General reported that significant future efforts were needed to ensure that the year 2000 would not adversely affect state government operations, (2) Vermont’s Office of Auditor of Accounts reported that the state faces the risk that critical portions of its Year 2000 compliance efforts could fail, (3) Texas’ Office of the State Auditor reported that many state entities had not finished their embedded systems inventories and, therefore, it is not likely that they will complete their embedded systems repairs before the Year 2000, and (4) Florida’s Auditor General has issued several reports detailing the need for additional Year 2000 planning at various district school boards and community colleges. State audit offices have also made recommendations, including the need for increased oversight, Year 2000 project plans, contingency plans, and personnel recruitment and retention strategies.

Federal/State Data Exchanges Critical to Delivery of Services

To fully address the Year 2000 risks that states and the federal government face, data exchanges must also be confronted—a monumental issue. 
As computers play an ever-increasing role in our society, exchanging data electronically has become a common method of transferring information among federal, state, and local governments. For example, SSA exchanges data files with the states to determine the eligibility of disabled persons for disability benefits. In another example, the National Highway Traffic Safety Administration provides states with information needed for driver registrations. As computer systems are converted to process Year 2000 dates, the associated data exchanges must also be made Year 2000 compliant. If the data exchanges are not Year 2000 compliant, data will not be exchanged or invalid data could cause the receiving computer systems to malfunction or produce inaccurate computations. Our recent report on actions that have been taken to address Year 2000 issues for electronic data exchanges revealed that federal agencies and the states use thousands of such exchanges to communicate with each other and other entities. For example, federal agencies reported that their mission-critical systems have almost 500,000 data exchanges with other federal agencies, states, local governments, and the private sector. To successfully remediate their data exchanges, federal agencies and the states must (1) assess information systems to identify data exchanges that are not Year 2000 compliant, (2) contact exchange partners and reach agreement on the date format to be used in the exchange, (3) determine if data bridges and filters are needed and, if so, reach agreement on their development, (4) develop and test such bridges and filters, (5) test and implement new exchange formats, and (6) develop contingency plans and procedures for data exchanges. At the time of our review, much work remained to ensure that federal and state data exchanges will be Year 2000 compliant. 
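The "bridges and filters" in the remediation steps above were typically small conversion routines placed between exchange partners using different date formats. A common technique was windowing: expanding a partner's two-digit year into four digits using an agreed pivot value. This is a minimal sketch under an assumed pivot of 30 (the pivot and function name are illustrative, not taken from the report):

```python
PIVOT = 30  # assumption: two-digit years 00-29 map to 2000-2029, 30-99 to 1930-1999

def expand_year(yy):
    """Bridge a two-digit year received from a non-compliant exchange
    partner into a four-digit year using a fixed pivot window."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year (0-99)")
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(99))  # 1999
print(expand_year(0))   # 2000
print(expand_year(29))  # 2029
```

Note that a windowing bridge only works if both partners agree on the pivot and the data never falls outside the 100-year window, which is why the report stresses reaching explicit agreements with exchange partners and testing the bridges before implementation.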
About half of the federal agencies reported during the first quarter of 1998 that they had not yet finished assessing their data exchanges. Moreover, almost half of the federal agencies reported that they had reached agreements on 10 percent or fewer of their exchanges, few federal agencies reported having installed bridges or filters, and only 38 percent of the agencies reported that they had developed contingency plans for data exchanges. Further, the status of the data exchange efforts of 15 of the 39 state-level organizations that responded to our survey was not discernable because they were not able to provide us with information on their total number of exchanges and the number assessed. The 24 state-level organizations that provided actual or estimated data reported, on average, that 47 percent of the exchanges had not been assessed. In addition, similar to the federal agencies, state-level organizations reported having made limited progress in reaching agreements with exchange partners, installing bridges and filters, and developing contingency plans. However, we could draw only limited conclusions on the status of the states’ actions because data were provided on only a small portion of states’ data exchanges. To strengthen efforts to address data exchanges, we made several recommendations to OMB. In response, OMB agreed that it needed to increase its efforts in this area. For example, OMB noted that federal agencies had provided the General Services Administration with a list of their data exchanges with the states. In addition, as a result of an agreement reached at an April 1998 federal/state data exchange meeting, the states were supposed to verify the accuracy of these initial lists by June 1, 1998. OMB also noted that the General Services Administration is planning to collect and post information on its Internet World Wide Web site on the progress of federal agencies and states in implementing Year 2000 compliant data exchanges. 
In summary, federal, state, and local efforts must increase substantially to ensure that major service disruptions do not occur. Greater leadership and partnerships are essential if government programs are to meet the needs of the public at the turn of the century. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have at this time.

GAO Reports and Testimony Addressing the Year 2000 Crisis

FAA Systems: Serious Challenges Remain in Resolving Year 2000 and Computer Security Problems (GAO/T-AIMD-98-251, August 6, 1998).
Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, August 1998).
Internal Revenue Service: Impact of the IRS Restructuring and Reform Act on Year 2000 Efforts (GAO/GGD-98-158R, August 4, 1998).
Social Security Administration: Subcommittee Questions Concerning Information Technology Challenges Facing the Commissioner (GAO/AIMD-98-235R, July 10, 1998).
Year 2000 Computing Crisis: Actions Needed on Electronic Data Exchanges (GAO/AIMD-98-124, July 1, 1998).
Defense Computers: Year 2000 Computer Problems Put Navy Operations At Risk (GAO/AIMD-98-150, June 30, 1998).
Year 2000 Computing Crisis: A Testing Guide (GAO/AIMD-10.1.21, Exposure Draft, June 1998).
Year 2000 Computing Crisis: Testing and Other Challenges Confronting Federal Agencies (GAO/T-AIMD-98-218, June 22, 1998).
Year 2000 Computing Crisis: Telecommunications Readiness Critical, Yet Overall Status Largely Unknown (GAO/T-AIMD-98-212, June 16, 1998).
GAO Views on Year 2000 Testing Metrics (GAO/AIMD-98-217R, June 16, 1998).
IRS’ Year 2000 Efforts: Business Continuity Planning Needed for Potential Year 2000 System Failures (GAO/GGD-98-138, June 15, 1998).
Year 2000 Computing Crisis: Actions Must Be Taken Now to Address Slow Pace of Federal Progress (GAO/T-AIMD-98-205, June 10, 1998).
Defense Computers: Army Needs to Greatly Strengthen Its Year 2000 Program (GAO/AIMD-98-53, May 29, 1998).
Year 2000 Computing Crisis: USDA Faces Tremendous Challenges in Ensuring That Vital Public Services Are Not Disrupted (GAO/T-AIMD-98-167, May 14, 1998).
Securities Pricing: Actions Needed for Conversion to Decimals (GAO/T-GGD-98-121, May 8, 1998).
Year 2000 Computing Crisis: Continuing Risks of Disruption to Social Security, Medicare, and Treasury Programs (GAO/T-AIMD-98-161, May 7, 1998).
IRS’ Year 2000 Efforts: Status and Risks (GAO/T-GGD-98-123, May 7, 1998).
Air Traffic Control: FAA Plans to Replace Its Host Computer System Because Future Availability Cannot Be Assured (GAO/AIMD-98-138R, May 1, 1998).
Year 2000 Computing Crisis: Potential For Widespread Disruption Calls For Strong Leadership and Partnerships (GAO/AIMD-98-85, April 30, 1998).
Defense Computers: Year 2000 Computer Problems Threaten DOD Operations (GAO/AIMD-98-72, April 30, 1998).
Department of the Interior: Year 2000 Computing Crisis Presents Risk of Disruption to Key Operations (GAO/T-AIMD-98-149, April 22, 1998).
Tax Administration: IRS’ Fiscal Year 1999 Budget Request and Fiscal Year 1998 Filing Season (GAO/T-GGD/AIMD-98-114, March 31, 1998).
Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998).
Year 2000 Computing Crisis: Federal Regulatory Efforts to Ensure Financial Institution Systems Are Year 2000 Compliant (GAO/T-AIMD-98-116, March 24, 1998).
Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998).
Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998).
Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (GAO/AIMD-98-108R, March 18, 1998).
SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998).
Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998).
Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998).
Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998).
FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998).
Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998).
Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998).
Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997).
Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997).
Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997).
Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997).
Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997).
Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997).
Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997).
Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997).
Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997).
Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997).
Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997).
Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997).
Year 2000 Computing Crisis: Time Is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997).
Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997).
Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997).
Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997).
Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997).
Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997).
Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997).
High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
GAO discussed the year 2000 risks facing the nation, focusing on: (1) GAO's major concerns with the federal government's progress in correcting its systems; (2) state and local government year 2000 issues; and (3) critical year 2000 data exchange issues. GAO noted that: (1) the public faces a high risk that critical services provided by the government and the private sector could be severely disrupted by the year 2000 computing crisis; (2) the year 2000 could cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years and contain embedded computer systems to control, monitor, or assist in operations; (3) overall, the government's 24 major departments and agencies are making slow progress in fixing their systems; (4) in May 1997, the Office of Management and Budget (OMB) reported that about 21 percent of the mission-critical systems for these departments and agencies were year 2000 compliant; (5) in May 1998, these departments reported that 40 percent of the mission-critical systems were year 2000 compliant; (6) unless progress improves dramatically, a substantial number of mission-critical systems will not be compliant in time; (7) in addition to slow governmentwide progress in fixing systems, GAO's reviews of federal agency year 2000 programs have found uneven progress; (8) some agencies are significantly behind schedule and are at high risk that they will not fix their systems in time; (9) other agencies have made progress, although risks continue and a great deal of work remains; (10) governmentwide priorities in fixing systems have not yet been established; (11) these governmentwide priorities need to be based on such criteria as the potential for adverse health and safety effects, adverse financial effects on American citizens, detrimental effects on national security, and adverse economic consequences; (12) business continuity and contingency planning across the government has been inadequate; 
(13) in their May 1998 quarterly reports to OMB, only four agencies reported that they had drafted contingency plans for their core business processes; (14) OMB's assessment of the status of federal year 2000 progress is predominantly based on agency reports that have not been consistently reviewed or verified; (15) GAO found cases in which agencies' systems compliance status as reported to OMB had been inaccurate; (16) end-to-end testing responsibilities have not yet been defined; (17) state and local governments also face a major risk of year 2000-induced failures to the many vital services that they provide; (18) recent surveys of state year 2000 efforts have indicated that much remains to be completed; and (19) at the time of GAO's review, much work remained to ensure that federal and state data exchanges will be year 2000 compliant.
Background

Since December 1995, the United States has deployed military forces in and around Bosnia to assist in implementing the Dayton Peace Accords. U.S. forces are part of a multilateral coalition under the command of NATO. From December 1995 to December 1996, the coalition was called the Implementation Force (IFOR). In December 1996, NATO authorized a new mission and renamed the coalition the Stabilization Force (SFOR). That mission was scheduled to end in June 1998 but has since been extended indefinitely. In voting to continue the mission, the North Atlantic Council retained the name SFOR. The Council stated that the extent of support over time will be adapted to developments in the political and security situation and to progress in the implementation of the civilian elements of the accords. Force levels will be reviewed at regular intervals. The United States has been a major force provider to the mission, as shown in table 1. The SFOR level will likely remain at 31,000, but the United States is seeking to reduce its troop commitment in Bosnia to 6,900. The United States plans to continue basing about 3,400 troops in neighboring countries. If the President determines that it is necessary to augment the active forces for an operational mission, he may use PSRC authority, which allows for the activation of up to 200,000 reservists at any one time, with each reservist limited to no more than 270 days of involuntary service. In December 1995, the President invoked this authority for the Bosnia mission.

USAREUR and USAFE Have Supported Bosnia Mission With Some Help From Reserve and U.S.-Based Active Forces

USAREUR is the major Army command primarily responsible for providing people for the Bosnia mission. Because USAREUR units are assigned fewer personnel than they are authorized, deployed units were augmented with personnel from other European units and active and reserve forces from the United States. At the same time, the relatively small size of the U.S.
force in Bosnia required the Army to deploy partial units. USAFE is the principal Air Force command providing people for the mission. The Air Force’s personnel requirements are much smaller, and the nature of the Air Force’s responsibilities allows for extensive use of reserve component volunteers.

Army and Air Force Reserves Have Supported the Bosnia Mission

Almost 16,000 Army reservists and about 10,000 Air Force Air Reserve Component members have participated in the Bosnia mission between its inception in December 1995 and January 1998. Most of the Army reservists were involuntarily called to active duty under PSRC, whereas most Air Force reservists volunteered to participate in the Bosnia mission. Reserve units have participated because some required support capabilities reside primarily or solely in the reserves. For example, most of the Army’s movement control teams, civil affairs units, and fire fighter detachments are in the reserves. Of the 36 civil affairs units in the Army, only 1 is in the active force. The remainder are in the Army Reserve. Reserve units also have been used to reduce the high level of activity of some active forces. For example, the Army uses reserve military police to relieve the high personnel tempo of active-duty military police units, and the Air Force uses reserve aircrews to relieve the high personnel tempo of its active-duty aircrews.

Army Personnel Needed to Be Added to Forces/Units Deploying to Bosnia

Augmentation from reserve forces and U.S.-based active forces was required because USAREUR did not possess all the personnel and capabilities the mission needed. According to USAREUR documents, prior to the first deployment to Bosnia in December 1995, the units chosen to deploy were staffed at 87 percent of required strength, whereas deployed forces were required to be at least at 92 percent of required strength. Further, 8 to 10 percent of these units’ personnel were not deployable for a variety of reasons.
To bring these units to at least 92 percent of required strength, personnel had to be shifted from other USAREUR units and from U.S. Army Forces Command (FORSCOM) units, a process known as cross-leveling. Throughout the Bosnia mission, individuals have been needed to augment headquarters staffs at USAREUR, U.S. European Command, the American contingent of various NATO command elements such as the SFOR Headquarters in Sarajevo, and actual units. FORSCOM provided 1,035 active component individual augmentees to support Bosnia, while the Army Reserve Personnel Center and the U.S. Army Reserve Command provided another 1,613 Army Reserve individual augmentees.

Army Deploys Some Split-Based and New Units to Bosnia

The relatively small size of the U.S. portion of SFOR has not always required deploying entire units. According to FORSCOM and USAREUR officials, active units have employed split-based operations. Split-based operations occur when only parts of units are deployed and the elements left behind must continue to perform their missions at the home station. For example, as of January 1998, elements of the 1st Armored Division’s headquarters, military intelligence, signal, artillery, and support command were deployed to Bosnia, while the rest of the division, including its combat brigades, did not deploy. Split-basing has been a strain on USAREUR units. For example, during SFOR, USAREUR reported to the Department of the Army through its Status of Resources and Training System, which is used to measure readiness, that split-basing had a negative impact on the readiness of several of its units because of reduced personnel, equipment levels, or a combination of both factors at home station. For reserve units, the Army also has engaged in split-based operations. In many instances, the Army extracted elements of existing units and formed them into what is known as derivative units.
It has been staffing derivative units with individuals drawn from the original unit and other Selected Reserve units. According to its mobilization data, the Army formed over 700 derivative units for the Bosnia mission since December 1995. Examples of derivative units that have been mobilized for Bosnia include the target acquisition radar elements of National Guard divisions, elements of Army Reserve garrison support units, and platoons from postal companies. According to a FORSCOM operations official, extracting these elements often affects the ability of the parent unit to conduct its normal peacetime activities, such as training, much in the same way that it has affected split-based active units. In responding to a draft of this report, the Army stated that a positive effect of split-based operations is that the redeploying soldiers can bring better developed or newly acquired skills to the unit. The Army has also created derivative units, such as various mobilization support detachments, that are not elements of existing units. These detachments are ad hoc units created for the sole purpose of activating individuals with assorted capabilities that are needed to meet miscellaneous individual requirements. According to an Army Reserve Personnel Center official, this was done because PSRC authority did not allow for the call-up of individuals unless they were members of a unit in the Selected Reserve. The requirements have been met in part by members who initially belonged to the Individual Ready Reserve but volunteered to join the Selected Reserve temporarily so that they could be subject to PSRC. Of the previously mentioned 1,613 reservists provided by the Army Reserve Personnel Center and the U.S. Army Reserve Command, 1,068 were reservists from the Individual Ready Reserve who transferred to the Selected Reserve, including 551 who were placed into derivative units. 
Follow-on Force Planning Underway

The Army has conducted multiple force rotations for varying lengths of time to support the Bosnia mission. With the recent extension of the mission for an unspecified duration, the Army is planning to (1) replace forces currently deployed to Bosnia; (2) identify forces needed for future rotations; and (3) relieve pressure placed upon USAREUR, which has provided most of the forces for the Bosnia mission to date. Formal guidance has not been finalized regarding Bosnia mission needs after June 1998. In the meantime, USAREUR is making plans for a follow-on force. It intends to provide personnel for the initial follow-on force according to the current SFOR organization and authorized troop level of 8,500 troops in Bosnia. If the United States succeeds in its effort to have other troop-contributing nations provide more troops, the U.S. force level in Bosnia could drop to 6,900. A smaller force, according to USAREUR and FORSCOM officials, would most likely have fewer combat support and combat service support units. For example, one of the potential reductions involves having only one aviation brigade—a combat support unit—responsible for supporting the forces in Bosnia and the SFOR Commander’s reserve force. Additional force reductions will come from units such as the Center for Army Lessons Learned and military history units. With the decision to extend the Bosnia mission indefinitely, the Army canceled its June 1998 troop withdrawal and extended its plans for providing personnel beyond June. A USAREUR document on providing future forces specifies that the initial follow-on force deployment will be from June 1998 through October 1998. The Department of the Army has assigned USAREUR responsibility for providing the Army portion of this force. USAREUR has developed a strategy and identified those specific requirements that it can fill, which is most of the initial follow-on force requirements.
According to USAREUR and FORSCOM officials, FORSCOM will provide most of the requirements that cannot be met by USAREUR units through October 1998. The Army plans for FORSCOM to provide most of the forces for the next two follow-on rotations—one in October 1998 and the other in April 1999. This decision was made to relieve USAREUR of the high operating and personnel tempo it has experienced since the Bosnia mission began and to allow it to focus on training for its wartime mission. According to USAREUR officials, FORSCOM’s providing the bulk of the forces for the Bosnia mission for a year will allow USAREUR to recover from the adverse impacts of almost 3 years of continuous deployment to Bosnia. For example, in June 1995, 6 months before the first deployment to Bosnia, 46 percent of USAREUR’s units achieved the readiness rating expected of these units; by August 1997, that percentage had dropped to 30 percent as a result of personnel, equipment, and training required for wartime missions being diverted to peacekeeping operations. The Air Force has rotated aircraft, aircrews, and ground support personnel to support the Bosnia mission. According to a USAFE operations official, to support the mission in fiscal years 1998 and 1999, the Air Force has identified the capabilities it plans to deploy and the units possessing those capabilities. USAFE will continue to be the principal force provider.

Providing Needed Capabilities for the Mission Is Becoming Difficult in a Few Instances

Extending the Bosnia mission beyond June 1998 is causing the services to seek alternative ways to provide some needed capabilities. Although the vast majority of the ground-based combat support and aviation-related requirements can be filled, about a dozen unit capabilities will require special attention in the future because the capabilities are primarily in the reserves and many of these capabilities have already been mobilized and deployed in support of the operation.
Requirements for these capabilities have totaled several hundred persons per rotation. Solutions have been developed for providing some of these capabilities and are being sought for the others. FORSCOM recently identified 12 unit types that it may have difficulty providing from its forces in future rotations primarily because most of these types of units are predominately or exclusively in the reserves and have already been called up and participated in the Bosnia mission. Each succeeding call-up has reduced the available pool of members and units available for future rotations. Table 2 lists the number of these units in the Army and their distribution between the active and reserve components. Table 3 provides details on the 12 unit types, including the number of units needed per rotation, the number of reserve units remaining, and additional information on the active units. According to an Army operations official, other Army commands, such as the U.S. Army Pacific and Eighth U.S. Army, have some of the 12 unit types that FORSCOM has identified as difficult to fill. However, units assigned to those commands generally would not be able to meet Bosnia mission requirements because many of the units would be committed to missions in their command’s primary area of responsibility. Three of the 12 unit types—Broadcast Public Affairs, Replacement Headquarters, and Rear Tactical Operations Center—do not exist in the active force. The active Army does maintain force structure for the nine remaining unit types, but two have already deployed to Bosnia and the availability of others may be limited. For example, a FORSCOM official told us that each Army division’s artillery has a target acquisition battery or detachment, but deploying the target acquisition element independently would severely degrade the readiness of the remaining division. 
The active centralized movement control teams and the military history detachment have already deployed to Bosnia at least once, and FORSCOM would prefer not to deploy them again. The active movement control battalion headquarters, air terminal movement control team, mobile public affairs detachment, engineering fire fighter detachment, and medical distribution units have not been deployed to Bosnia. The availability of the Reserve Component to meet some of the future Bosnia mission requirements at current force levels is limited to some extent. According to FORSCOM, all Reserve Component Broadcast Public Affairs, Replacement Headquarters, Target Acquisition, Movement Control Headquarters, and some movement control teams already have been called up under PSRC. Only the Rear Tactical Operations Center, Military History and Medical Distribution missions can currently be met from existing reserve force structure. Moreover, FORSCOM told us that some of the remaining Reserve Component Mobile Public Affairs, Engineering Fire Fighters, and Medical Distribution units currently cannot meet the criteria for deployment. According to FORSCOM, these units could be used, but, depending on the unit, would require from 1 month to 12 months of notice before deployment to receive personnel, equipment, training, or a combination of these sufficient to meet the deployment criteria. FORSCOM requested the U.S. Atlantic Command, which can draw forces from all services in the continental United States, to assess whether other services can provide these capabilities. Solutions were identified for two of these areas for the next rotation. An air movement control unit requirement will be met by an ad hoc Marine Corps unit, and the Broadcast Public Affairs unit requirement will be met by an ad hoc unit that was formed by pooling individuals from the military services because no one unit was available in any service.
According to FORSCOM and USAREUR, the use of ad hoc units will increase as the Army’s ability to provide specific capabilities (active or reserve) decreases. As of April 1998, the U.S. Atlantic Command was developing solutions for the other areas. If the Atlantic Command is unable to find other services to meet these requirements, other solutions will be considered. These include using more ad hoc units, contracting civilians to perform the function, and seeking to have other NATO partners assume some of these responsibilities. For example, according to USAREUR, engineering fire fighter requirements will be contracted out. In addition to FORSCOM’s identification of types of units that will be difficult to fill, the U.S. Army Special Operations Command has also identified certain components of its civil affairs and psychological operations capabilities that may be difficult to fill. However, the Commanding General of the U.S. Army Civil Affairs and Psychological Operations Command told us that some of these difficulties will be mitigated as the command improves its utilization of these capabilities and trains officers from other nations to undertake some of the mission.

Air Force Has Not Had to Involuntarily Call Up Many Reservists

The only instances in which PSRC has been used by the Air Force to support Bosnia are the air traffic control and combat communications missions. Of these two missions, only air traffic control has posed a problem. Two-thirds of the Air Force’s air traffic control capability is in the Air National Guard, and the Guard recently reduced its air traffic control force structure by almost 50 percent. Initially the mission was handled by volunteers on a rotating basis from air traffic control units, but, beginning in June 1996, PSRC was used to call up Guard personnel for between 120 days and 179 days to perform this function. All Guard air traffic control units already have been activated under the Bosnia PSRC.
In October 1997, the Air National Guard informed the active Air Force that it would be unable to solely meet these requirements beyond July 1, 1998, and asked to have the active Air Force provide the personnel to meet the requirement. Active and reserve Air Force officials have been seeking short- and long-term solutions to air traffic control requirements for the extended mission. To attract more volunteers from the Guard, rotations for Guard personnel will be decreased from 120 days to 45 days. Overall radar operator and tower personnel requirements will be reduced, and Guard personnel will fill only about half of these requirements, with the balance to be filled by active Air Force personnel. Beginning in October 1998, U.S.-trained Hungarian controllers are expected to replace U.S. controllers, further reducing operator requirements. USAFE plans to implement contractor maintenance for the maintenance portion of the mission by November 1998. This will leave only one person from the Air National Guard involved with air traffic control maintenance. The Air Force hopes these actions will reduce the need to involuntarily activate Guard personnel for the air traffic control mission. Air Force Reserve Component officials told us that as the mission lengthens, additional capabilities may require changes from current practices for providing personnel. Most requirements are now met by volunteers, but according to an Air Force Reserve official, the longer the mission lasts, the greater the probability that PSRC will have to be used to ensure that needed capabilities are provided.

PSRC Permits Involuntary Activation of Selected Reserves Subject to Limits

If the President determines that it is necessary to augment the active forces for an operational mission, he may initiate a PSRC call-up under 10 U.S.C. § 12304. With this authority, units and members of the Selected Reserve may be ordered to active duty without their consent.
The statute does not limit the number of missions that may be undertaken with reserve support. However, reservists cannot be required to serve on a mission for “more than 270 days.” The 270-day time limit on PSRC activations is long enough to allow for multiple involuntary activations that cumulate to less than 270 days, and the statute does not prohibit such multiple involuntary activations.

Scope of PSRC Authority

Authority to mobilize the reserves is governed by statute. Upon declaring a national emergency, the President is authorized to mobilize the Ready Reserve under 10 U.S.C. § 12302. That section’s involuntary activation authority extends to 1 million reservists for up to 24 months of service. In 1976, Congress recognized that circumstances at times may exist that would require access to the reserves but would not support an emergency declaration. Congress enacted the PSRC statute to broaden nonconsensual access to the reserves. This authority was expected to complement DOD’s “Total Force Concept,” under which well-trained reserves became more fully integrated into the force structure. The President can initiate a PSRC call-up when he determines “that it is necessary to augment the active forces for any operational mission.” Units and members of the Selected Reserve are subject to involuntary activation under PSRC authority. The Selected Reserve is a component of the Ready Reserve. Prior to 1995, PSRC authority had been used twice—for the Gulf War and the operation in Haiti. In each instance, an executive order stated the need for activating the Selected Reserve and defined the mission. As implemented in the past, the mission statements were broad in scope.
Consistent with that practice, the scope of the Bosnia PSRC mission is defined in Executive Order 12982 as the “conduct of operations in and around former Yugoslavia.” The PSRC statute allows the Secretary of Defense to prescribe policies and procedures concerning such matters as the number and types of Selected Reserve units to be activated, the timing of the call-ups, the number of reservists to be activated, and the time required for each reservist to remain on active duty. These matters are discussed in DOD Directive 1235.10, July 1, 1995. In addition, an end date for the use of PSRC authority may be set by the Secretary of Defense. The end date for the Bosnia PSRC was first set at May 1997, then extended to August 1998, and, as of February 1998, the end date was extended indefinitely. The Office of the Assistant Secretary of Defense for Reserve Affairs has stated that the current policy is that the Department of Defense will not request that the President invoke PSRC authority a second time for Bosnia to recall reservists who already have served 270 days. In November 1997, Congress enlarged the reserve components available for PSRC call-up. Section 511 of Public Law 105-85 directed the Secretary to create a PSRC mobilization category for members of the Individual Ready Reserves. Up to 30,000 of the Individual Ready Reserve members who volunteer for the mobilization category can then be involuntarily activated as individuals. Before passage of this amendment, members of the Individual Ready Reserves sometimes volunteered to temporarily join the Selected Reserve to allow them to be subject to PSRC. Department of Defense officials believe that this authority is not available for a Bosnia-type mission because they interpret it as being available only as a bridge to partial mobilization.
The statute also caps at 200,000 the total number of reservists who may be serving on active duty under PSRC “at any one time.” This number has grown since the statute was first enacted in 1976. Originally, the total number of selected reservists who could be activated at any one time was set at 50,000. This was increased to 100,000 in 1980 and to 200,000 in 1986. The PSRC statute has always included a limit on the number of days a reservist could be required to serve without consent. As originally enacted, the involuntary activation period was “not more than 90 days.” Over the years, that limit has been expanded to the current level of “not more than 270 days.” A 270-day limit is long enough to permit multiple involuntary activations, and the statute does not prohibit such multiple activations. The Army’s policy, however, is not to allow reactivations. The Air Force does not have a similar policy, and may soon face the necessity of reactivating some air traffic controllers who have already served 120 days under the current PSRC.

Conclusions

The military services have successfully provided the needed capabilities for the Bosnia mission for the past 2-1/2 years. The Army, which has provided the bulk of the armed forces, has taken a number of steps to match its existing units to the mission requirements, including deploying partial units, creating derivative units by borrowing personnel from nondeploying units, and creating ad hoc units to deploy individual augmentees. These steps have met the mission’s needs with varying impacts on parent units that have provided personnel and have required the Army to operate in a fashion different from the way in which it organizes its forces, which is as entire units to fight a major theater war. The Army also has relied on the reserves both to provide support capabilities that reside primarily or solely in the reserve component and to reduce the high level of activity of some active forces.
The Air Force has required a smaller number of forces, and the nature of the Air Force’s responsibilities has allowed for extensive use of reserve and guard volunteers. The recent decision to extend the Bosnia mission will require rotating military forces for the foreseeable future. While formal guidance has not been finalized regarding mission needs, the Army is planning for follow-on forces through late 1999. The first follow-on force is expected to deploy in June 1998 to relieve the forces that are currently deployed. Although the vast majority of the types of capabilities needed do not represent challenges in providing personnel, a handful do, principally in the Army. These challenges exist because in a few instances all the needed capability is in the reserves and has already been involuntarily called up under the current PSRC and in other instances because there is limited capability in the active force structure or because the active capability is vital to its parent unit. For these capabilities, the Army’s force structure does not match the needs of a mission of the duration and with the continuing requirements of the one in Bosnia. Solutions are being developed to meet these challenges. These challenges may exist as long as the mission continues at its current size and with its current tasks. The President has statutory authority to involuntarily activate units and members of the Selected Reserve. The Bosnia mission has led to a situation in which in some instances all of the units with needed capabilities already have been ordered to duty and served the maximum time permitted. The Office of the Assistant Secretary of Defense for Reserve Affairs has said that the Department of Defense’s current policy is that the Department will not request that the President invoke PSRC authority a second time for Bosnia to recall reservists who already have served 270 days. 
In some instances, the services have activated reservists involuntarily for shorter periods of time than the statute allows. It is possible that some of these reservists could be recalled to serve the full activation period under a single PSRC of 270 days. The statute does not prohibit multiple activations as long as the total number of days on active duty does not exceed 270 days.

Agency Comments and Our Evaluation

In written comments on a draft of this report, the Department of Defense partially concurred with the report. (See app. I.) The Department said that operations in Bosnia have been and will continue to be a success story. The Department further said that the overall tenor of our report implies that despite success in mission accomplishment and active and reserve component integration, challenges in manning a follow-on mission to Bosnia are insurmountable. We state in the report that the Department has successfully provided needed capabilities for the Bosnia mission for the past 2-1/2 years and do not mean to imply that the Department will be unable to successfully staff the mission in the future. We do, however, point out that there are a few types of units that will become increasingly challenging to fill and that solutions are being developed to meet these challenges. The Department stated in its technical comments that there are some types of units that will require more management, and we agree. The Department also stated it will continue to rely on the reserves and to task organize, split-base, and cross-level units to get the right force mix to accomplish the mission. Our report describes how the Department has used these capabilities and techniques to meet mission requirements and explains why they have been used. The Department also provided technical comments, which we have incorporated where appropriate.
Scope and Methodology

To determine how the military services are providing needed capabilities for the Bosnia mission and plan to provide follow-on forces for the extended mission, we reviewed documents and interviewed personnel at the Office of the Joint Chiefs of Staff; Department of the Army headquarters and Department of the Air Force headquarters, Washington, D.C.; U.S. European Command, USAREUR, and USAFE, all located in Germany; U.S. Atlantic Command, Norfolk, Virginia; Special Operations Command, MacDill Air Force Base, Florida; Forces Command, Fort McPherson, Georgia; Air Combat Command, Langley Air Force Base, Virginia; and Office of the Secretary of Defense (Reserve Affairs). Because the Navy and the Marine Corps provided few personnel directly to the Bosnia mission, we did not include them in our work. To gain reserve component perspectives on the ability to provide capabilities in the future, we reviewed documents and interviewed personnel at the Chief, Army Reserve, Army and Air National Guard, Washington, D.C.; Army Reserve Command, Fort McPherson, Georgia; and the Air Force Reserve Command, Robins Air Force Base, Georgia. Although we obtained documents showing the number of individual reservists that the Army has deployed to Bosnia, the data did not identify the specific military skills possessed by these individuals. To examine the PSRC authority, we reviewed the applicable U.S. statutes and their legislative history. We also requested and received the Department of Defense Office of General Counsel’s written interpretation of the statute with regard to multiple activations within the 270-day call-up limitation and the ability to invoke the statute’s authority for a second time for a similar mission. We performed our review between July 1997 and April 1998 in accordance with generally accepted government auditing standards.
We are sending copies of this report to other congressional committees; the Secretaries of Defense, the Army, and the Air Force; and the Director, Office of Management and Budget. Copies will also be made available to others on request. Major contributors to this report are listed in appendix II. If you or your staff have any questions about this report, please contact me at (202) 512-3504.

Comments From the Department of Defense

Major Contributors to This Report
National Security and International Affairs Division, Washington, D.C.
Atlanta Field Office
Kansas City Field Office
Office of the General Counsel: Margaret Armen, Senior Attorney
Pursuant to a congressional request, GAO reviewed the military services' efforts to provide the needed capabilities for continued military operations in Bosnia, focusing on: (1) how the services have provided the needed capabilities for the operation thus far; (2) how the services plan to provide them in the future; and (3) the President's ability to call up reserves under his Presidential Selected Reserve Call-up (PSRC) authority. GAO noted that: (1) the military services have successfully provided needed capabilities for the Bosnia mission for the past 2 1/2 years; (2) the U.S. Army Europe (USAREUR) has provided the majority of Army forces, augmented with reserve forces and active forces from the United States; (3) because USAREUR units are assigned fewer personnel than they are authorized, the Army had to borrow personnel from nondeploying units so that deploying units could deploy with the required number of people; (4) also, because the operation did not always require entire units, the Army deployed partial ones; (5) these steps enabled the Army to meet the mission's needs but, in some cases, have had an adverse impact on the parent units that have provided personnel; (6) the U.S. 
Air Forces Europe has provided the majority of air forces, augmented with U.S.-based active and reserve forces; (7) both services have used their reserve components to meet the mission requirements because some critical support capabilities reside primarily or solely in the reserves and because use of the reserves reduces the high level of activity of some active forces; (8) most Army reservists were involuntarily activated through PSRC, while most Air Force Air Reserve Component members were volunteers; (9) with the decision to extend the Bosnia mission indefinitely, the Army and the Air Force are developing plans for a follow-on force; (10) though the vast majority of the ground-based combat support and the aviation-related requirements for the mission can be filled, about a dozen unit capabilities will require special attention in the future because the capabilities are primarily in the reserves and many of these capabilities have already been used; (11) requirements for these capabilities have totaled several hundred persons per rotation; (12) to satisfy future mission needs, the military services and the U.S. Atlantic Command are considering using similar capabilities in the other military services, asking for greater participation from other countries, and contracting for some of the needed capabilities; (13) some reservists have served for fewer than the 270 days that the PSRC statute allows; (14) because the statute does not prohibit multiple involuntary activations if the total does not exceed the 270-day limit, some of these reservists could be recalled to serve up to the full activation period; and (15) in addition, the Bosnia mission has led to situations in which all of the reservists with needed capabilities have been ordered to duty and have served the maximum time allowed for a single call-up.
Background
In response to global challenges the government faces in the coming years, we have a unique opportunity to create an extremely effective and performance-based organization that can strengthen the nation’s ability to protect its borders and citizens against terrorism. There is likely to be considerable benefit over time from restructuring some of the homeland security functions, including reducing risk and improving the economy, efficiency, and effectiveness of these consolidated agencies and programs. Realistically, however, in the short term, the magnitude of the challenges that the new department faces will clearly require substantial time and effort, and will take additional resources to make it fully effective. The Comptroller General has testified that the Congress should consider several very specific criteria in its evaluation of whether individual agencies or programs should be included or excluded from the proposed department. Those criteria include the following:

Mission Relevancy: Is homeland security a major part of the agency or program mission? Is it the primary mission of the agency or program?

Similar Goals and Objectives: Does the agency or program being considered for the new department share primary goals and objectives with the other agencies or programs being consolidated?

Leverage Effectiveness: Does the agency or program being considered for the new department promote synergy and help to leverage the effectiveness of other agencies and programs or the new department as a whole? In other words, is the whole greater than the sum of the parts?

Gains Through Consolidation: Does the agency or program being considered for the new department improve the efficiency and effectiveness of homeland security missions through eliminating duplications and overlaps, closing gaps, and aligning or merging common roles and responsibilities? 
Integrated Information Sharing/Coordination: Does the agency or program being considered for the new department contribute to or leverage the ability of the new department to enhance the sharing of critical information or otherwise improve the coordination of missions and activities related to homeland security?

Compatible Cultures: Can the organizational culture of the agency or program being considered for the new department effectively meld with the other entities that will be consolidated? Field structures and approaches to achieving missions vary considerably between agencies.

Impact on Excluded Agencies: What is the impact on departments losing components to the new department? What is the impact on agencies with homeland security missions left out of the new department?

Federal, state, and local government agencies have differing roles with regard to public health emergency preparedness and response. The federal government conducts a variety of activities, including developing interagency response plans, increasing state and local response capabilities, developing and deploying federal response teams, increasing the availability of medical treatments, participating in and sponsoring exercises, planning for victim aid, and providing support in times of disaster and during special events such as the Olympic games. One of its main functions is to provide support for the primary responders at the state and local level, including emergency medical service personnel, public health officials, doctors, and nurses. This support is critical because the burden of response falls initially on state and local emergency response agencies. The President’s proposal would transfer the Laboratory Registration/Select Agent Transfer Program—which controls biological agents with the potential for use in bioterrorism—from HHS to the new department. 
The program, currently administered by the Centers for Disease Control and Prevention (CDC), has as its mission the security of those biologic agents that have the potential for use by terrorists. The proposal provides for the new department to consult with appropriate agencies, which would include HHS, in maintaining the select agent list. In addition, the President’s proposal transfers control over many of the programs that provide preparedness and response support for state and local governments to a new Department of Homeland Security. Among other changes, the proposed legislation transfers HHS’s Office of the Assistant Secretary for Public Health Emergency Preparedness to the new department. Included in this transfer is the Office of Emergency Preparedness (OEP), which currently leads the National Disaster Medical System (NDMS) in conjunction with several other agencies and the Metropolitan Medical Response System (MMRS). The Strategic National Stockpile, currently administered by CDC, would also be transferred, although the Secretary of HHS would still manage the stockpile and continue to determine its contents. Under the President’s proposal, the new department would also be responsible for all current HHS public health emergency preparedness activities carried out to assist state and local governments or private organizations to plan, prepare for, prevent, identify, and respond to biological, chemical, radiological, and nuclear events and public health emergencies. Although not specifically named in the proposal, this would include CDC’s Bioterrorism Preparedness and Response program and the Health Resources and Services Administration’s (HRSA) Bioterrorism Hospital Preparedness Program. These programs provide grants to states and cities to develop plans and build capacity for communication, disease surveillance, epidemiology, hospital planning, laboratory analysis, and other basic public health functions. 
Except as otherwise directed by the President, the Secretary of Homeland Security would carry out these activities through HHS under agreements to be negotiated with the Secretary of HHS. Further, the Secretary of Homeland Security would be authorized to set the priorities for these preparedness and response activities. The new Department of Homeland Security would also be responsible for conducting a national scientific research and development program, including developing national policy and coordinating the federal government’s civilian efforts to counter chemical, biological, radiological, and nuclear weapons or other emerging terrorist threats. Its responsibilities would also include establishing priorities and directing and supporting national research and development and procurement of technology and systems for detecting, preventing, protecting against, and responding to terrorist acts using chemical, biological, radiological, or nuclear weapons. Portions of the Departments of Agriculture, Defense, and Energy that conduct research would be transferred to the new Department of Homeland Security. The Department of Homeland Security would carry out its civilian health-related biological, biomedical, and infectious disease defense research and development through agreements with HHS, unless otherwise directed by the President. As part of this responsibility, the new department would establish priorities and direction for programs of basic and applied research on the detection, treatment, and prevention of infectious diseases such as those conducted by the National Institutes of Health (NIH).

Transfer of Certain Public Health Programs Has Potential to Improve Coordination
The transfer of federal assets and resources in the President’s proposed legislation has the potential to improve coordination of public health preparedness and response activities at the federal, state, and local levels. 
Our past work has detailed a lack of coordination in the programs that house these activities, which are currently dispersed across numerous federal agencies. In addition, we have discussed the need for an institutionalized responsibility for homeland security in federal statute. The proposal would transfer the Laboratory Registration/Select Agent Transfer Program from HHS to the new department. The select agent program, recently revised and expanded by the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, generally requires the registration of persons and laboratory facilities possessing specific biologic agents and toxins—called select agents—that have the potential to pose a serious threat to public health and safety. Select agents include approximately 40 viruses, bacteria, rickettsia, fungi, and toxins. Examples include Ebola, anthrax, botulinum, and ricin. The 2002 act expanded the program to cover facilities that possess the agents as well as the facilities that transfer the agents. The mission of the select agent program appears to be closely aligned with homeland security. As stated earlier, one key consideration in evaluating whether individual agencies or programs should be included or excluded from the proposed department is the extent to which homeland security is a major part of the agency or program mission. By these criteria, the transfer of the select agent program would enhance efficiency and accountability. The President’s proposal also provides the potential to consolidate programs, thereby reducing the number of points of contact with which state and local officials have to contend. However, coordination would still be required with multiple agencies across departments. 
Many of the agencies involved in these programs have differing perspectives and priorities, and the proposal does not sufficiently clarify the lines of authority of different parties in the event of an emergency, such as between the Federal Bureau of Investigation (FBI) and public health officials investigating a suspected bioterrorist incident. We have reported that many state and local officials have expressed concerns about the coordination of federal public health preparedness and response efforts. Officials from state public health agencies and state emergency management agencies have told us that federal programs for improving state and local preparedness are not carefully coordinated or well organized. For example, federal programs managed by the Federal Emergency Management Agency (FEMA), Department of Justice (DOJ), OEP, and CDC all currently provide funds to assist state and local governments. Each program conditions the receipt of funds on the completion of a plan, but officials have told us that the preparation of multiple, generally overlapping plans can be an inefficient process. In addition, state and local officials told us that having so many federal entities involved in preparedness and response has led to confusion, making it difficult for them to identify available federal preparedness resources and effectively partner with the federal government. The proposed transfer of numerous federal response teams and assets to the new department would enhance efficiency and accountability for these activities. This would involve a number of separate federal programs for emergency preparedness and response, whose missions are closely aligned with homeland security, including FEMA; certain units of DOJ; and HHS’s Office of the Assistant Secretary for Public Health Emergency Preparedness, including OEP and its NDMS and MMRS programs, along with the Strategic National Stockpile. 
In our previous work, we found that in spite of numerous efforts to improve coordination of the separate federal programs, problems remained, and we recommended consolidating the FEMA and DOJ programs to improve the coordination. The proposal places these programs under the control of the Under Secretary for Emergency Preparedness and Response, who could potentially reduce overlap and improve coordination. This change would make one individual accountable for these programs and would provide a central source for federal assistance. The proposed transfer of MMRS, a collection of local response systems funded by HHS in metropolitan areas, has the potential to enhance its communication and coordination. Officials from one state told us that their state has MMRSs in multiple cities but there is no mechanism in place to allow communication and coordination among them. Although the proposed department has the potential to facilitate the coordination of this program, this example highlights the need for greater regional coordination, an issue on which the proposal is silent. Because the new department would not include all agencies with public health responsibilities related to homeland security, coordination across departments would still be required for some programs. For example, NDMS functions as a partnership among HHS, the Department of Defense (DOD), the Department of Veterans Affairs (VA), FEMA, state and local governments, and the private sector. However, as the DOD and VA programs are not included in the proposal, only some of these federal organizations would be brought under the umbrella of the Department of Homeland Security. Similarly, the Strategic National Stockpile currently involves multiple agencies. It is administered by CDC, which contracts with VA to purchase and store pharmaceutical and medical supplies that could be used in the event of a terrorist incident. 
Recently expanded and reorganized, the program will now include management of the nation’s inventory of smallpox vaccine. Under the President’s proposal, CDC’s responsibilities for the stockpile would be transferred to the new department, but VA and HHS involvement would be retained, as well as continuing review by experts of the contents of the stockpile to ensure that emerging threats, advanced technologies, and new countermeasures are adequately considered. Although the proposed department has the potential to improve emergency response functions, its success depends on several factors. In addition to facilitating coordination and maintaining key relationships with other departments, these factors include merging the perspectives of the various programs that would be integrated under the proposal and clarifying the lines of authority of different parties in the event of an emergency. As an example, in the recent anthrax events, local officials complained about differing priorities between the FBI and the public health officials in handling suspicious specimens. According to the public health officials, FBI officials insisted on first informing FBI managers of any test results, which delayed getting test results to treating physicians. The public health officials viewed contacting physicians as the first priority in order to ensure that effective treatment could begin as quickly as possible.

New Department’s Control of Essential Public Health Capacities Raises Concern
The President’s proposal to shift the responsibility for all programs assisting state and local agencies in public health emergency preparedness and response from HHS to the new department raises concern because of the dual-purpose nature of these activities. These programs include essential public health functions that, while important for homeland security, are critical to basic public health core capacities. 
Therefore, we are concerned about the transfer of control over the programs, including priority setting, that the proposal would give to the new department. We recognize the need for coordination of these activities with other homeland security functions, but the President’s proposal is not clear on how the public health and homeland security objectives would be balanced. Under the President’s proposal, responsibility for programs with dual homeland security and public health purposes would be transferred to the new department. These include such current HHS assistance programs as CDC’s Bioterrorism Preparedness and Response program and HRSA’s Bioterrorism Hospital Preparedness Program. Functions funded through these programs are central to investigations of naturally occurring infectious disease outbreaks and to regular public health communications, as well as to identifying and responding to a bioterrorist event. For example, CDC has used funds from these programs to help state and local health agencies build an electronic infrastructure for public health communications to improve the collection and transmission of information related to both bioterrorist incidents and other public health events. Just as with the West Nile virus outbreak in New York City, which initially was feared to be the result of bioterrorism, when an unusual case of disease occurs public health officials must investigate to determine whether it is naturally occurring or intentionally caused. Although the origin of the disease may not be clear at the outset, the same public health resources are needed to investigate, regardless of the source. States are planning to use funds from these assistance programs to build the dual-purpose public health infrastructure and core capacities that the recently enacted Public Health Security and Bioterrorism Preparedness and Response Act of 2002 stated are needed. 
States plan to expand laboratory capacity, enhance their ability to conduct infectious disease surveillance and epidemiological investigations, improve communication among public health agencies, and develop plans for communicating with the public. States also plan to use these funds to hire and train additional staff in many of these areas, including epidemiology. Our concern regarding these dual-purpose programs relates to the structure provided for in the President’s proposal. The Secretary of Homeland Security would be given control over programs to be carried out by HHS. The proposal also authorizes the President to direct that these programs no longer be carried out through agreements with HHS, without addressing the circumstances under which such authority would be exercised. We are concerned that this approach may disrupt the synergy that exists in these dual-purpose programs. We are also concerned that the separation of control over the programs from their operations could lead to difficulty in balancing priorities. Although the HHS programs are important for homeland security, they are just as important to the day-to-day needs of public health agencies and hospitals, such as reporting on disease outbreaks and providing alerts to the medical community. The current proposal does not clearly provide a structure that ensures that the goals of both homeland security and public health will be met.

Transfer of Control and Priority Setting over Dual-Purpose Research and Development Raises Concern
The proposed Department of Homeland Security would be tasked with developing national policy for and coordinating the federal government’s civilian research and development efforts to counter chemical, biological, radiological, and nuclear threats. In addition to coordination, we believe the role of the new department should include forging collaborative relationships with programs at all levels of government and developing a strategic plan for research and development. 
However, we have many of the same concerns regarding the transfer of responsibility for the research and development programs that we have regarding the transfer of the public health preparedness programs. We are concerned about the implications of the proposed transfer of control and priority setting for dual-purpose research. For example, some research programs have broad missions that are not easily separated into homeland security research and research for other purposes. We are concerned that such dual-purpose research activities may lose the synergy of their current placement in programs. In addition, we see a potential for duplication of capacity that already exists in the federal laboratories. We have previously reported that while federal research and development programs are coordinated in a variety of ways, coordination is limited, raising the potential for duplication of efforts among federal agencies. Coordination is limited by the extent of compartmentalization of efforts because of the sensitivity of the research and development programs, security classification of research, and the absence of a single coordinating entity to ensure against duplication. For example, DOD’s Defense Advanced Research Projects Agency was unaware of U.S. Coast Guard plans to develop methods to detect biological agents on infected cruise ships and, therefore, was unable to share information on its research to develop biological detection devices for buildings that could have applicability in this area. The new department will need to develop mechanisms to coordinate and integrate information on research and development being performed across the government related to chemical, biological, radiological, and nuclear terrorism, as well as user needs. We reported in 1999 and again in 2001 that the current formal and informal research and development coordination mechanisms may not ensure that potential overlaps, gaps, and opportunities for collaboration are addressed. 
It should be noted, however, that the President’s proposal tasks the new department with coordinating the federal government’s “civilian efforts” only. We believe the new department will also need to coordinate with DOD and the intelligence agencies that conduct research and development efforts designed to detect and respond to weapons of mass destruction. In addition, the first responders and local governments possess practical knowledge about their technological needs and relevant design limitations that should be taken into account in federal efforts to provide new equipment, such as protective gear and sensor systems, and help set standards for performance and interoperability. Therefore, the new department will have to develop collaborative relationships with these organizations to facilitate technological improvements and encourage cooperative behavior. The President’s proposal could help improve coordination of federal research and development by giving one person the responsibility for creating a single national research and development strategy that could address coordination, reduce potential duplication, and ensure that important issues are addressed. In 2001, we recommended the creation of a unified strategy to reduce duplication and leverage resources, and suggested that the plan be coordinated with federal agencies performing research as well as state and local authorities. The development of such a plan would help to ensure that research gaps are filled, unproductive duplication is minimized, and that individual agency plans are consistent with the overall goals. The President’s proposal would also transfer the responsibility for civilian health-related biological defense research and development programs to the new department, but the programs would continue to be carried out through HHS. 
These programs, now primarily sponsored by NIH, include a variety of efforts to understand basic biological mechanisms of infection and to develop and test rapid diagnostic tools, vaccines, and antibacterial and antiviral drugs. These efforts have dual-purpose applicability. The scientific research on biologic agents that could be used by terrorists cannot be readily separated from research on emerging infectious diseases. For example, NIH-funded research on a drug to treat cytomegalovirus complications in patients with HIV is now being investigated as a prototype for developing antiviral drugs against smallpox. Conversely, research being carried out on antiviral drugs in the NIH biodefense research program is expected to be useful in the development of treatments for hepatitis C. The proposal to transfer responsibility to the new department for research and development programs that would continue to be carried out by HHS raises many of the same concerns we have with the structure the proposal creates for public health preparedness programs. Although there is a clear need for the new department to have responsibility for setting policy, developing a strategy, providing leadership, and overall coordinating of research and development efforts in these areas, we are concerned that control and priority-setting responsibility will not be vested in those best positioned to understand the potential of basic research efforts or the relevance of research being carried out in other, non-biodefense programs. In addition, the proposal would allow the new department to direct, fund, and conduct research related to chemical, biological, radiological, nuclear, and other emerging terrorist threats on its own. This raises the potential for duplication of efforts, lack of efficiency, and an increased need for coordination with other departments that would continue to carry out relevant research. 
We are concerned that the proposal could result in a duplication of capacity that already exists in the current federal laboratories.

Concluding Observations
Many aspects of the proposed consolidation of response activities are in line with our previous recommendations to consolidate programs, coordinate functions, and provide a statutory basis for leadership of homeland security. The transfer of the HHS medical response programs has the potential to reduce overlap among programs and facilitate response in times of disaster. However, we are concerned that the proposal does not provide the clear delineation of roles and responsibilities that is needed. We are also concerned about the broad control the proposal grants to the new department for research and development and public health preparedness programs. Although there is a need to coordinate these activities with the other homeland security preparedness and response programs that would be brought into the new department, there is also a need to maintain the priorities for basic public health capacities that are currently funded through these dual-purpose programs. We do not believe that the President’s proposal adequately addresses how to accomplish both objectives. We are also concerned that the proposal would transfer the control and priority setting over dual-purpose research and has the potential to create an unnecessary duplication of federal research capacity.

Contact and Acknowledgments
For further information about this statement, please contact me at (202) 512-7118. Robert Copeland, Marcia Crosse, Greg Ferrante, and Deborah Miller also made key contributions to this statement.

Related GAO Products

Homeland Security
Homeland Security: Title III of the Homeland Security Act of 2002. GAO-02-927T. Washington, D.C.: July 9, 2002.
Homeland Security: New Department Could Improve Biomedical R&D Coordination but May Disrupt Dual-Purpose Efforts. GAO-02-924T. Washington, D.C.: July 9, 2002. 
Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed. GAO-02-918T. Washington, D.C.: July 9, 2002.
Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-901T. Washington, D.C.: July 3, 2002.
Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-900T. Washington, D.C.: July 2, 2002.
Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-899T. Washington, D.C.: July 1, 2002.
Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002.
Homeland Security: Proposal for Cabinet Agency Has Merit, but Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002.
Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002.
Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002.
Homeland Security: Responsibility and Accountability for Achieving National Goals. GAO-02-627T. Washington, D.C.: April 11, 2002.
Homeland Security: Progress Made; More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002.
Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001.
Homeland Security: A Framework for Addressing the Nation’s Efforts. GAO-01-1158T. 
Washington, D.C.: September 21, 2001. Public Health Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health Preparedness Programs. GAO-02- 149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01- 915. Washington, D.C.: September 28, 2001. Chemical and Biological Defense: Improved Risk Assessment and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999. Combating Terrorism National Preparedness: Technologies to Secure Federal Buildings. GAO- 02-687T. Washington, D.C.: April 25, 2002. National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002. Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002. 
Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002.

Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002.

Chemical and Biological Defense: DOD Should Clarify Expectations for Medical Readiness. GAO-02-219T. Washington, D.C.: November 7, 2001.

Anthrax Vaccine: Changes to the Manufacturing Process. GAO-02-181T. Washington, D.C.: October 23, 2001.

Chemical and Biological Defense: DOD Needs to Clarify Expectations for Medical Readiness. GAO-02-38. Washington, D.C.: October 19, 2001.

Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-02-162T. Washington, D.C.: October 17, 2001.

Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.

Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001.

Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Terrorism Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001.

Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement. GAO-01-666T. Washington, D.C.: May 1, 2001.

Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001.

Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement. GAO-01-463. Washington, D.C.: March 30, 2001.

Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001.

Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001.
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000.

Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000.

Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed. GAO/T-HEHS/AIMD-00-59. Washington, D.C.: March 8, 2000.

Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed. GAO/HEHS/AIMD-00-36. Washington, D.C.: October 29, 1999.

Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999.

Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999.

Chemical and Biological Defense: Coordination of Nonmedical Chemical and Biological R&D Programs. GAO/NSIAD-99-160. Washington, D.C.: August 16, 1999.

Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/T-NSIAD-99-184. Washington, D.C.: June 23, 1999.

Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999.

Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999.

Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999.

Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999.

Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998.

Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998.
Combating Terrorism: Observations on Crosscutting Issues. GAO/T-NSIAD-98-164. Washington, D.C.: April 23, 1998.

Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.

Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997.

Disaster Assistance

Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures. GAO-01-837. Washington, D.C.: August 31, 2001.

Chemical Weapons: FEMA and Army Must Be Proactive in Preparing States for Emergencies. GAO-01-850. Washington, D.C.: August 13, 2001.

Federal Emergency Management Agency: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-832. Washington, D.C.: July 9, 2001.

Budget and Management

Budget Issues: Long-Term Fiscal Challenges. GAO-02-467T. Washington, D.C.: February 27, 2002.

Results-Oriented Budget Practices in Federal Agencies. GAO-01-1084SP. Washington, D.C.: August 2001.

Managing for Results: Federal Managers’ Views on Key Management Issues Vary Widely Across Agencies. GAO-01-592. Washington, D.C.: May 25, 2001.

Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000.

Managing for Results: Using the Results Act to Address Mission Fragmentation and Program Overlap. GAO-AIMD-97-146. Washington, D.C.: August 29, 1997.

Government Restructuring: Identifying Potential Duplication in Federal Missions and Approaches. GAO/T-AIMD-95-161. Washington, D.C.: June 7, 1995.

Government Reorganization: Issues and Principles. GAO/T-GGD/AIMD-95-166. Washington, D.C.: May 17, 1995.
Federal, state, and local governments share responsibility for preparing for and responding to terrorist attacks. However, local governments, including police and fire departments, emergency medical personnel, and public health agencies, are typically the first to respond to an incident. The federal government historically has provided leadership, training, and funding assistance. In the aftermath of September 11, for instance, one-quarter of the $40 billion Emergency Response Fund was earmarked for homeland security, including enhancing state and local government preparedness. Because the national security threat is diffuse and the challenge is highly intergovernmental, national policymakers must formulate strategies with a firm understanding of the interests, capacity, and challenges facing those governments. The development of a national strategy will improve national preparedness and enhance partnerships among federal, state, and local governments. The creation of the Office of Homeland Security is an important and potentially significant first step. The Office of Homeland Security's strategic plan should (1) define and clarify the appropriate roles and responsibilities of federal, state, and local entities; (2) establish goals and performance measures to guide the nation's preparedness efforts; and (3) carefully choose the most appropriate tools of government to implement the national strategy and achieve national goals. The President's proposed Homeland Security Act of 2002 would bring many federal agencies with homeland security responsibilities--including public health preparedness and response--into one department to mobilize and focus assets and resources at all levels of government. GAO believes that the proposed reorganization has the potential to reduce fragmentation in the coordination of public health preparedness and response programs at the federal, state, and local levels. The proposal would institutionalize the responsibility for homeland security in federal statute.
In addition to improving overall coordination, the transfer of programs from multiple agencies to the new department could reduce overlap among programs and facilitate response in times of disaster. However, there are concerns about the proposal to transfer control of public health assistance programs that serve both public health and homeland security functions from the Department of Health and Human Services to the new department. Transferring control of these programs, including priority setting, to the new department has the potential to disrupt some programs that are critical to basic public health responsibilities. GAO does not believe that the President's proposal is sufficiently clear on how both the homeland security and public health objectives would be accomplished.
Background

The study of human factors examines how humans interact with machines and with other people (pilots, air traffic controllers, or design and acquisition personnel) and determines whether procedures and regulations take into account human abilities and limitations. Identifying the potential for human error can reduce the need to replace or modify equipment and procedures later. Human factors affect the operation of all of FAA’s functions, including research, the acquisition of equipment, and safety. FAA’s work on human factors focuses on such issues as whether equipment is designed to enhance operators’ performance and minimize errors and whether the procedures used by air traffic controllers promote safe operations. For example, much of the information conveyed to pilots by air traffic controllers has been standardized to minimize the possibility of misunderstanding. (See app. I for a more complete definition of human factors and examples of how human factors have affected safety in specific situations.) The Aviation Safety Research Act of 1988 directed FAA to augment its research on human factors and coordinate its work with that of NASA and Defense because the Congress believed that FAA did not have sufficient expertise in all areas of human factors. A report by the Office of Technology Assessment, cited in the House report on the act as the basis for the legislation, recommended that FAA allocate resources for developing its regulatory support staffs’ expertise in human factors and establish a focal point for human factors within the agency. In addition, the Congress has indicated through the budget process that research on human factors should be a priority in FAA’s overall research program. Figure 1 compares the congressional appropriations for FAA’s research on human factors with FAA’s funding requests.
FAA’s Organization for Human Factors

Key aspects of FAA’s human factors organization are the 1993 policy, the position of Chief Scientist, and the guidance on considering human factors in the acquisition process. The 1993 policy prescribes the roles and responsibilities of FAA’s assistant and associate administrators and program directors, as well as of the Human Factors Coordinating Committee (HFCC), including its chair, the Chief Scientist. The Chief Scientist also manages the Human Factors Division, which is housed in FAA’s Office of Aviation Research. (Fig. 2 illustrates the location of the Human Factors Division within FAA’s organizational structure.) On April 1, 1996, FAA changed its acquisition process and method of incorporating the consideration of human factors into that process. The creation of an Office of System Safety in 1995 may further affect the organizational structure for human factors.

Human Factors Policy

In October 1993, FAA issued an order for incorporating and coordinating the consideration of human factors throughout the agency. Under the order, assistant and associate administrators and program directors are responsible for, among other things, establishing formal procedures to ensure the systematic consideration of human factors within their organizations. However, FAA’s order does not prescribe the (1) methods for considering human factors, (2) minimal standards for incorporating human factors, or (3) requirements for seeking guidance on human factors from specialists that the administrators and directors are to follow. FAA officials in the three units where we held discussions—research and acquisitions, regulation and certification, and air traffic services—indicated that they have not fully established formal procedures for incorporating the consideration of human factors in their activities.
FAA created the Human Factors Coordinating Committee in 1989 to facilitate the agency’s work on human factors and enhance the use of information on human factors. However, according to the Chief Scientist, the committee is not a decision-making body, even though its members are designated by the agency’s assistant and associate administrators and program directors. Instead, the Chief Scientist said, the committee is primarily a forum for exchanging information. As the committee’s chair, the Chief Scientist carries out most of the committee’s responsibilities.

Chief Scientist

In addition to chairing the Human Factors Coordinating Committee, the Chief Scientist heads the Human Factors Division. This division is housed within the headquarters Office of Aviation Research, under the Associate Administrator for Research and Acquisitions. Among other things, the Human Factors Division develops policies on human factors that promote the productivity and safety of the national airspace system. The division is staffed by seven professional human factors specialists—six full-time and one part-time. According to its mission statement, the Human Factors Division seeks to provide scientific and technical support for FAA’s research on human factors in civil aviation and its applications in the agency’s programs for acquisitions, regulation and certification, and air traffic services. However, we found that the Human Factors Division’s ability to provide this support depends on the extent to which the associate administrators and program directors use the division. FAA does not require the division—or any other unit with scientific and technical expertise in human factors—to review the quality of the work on human factors performed by other FAA units or contractors. FAA does not require its administrators to seek guidance from human factors specialists, such as those in the Human Factors Division.
Although the scope of our audit did not include a detailed examination of the application of human factors in acquisitions, we have previously found inadequate technical oversight in FAA’s management of acquisitions. For example, in a previous review of FAA’s modernization program, we found that not following the technical principles of the human factors discipline in designing equipment delayed some projects. Instead of relying on the discipline’s objective criteria for measuring the performance of alternative designs, FAA consulted users’ preferences, only to find that its efforts were misdirected because different groups of users had different preferences.

Incorporating the Consideration of Human Factors in Acquisitions and Operations

Recent legislative and organizational changes may affect how the formal consideration of human factors is incorporated in the acquisition process and may strengthen the application of human factors in operations, such as safety.

Acquisitions and Human Factors

Several offices under the Associate Administrator for Research and Acquisitions are responsible for developing and acquiring new systems, such as air traffic control equipment. According to staff in the Human Factors Division, applying considerations of human factors increases a product’s or a process’s performance and efficiency while decreasing developmental, operational, and maintenance costs over the lifetime of the product or process. To develop and deploy equipment more efficiently, in 1995, FAA adopted a new management approach that relies on integrated product teams, whose members include end-users, contractors, and all other parties responsible for developing or procuring new equipment or processes. As a first step in ensuring that human factors are considered in acquisitions, the Human Factors Division developed a requirement in FAA’s 1993 acquisition policy that all new acquisition projects include a human factors plan.
Such a plan was to (1) describe how considerations of human factors should be applied and (2) document how a piece of equipment or a process should perform when operated as expected by the end-users. However, on April 1, 1996, in response to new legislation exempting FAA from most federal procurement statutes, FAA implemented the Federal Aviation Administration Acquisition Management System, which superseded FAA’s 1993 acquisition policy. According to the initial guidance provided for this new system, human factors may be formally considered at an earlier stage in the acquisition process than previously, but this early consideration is not required. Furthermore, the extent to which human factors should be considered is not specified in the system’s guidance, nor is a separate plan for human factors required. There is no requirement for integrated product teams to obtain recommendations from human factors specialists.

Operations and Human Factors

According to some FAA human factors specialists, considering human factors is key to improving the safety of aviation operations. In 1990, the FAA Administrator testified before the Congress that the agency’s objective in aviation safety is zero accidents. The following year, the Administrator testified that human error was the most serious impediment to FAA’s achieving that goal. He said that FAA planned to accentuate its consideration of human factors in all of its programs, from training to procurement. To help reach its goal of zero accidents in aviation operations, FAA, in 1995, created a staff Office for System Safety. This office is headed by the Assistant Administrator for System Safety, who reports directly to the FAA Administrator. The objective of this office is to proactively determine potential sources of accidents and prevent them from occurring. The Assistant Administrator for this office has indicated that human factors will be an important part of his office’s work.
Although the Human Factors Division administers FAA’s research on human factors, some of which is directly concerned with safety, its staff are not involved in some applications of human factors to safety. For example, the Office of Regulation and Certification—responsible for aircraft certification, safety inspections, and flight operational safety—plans to strengthen its emphasis on human factors by hiring at least one specialist, rather than rely on the specialists in the Human Factors Division. According to the Associate Administrator for Regulation and Certification, the specialists in the Human Factors Division do not have the expertise needed to apply considerations of human factors to developing requirements for regulation and certification.

FAA’s Research on Human Factors

The Human Factors Division is responsible for identifying aviation-related issues in research on human factors and for allocating and coordinating FAA’s resources for internal and external research on human factors.

Identifying Research Issues

To identify aviation-related issues in research on human factors, the Human Factors Division consults with FAA units and other members of the aviation community. To develop its initial objectives for research on human factors, FAA participated in a task force in April 1989, sponsored by the Air Transport Association of America. This task force identified a number of significant research topics, which FAA incorporated into the National Plan for Civil Aviation Human Factors. This plan—developed by the Human Factors Division in conjunction with the Department of Defense, NASA, industry, and academia—includes a framework that categorizes research on the basis of five priorities, or “thrusts,” and provides guidelines for initiating and managing research on human factors in aviation. (See app. II for a description of each priority and a listing of the ongoing projects under each.)
Besides participating in the task force, the Human Factors Division has worked with the aviation community to develop research issues by participating in conferences and workshops. In comparing FAA’s processes with the aviation community’s, we found that not only does FAA look to the aviation community, but the aviation community also often looks to FAA to focus attention on particular research issues. For example, FAA sponsored a national conference in 1995 on the challenge of approaching zero accidents. In addition, the Human Factors Division identifies research issues that the aviation community may not. For example, by managing the research sponsored by FAA units, the Human Factors Division is able to identify research needs that may apply to other FAA units and the aviation community as a whole. According to the Assistant Administrator, the newly created Office of System Safety will proactively seek to identify safety issues that may indicate the need for additional research on human factors. For example, this office has assumed responsibility from the Office of Aviation Safety for an ongoing project to develop methods for extracting information on human factors from FAA’s existing sources of data. However, according to the Assistant Administrator, this office has not yet developed a research agenda. While staff from the office have met with personnel from the Human Factors Division, no joint activities have been established and no plans have been developed for interactions between the two units. Although the Human Factors Division identifies FAA’s needs for research on human factors, at least one operating unit is also independently identifying and executing its own research needs. The Office of Regulation and Certification identifies research issues on the basis of its needs and determines what organization will conduct the research.
Specifically, the Associate Administrator for Regulation and Certification has established a Human Factors Task Force to review existing literature; obtain information from avionics manufacturers, operators, and industry technical groups; and conduct simulations. The task force was not chartered to initiate research; however, it may make recommendations leading to research on human factors. The Human Factors Division was involved neither in determining the need for the task force nor in planning its work. The possibility exists that the task force’s recommendations could lead the Office of Regulation and Certification to initiate research duplicating the work of the Human Factors Division. Thus, FAA would be deprived of the opportunity to leverage resources for research.

Allocating and Coordinating Resources for Research

Although the Human Factors Division is primarily responsible for allocating and coordinating FAA’s resources for internal and external research on human factors, FAA’s other units are not required to coordinate their research with the division, whether their research is performed internally, by the units themselves, or externally, through interagency agreements or through contractors.

Internal Allocation and Coordination

Starting in 1995, the Office of Aviation Research made the Human Factors Division responsible for allocating most of the agency’s Research, Engineering, and Development funds for research on human factors—nearly $28 million. In fiscal year 1995, the Human Factors Division funded research projects in support of FAA’s acquisition ($5 million), regulation and certification ($12.5 million), and air traffic services ($10.5 million) programs.
The Human Factors Division has also assumed the responsibility for funding contracts or grants for research on human factors at entities such as FAA’s Civil Aeromedical Institute (CAMI) located in Oklahoma City, FAA’s Technical Center near Atlantic City, NASA, the Department of Transportation’s Volpe Transportation Center, and other institutions. Previously, when its research on human factors was funded solely by its operating units, FAA provided no centralized planning for and oversight of its core research on human factors. Now that the Human Factors Division is coordinating FAA’s funding for research (conducted by CAMI, FAA’s Technical Center, NASA, the Volpe Transportation Center, and other institutions), it is constructing a combined database of ongoing research projects, which should give greater visibility to FAA’s research on human factors and permit closer monitoring of the research projects that the agency has funded. As a part of its research administration, the Human Factors Division also monitors whether scientific and technical principles are being applied to the research it funds. Some FAA units may not be coordinating their research on human factors with the Human Factors Division. For example, some integrated product teams may be conducting such research through contractors, but FAA has no mechanism to ensure that the information developed by a private contractor for one team is made available to another contractor addressing similar issues for another team. Thus, because the FAA units that sponsor their own research on human factors are not required to coordinate their work with that of other units or to inform the Human Factors Division about their research, the possibility of duplication exists.

External Coordination

The Human Factors Division has memoranda of agreement or understanding with NASA and the Department of Defense.
According to officials in both the Human Factors Division and NASA, a beneficial result of their coordination is that NASA has not duplicated research being conducted by the division. In addition, the Human Factors Division contracts with NASA to conduct some of its research on human factors in areas where NASA has more experience and/or expertise. FAA also contracts with the Department of Defense to conduct research on human factors. While much of Defense’s research is specific to defense needs, Defense officials indicated that using the framework articulated in the National Plan for Civil Aviation Human Factors will enable the Department to better coordinate its research on human factors with FAA’s work in similar areas.

Conclusions

The organizational structure for FAA’s work on human factors is still evolving. Therefore, it is too soon to evaluate the effectiveness of the agency’s procedures for incorporating the consideration of human factors throughout FAA and for monitoring the quality of the agency’s work on human factors. Nonetheless, we have found that some FAA units are not coordinating their research with the Human Factors Division, although this division is currently primarily responsible for allocating and coordinating FAA’s resources for internal and external research on human factors. Without agencywide coordination of the research on human factors, the potential for duplication exists and the opportunity to leverage the agency’s research dollars by combining related projects is diminished.

Recommendation

To reduce the possibility of duplication and maximize the opportunity to leverage resources for research on human factors, we recommend that the Secretary of Transportation direct the Administrator, FAA, to ensure that all units within FAA coordinate their research through the agency’s Human Factors Division.

Agency Comments

We provided copies of a draft of this report to the Department of Transportation (DOT) and FAA.
We met with officials from the Office of the Secretary of Transportation, including the Chief of the Audit Liaison Division, and FAA officials, including the Special Assistant to the Associate Administrator for Regulation and Certification and the Chief Scientist and Technical Advisor on Human Factors, who generally agreed with the report’s findings and recommendation. They provided us with information clarifying FAA’s formal consideration of human factors in the agency’s new acquisition process; we incorporated this information into the text as necessary. According to the Office of Regulation and Certification, the possibility that its Task Force on Human Factors would recommend research duplicating the work of the Human Factors Division is minimal because the research might be administered by the Human Factors Division. However, the Human Factors Division is concerned that, without adequate coordination, the task force could initiate future research that might duplicate the division’s work. FAA indicated that the Office of Regulation and Certification is taking steps to hire a human factors specialist whose first duty will be to develop, in conjunction with the Human Factors Division, a documented process for coordinating research. Unless FAA ensures that research will be administered through the Human Factors Division or until the agency establishes a documented process for coordinating research, we continue to believe that the possibility of duplication exists. DOT expressed concern about our discussion of FAA’s practice of not reviewing the quality of the agency’s work on human factors, noting that quality is difficult to assess. While we agree that assessing quality is difficult, we continue to believe that scientific and technical standards are available for assessing the quality of the agency’s work on human factors. We further believe that adherence to such standards is important to ensure the usefulness of the work’s results. 
Scope and Methodology

To determine how FAA has incorporated the consideration of human factors into its research, acquisition, and safety programs, we examined FAA’s organizational structure and reviewed FAA’s policy orders, formal guidance, and strategies for compiling and applying information on human factors. We interviewed FAA officials in the research and acquisitions, regulation and certification, and air traffic services units, but we did not discuss the consideration of human factors in the airports and civil aviation security units because of time constraints. To determine the processes that FAA uses to identify issues in aviation-related research on human factors and compare these processes to those of the aviation community, we reviewed FAA’s plans and research abstracts, interviewed agency officials, and contacted members of the aviation community. To determine how FAA allocates and coordinates resources internally and externally, we interviewed FAA, NASA, and Defense officials and other members of the aviation community and reviewed the legislative requirements for these activities. Because FAA’s work on human factors was not centralized, we relied on data from the Human Factors Division on activities in the Research, Engineering, and Development budget. However, we were not able to obtain similar information for the work on human factors supported through other FAA accounts because such information is not available. We conducted our review from September 1995 through June 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Transportation and the FAA Administrator. We will also make copies available to others on request. Please call me at (202) 512-3650 if you or your staff have any questions about this report.

Definition of Human Factors

The human factors discipline is a scientific and technical approach for designing, operating, and maintaining systems.
The goal of this approach is to improve the efficiency and reliability of systems by enhancing the integration of these systems’ components. These components generally consist of the facilities and equipment, rules and regulations, human operators, and environment (physical, economic, political, and social) in which they operate. Thus, the human factors discipline tries to optimize the interactions between the components of a system. To achieve its goal, the human factors discipline relies on research that combines human sciences and systems engineering. In aviation, the application of human factors research focuses on the complex connections between (1) the members of the flight crew, (2) the flight crew and the aircraft they pilot, (3) the flight crew and the air traffic controllers, (4) the air traffic controllers and their equipment, and (5) the rules, regulations, laws, and standard operating procedures that govern aviation operations. Table I.1 illustrates human factors issues in selected aviation incidents, pairing the type(s) of human factors issue(s) in each incident with the specific potential issue(s). The specific potential human factors issues include the following:
- The crew was not familiar with sophisticated new flight control equipment requiring accurate interpretation and operation; the crew could have (1) misinterpreted the vertical speed/flight path angle display on the flight control computer or (2) entered the wrong data.
- Communication and coordination between the captain and the first officer could have been poor, and both might have had limited experience with this type of aircraft.
- A last-moment air traffic control approach procedure might have distracted the crew’s attention from the aircraft’s position in relation to the airport and to the altitude/descent rate.
- The pilot disengaged two computerized safety features, an autothrottle and an alpha floor protection function, without fully understanding their functions: the autothrottle maintains a specified speed, and the alpha floor protection function prevents the engine from stalling.
- An antivibration clamp on an engine-mounted hydraulic tube was missing because a maintenance technician forgot to install it.
- FAA did not initially determine the systems’ operational requirements and should have established performance baselines for the systems being developed.
- The pilot did not achieve a satisfactory level of performance, despite remedial training, and may not have possessed the skills needed to become competent. The pilot and first officer had little experience flying together, and the first officer may not have known that the pilot’s skills were inadequate.
Table notes:
- French Transport Ministry officials, as quoted in an article appearing in Aviation Week and Space Technology (Jan. 3, 1994). We did not verify the accuracy of the facts presented in this article.
- Steven M. Casey, Set Phasers on Stun and Other True Tales of Design, Technology, and Human Error, Santa Barbara: Aegean Publishing Co., 1993.
- R. Curtis Graeber and David A. Marx, “Reducing Human Error in Aircraft Maintenance Operations,” Seattle: Boeing Commercial Airplane Group (presented at the Flight Safety Foundation’s 46th Annual International Air Safety Seminar, Nov. 8-11, 1993).
- Former FAA contractor.
- American Eagle officials, as quoted in an article in USA TODAY (Sept. 27, 1995). We did not verify the accuracy of the facts presented in this article.

Human Factors Research Areas and Ongoing Research Projects
FAA’s framework for research on human factors is organized into five broad areas: (1) human-centered automation, (2) information management and display, (3) selection and training, (4) human performance assessment, and (5) bioaeronautics.
Human-Centered Automation
Human-centered automation research focuses on the role of the operator and the effects of using automation to assist humans in accomplishing their assigned tasks with greater safety and efficiency. The research in this area is designed to identify and apply knowledge of the relative strengths and limitations of humans in an automated environment. It investigates the implications of computer-based technology for the design, evaluation, and certification of controls, displays, and advanced systems.

Information Management and Display
Research conducted under this area seeks to improve safety and performance by addressing the presentation and transfer of information among components in the national airspace system (NAS), including controllers’ workstations, the flight deck, operational and airway facilities, and all the interfaces in between.

Selection and Training
The National Airspace System’s efficiency and effectiveness are enhanced through research to understand the relationship between human abilities and the performance of aviation tasks; to enhance the measures and methods for predicting future job/task performance; to develop a scientific basis for designing training programs, devices, and aids; to define criteria for assessing future training requirements; and to identify new ways for selecting aviation system personnel. The recipients of research findings on selection and training are flight crews, air traffic controllers, airways facilities systems management personnel, aircraft maintenance technicians, airport security personnel, and others in the aviation community who contribute to safety and efficiency through staffing and training decisions.
Areas of Ongoing Research
- Selection, Training, Certification, and Staffing of ATC Personnel
- Model Advanced Qualification Program (AQP) (ATCS/PTS)

Human Performance Assessment
Research in this area is designed to improve the understanding of human performance capabilities and limitations in aviation and the means to measure them. Individuals’ cognitive and interpersonal skills, teams’ characteristics, and organizational factors directly shape the safety and efficiency of aviation operations. This research will provide information to improve safety and productivity through better equipment design, training, and system performance.
Areas of Ongoing Research
- Automated Analysis of Machine Measured Performance
- Human Performance in Inspection
- Basic Scientific Information on Factors Impacting Controller Performance
- Pilot-ATC Communication: Identification of Human Factors Associated With Effective Transfer of Information
- Crew Resource Management (CRM) in Aircraft Maintenance and Inspection
- Air Crew Performance Measurement
- Assessing Automation Impacts on Controller/Sector Performance and Aviation System Safety
- Monitoring Organizational and Environmental Factors Affecting Controllers
- Basic Scientific Knowledge of Human Performance Factors
- Models of Aeronautical Decision-Making
- Color Vision Deficiency and Use of Advanced Color-Coded Displays
- Assessment of ATCs Crew Performance: Development and Validation
- Readiness to Perform (RTP) Test Validation
- Glare Vision Testing in the Certification of Pilots
- Human Factors of Performance and Pilot Aging
- Assessing Automated ATC Systems Through the Use of NAS Data
- Organizational Impact of New Technologies on Airway Facilities
- Human Factors Considerations in the Use of Nondestructive Test (NDT)
- CAMI Cabin Safety Database
- Shiftwork in Controllers of Varying Age
- Factors in Aircraft Accident Rates (Utilizing the Consolidated Database)

Bioaeronautics
This area, which focuses on the bioengineering, biomedicine, and biochemistry associated with
performance and safety, seeks to enhance personal performance and safety by maximizing the health and physiological integrity of crews and passengers.

Major Contributors to This Report
Resources, Community, and Economic Development Division, Washington, D.C.
Atlanta Field Office: Veronica O. Mayhand
Pursuant to a congressional request, GAO examined the Federal Aviation Administration's (FAA) organizational structure for incorporating human factors into aviation-related research. GAO found that: (1) FAA has incorporated a human factors policy order, a Chief Scientific and Technical Advisor for human factors, and guidance for considering human factors in the acquisition process; (2) the order assigns responsibility for ensuring that human factors are considered in FAA research activities, but does not establish minimal standards for meeting this requirement; (3) recent legislative and organizational changes may affect the application of human factors research in FAA acquisitions and operations; (4) the FAA Acquisition Management System considers human factors at an earlier stage in the acquisition process, but there is no mention of the extent to which such factors should be considered; (5) the FAA Human Factors Division (HFD) consults with other members of the aviation community and participates in industry task forces and conferences to identify issues associated with human factors in aviation; (6) HFD solicits ideas for research from FAA acquisition and operating units and is responsible for internal and external coordination of FAA research; (7) HFD allocates most FAA funding for core research, and enters into interagency agreements with the National Aeronautics and Space Administration and the Department of Defense to coordinate the agencies' human factors research; and (8) the possibility of duplicating human factors research exists because FAA units are not required to coordinate their research activities.
Background
The space shuttle is the world’s first reusable space transportation system. It consists of a reusable orbiter with three main engines, two partially reusable solid rocket boosters, and an expendable external fuel tank. The space shuttle is an essential element of NASA’s transportation plan that includes a framework for maintaining shuttle fleet capability to fly safely through 2020. The space shuttle is NASA’s largest individual program, accounting for about 25 percent of the agency’s fiscal year 2004 budget request. Since it is the nation’s only launch system capable of transporting people, the shuttle’s viability is critical to the space station. We have reported in the past that extensive delays in the development and assembly of the ISS and difficulties defining requirements and maturing technologies for the next generation space transportation systems have hindered the development and funding of a long-term space transportation program. We have also testified that NASA faced a number of programmatic and technical challenges in making shuttle upgrades, including revitalizing its workforce and defining shuttle technical requirements. In another report, we noted that NASA continued to rely on qualitative risk assessments to supplement engineering judgments and had made only limited progress in the use of quantitative assessment methods. Recognizing such needs, NASA has taken steps to bring a more formal approach to identifying, prioritizing, and funding improvements. In February 1997, NASA established the Space Shuttle Program Development Office at NASA’s Johnson Space Center to sustain, improve, and add capability to the space shuttle through an upgrade program. In December 2002, a new selection and prioritization process for upgrades was implemented through the Service Life Extension Program.
The SLEP provided a formal process to select, prioritize, and fund upgrades needed to keep the shuttle flying safely and efficiently and allow upgrades to be evaluated and approved on a priority basis. Shuttle upgrades are items that contribute toward the Space Shuttle Program goals to (1) fly safely, (2) meet the manifest, (3) improve mission supportability, and (4) improve the system in order to meet NASA’s commitments and goals for human operations in space. According to NASA, upgrades achieve major reductions in the operational risks inherent in the current systems by making changes that eliminate, reduce, or mitigate significant hazards and critical failure modes and that increase the overall reliability of the current system with respect to the likelihood of catastrophic failure. Examples of upgrade projects currently funded to improve safety include Cockpit Avionics, Vehicle Main Landing Gear Tire and Wheel, External Tank Friction Stir Weld, and Shuttle Main Engine Advanced Health Management System.

Shuttle Requirements Process Lacks Systematic Approach
To keep the shuttle flying safely, NASA needs to fully implement an upgrade program to modernize various shuttle components. However, efforts to do so have been stymied by the agency’s inability to develop a long-term strategic investment plan and a systematic approach for defining shuttle requirements, because the spacecraft’s life expectancy and mission have continued to change. Key decisions about the ultimate life and mission of the basic elements of the integrated transportation plan—the ISS and the Orbital Space Plane (OSP)—were not made prior to fully defining shuttle requirements. Originally, the shuttle was designed for a 10-year/100-flight service—transporting satellites and other cargo for the Department of Defense and others and placing in orbit and maintaining the Hubble Space Telescope—after which its life was to end.
During this time, NASA was reluctant to make long-term investments due to the shuttle’s perceived short life expectancy. With the advent of the ISS, the agency’s transportation plan indicated that the shuttle would be used to operate and support the ISS until 2012, when a new space launch vehicle was to take over that mission. Recently, use of the new launch vehicle was de-emphasized by a new ISTP, which in its place proposed development of an OSP (to transfer the crew to the ISS) and continued use of the shuttle (to transfer cargo). The new plan proposes upgrading the shuttle’s software and hardware to extend its operational life to 2020. NASA recognizes the need for a systematic approach for defining requirements to upgrade the shuttle, and it recently institutionalized a new process to select and prioritize shuttle upgrades. However, NASA has not yet fully defined the basic elements of the ISTP—which include the ISS, the OSP, and the Next Generation Launch Technology. NASA has not precisely determined, for example, when the ISS will be completed, what its ultimate mission and useful life will be, or even how many astronauts will be on board. Specifically, NASA has not made explicit decisions on shuttle requirements, such as its future mission, lift capability, and life expectancy. According to NASA officials, these decisions will significantly affect shuttle upgrades. Similarly, the CAIB found that the shifting date for shuttle replacement has severely complicated decisions on how to invest in shuttle upgrades.
By March 2003, NASA had prepared a formal management plan documenting roles and responsibilities and defining an annual process for selecting and prioritizing upgrade projects and studies. Prior to the SLEP, NASA had no documented systematic selection process, and managers made decisions on upgrades using their professional insight and judgment and a limited number of quantitative or analytic tools rather than extensive use of hard data or rigorous analysis. As a result, projects that were identified, funded, and implemented flowed from an informal “bottom-up” approach that relied largely on the insight and judgment of selected managers and limited use of quantitative tools.

Earlier Process to Identify and Prioritize Upgrades
According to NASA officials, prior to the new SLEP process, the identification, selection, and prioritization of shuttle upgrade projects largely involved an informal bottom-up approach. The upgrades were first proposed in an open and continuous call for project concepts and were drawn from shuttle element project organizations, industry, or other shuttle program stakeholders. Upgrade projects would then go to the Space Shuttle Program Manager, the Shuttle Program Development Manager, and the directors of the affected NASA field centers, who would provide proposed projects to the Associate Administrator for Space Flight, who would select and prioritize the projects. This early process was driven much more strongly by collective management insight, or “judgment,” than by hard data or rigorous analysis. During this process, there was little guidance from top management as to how the decisions on shuttle upgrades integrated with all the other elements of the ISTP. The identification, selection, and prioritization of the Cockpit Avionics Upgrade (CAU) is one example of the lack of a documented, structured, and systematic selection process prior to the SLEP.
The CAU, estimated to cost $442 million, is the most costly of NASA’s currently approved upgrade projects. The CAU will update the cockpit’s dials and gauges with a modern instrument panel. By automating complex procedures in the shuttle cockpit, the upgrade is intended to improve the situational awareness of the crew and to better equip them to handle potential flight problems by reducing crew workload. (See fig. 1.) Managers gave the CAU project the highest priority based on their professional insight and judgment and a limited number of quantitative or analytic tools rather than extensive use of hard data or rigorous analysis. The upgrade was ranked as the highest priority based on the perceived importance of crew situational awareness. NASA did not have a metric to show the relationship of the cost of the upgrade to an increase in shuttle life and/or safety. The ranking was essentially a collaborative voting process based on managers’ professional knowledge that crew error accounts for 50 percent of all incidents. Because crew awareness depends on a number of human factors, a quantitative metric, such as NASA’s Quantitative Risk Assessment System, could not be used, since it did not contain key human attributes needed to evaluate the percentage of safety improvement of the upgrade project.

The SLEP Process Currently in Place
In December 2002, NASA initiated a SLEP as the primary framework for ensuring safe and effective operations, along with a management plan a few months later, documenting roles and responsibilities and an annual process for selecting and prioritizing upgrade projects and studies. The new process, which was first used in March 2003 at the first SLEP Summit, uses panels of experts from NASA, mostly chaired by the Deputy Center Directors, who meet periodically to develop and assess project recommendations.
The SLEP is structured around eight panels of senior managers, including an outside panel of industry experts and an Integration Panel, that make greater use of quantitative tools in areas such as safety and sustainability. The Integration Panel refines the prioritized recommendations of each panel into final recommendations to a group of top-level managers known as the Space Flight Leadership Council (the Council). As a result of the last Summit in March 2003, the Council approved all project recommendations of the Integration Panel with a total estimated cost of about $1.7 billion for fiscal years 2004-08. (See app. II.) In making its recommendations, the Council was not restricted by fiscal constraints. The Council endorsed 60 SLEP upgrade projects for fiscal year 2004 costing $416 million. By contrast, NASA’s fiscal year 2004 budget request, submitted in February 2003, asked for $379 million. The difference is being deliberated within NASA’s internal budget process. One product resulting from the SLEP 2003 Summit was NASA’s selection and identification of upgrade projects related to safety improvement, sustainability, and requirements for new capabilities as defined by “customers” such as the ISS. NASA then placed the projects into one of the following four categories: (1) “Should Start”—projects strongly recommended for start in fiscal year 2004 that would create near-term risk if they did not start; (2) “Existing Commitments”—projects previously authorized; (3) “Foundational Activities”—projects that add insight into the current condition of assets; and (4) “Projects and Studies”—system-specific activities at various levels of maturity. (See table 1.) NASA also considers development of the infrastructure to sustain shuttle operations through 2020 to be just as important as upgrades to keep the shuttle flying safely.
One example of a sustainability project for fiscal year 2004 is the replacement of the roof of the 39-year-old Vehicle Assembly Building at Kennedy Space Center, which is in poor condition, as shown by the bubbles that have developed in its surface. (See fig. 2.) The roof replacement is estimated to cost $16 million and is part of NASA’s total spending on infrastructure of $54 million in fiscal year 2004.

Further Improvements in the SLEP Possible
NASA needs better analytic tools to strengthen the basis for identifying and selecting shuttle upgrades. NASA uses Probabilistic Risk Assessment (PRA) methodologies, specifically the Quantitative Risk Assessment System, to improve safety by assessing the relative risk reduction of potential upgrade projects to overall shuttle risk. However, program managers are aware that the PRA is incomplete and does not contain certain key attributes that would make it more accurate, reliable, and useful. Early next year, they plan to begin using a revised PRA more oriented toward the shuttle. In addition, the Manager of the Shuttle Program Development Office believes it is important to develop a new Sustainability Health Metric System in order to mitigate the risk that an asset required to fly may not be available. The metric would score a proposed sustainability project after an evaluation of a set of common sustainability factors for all elements of shuttle flight and ground systems and subsystems. Similarly, the CAIB could not find adequate application of a metric that took an integrated, systematic view of the entire space shuttle system. NASA is considering development of a sustainability metric, and the Manager of the Shuttle Program Development Office believes that, if approved, it could be ready for use during the SLEP Summit in February 2004. NASA expects that the nomination of projects at that meeting will come from a more comprehensive evaluation through extensive use of hard data and rigorous analysis.
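The kind of scoring such a metric might perform can be sketched as follows. This is a hypothetical illustration only: the factor names, weights, and example project ratings below are assumptions made for the sketch, not NASA’s actual Sustainability Health Metric System or Quantitative Risk Assessment System.

```python
# Hypothetical sketch of a weighted scoring metric for ranking shuttle
# projects. Each project is rated 0-1 on a common set of factors, and a
# weighted sum yields a comparable priority score. The factors, weights,
# and ratings are illustrative assumptions, not NASA data.

WEIGHTS = {"risk_reduction": 0.5, "asset_condition": 0.3, "schedule_urgency": 0.2}

def score(project):
    """Return a weighted score in [0, 1]; higher means higher priority."""
    return sum(WEIGHTS[factor] * project[factor] for factor in WEIGHTS)

projects = [
    {"name": "VAB roof replacement", "risk_reduction": 0.4,
     "asset_condition": 0.9, "schedule_urgency": 0.7},
    {"name": "Cockpit avionics", "risk_reduction": 0.8,
     "asset_condition": 0.3, "schedule_urgency": 0.4},
]

# Rank projects by score, highest first.
ranked = sorted(projects, key=score, reverse=True)
print([p["name"] for p in ranked])
```

A weighted sum makes heterogeneous projects comparable on one scale, but the weights themselves embed judgment, which echoes the report’s point that such metrics complement, rather than replace, managers’ professional insight.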
Although creation of the SLEP may improve the identification and selection process, further improvements are possible. According to SLEP program officials responsible for identifying, selecting, and prioritizing shuttle upgrades, they need clear guidance from top management as to how those decisions integrate with the other elements of the ISTP, such as the ISS and the OSP. In addition, SLEP program officials said the identification and selection of upgrades for the shuttle program lack a clear, measurable metric showing the relationship of an upgrade investment to an increase in shuttle operational life. They believe such a metric would be useful to decision makers in identifying, selecting, and prioritizing shuttle upgrades. Finally, according to NASA Headquarters officials, recommendations of the CAIB are under study and will likely change the selection and prioritization of shuttle upgrades for both the near term and the long term.

Shuttle Upgrades Could Potentially Cost Billions More Than Currently Estimated
Until NASA finalizes the basic requirements for the shuttle and further improves its process for identifying and selecting upgrades, it will be difficult to accurately and reliably estimate the total cost of upgrades through 2020. NASA’s current estimate for the cost of upgrading the shuttle is itself highly uncertain. Decision makers need accurate and reliable estimates of the cost of upgrading the shuttle to continue operations. We found that the agency has not yet attempted to prepare a detailed life-cycle cost estimate for all upgrades through 2020. NASA did prepare a rough order of magnitude estimate based on an analysis of current project estimates through 2020. The total cost of shuttle upgrades, however, could potentially be significantly greater, as the estimate did not include potential projects such as a crew escape system.
In addition, a number of potential changes could significantly increase the estimated cost, such as changes in program requirements, schedule slippages caused by delays in software and hardware integration, and implementation of recommendations of the CAIB.

Current Estimate Is Rough Order of Magnitude
A NASA official stated that it is difficult to develop accurate and reliable long-term estimates of shuttle upgrades through 2020, particularly in light of the uncertainty of the shuttle’s basic requirements, such as its life expectancy. However, developing life-cycle cost estimates for agency programs is not a new issue in the federal government. The Office of Management and Budget maintains guidelines for preparing a cost-effectiveness analysis, including life-cycle cost estimates, applicable to all federal agencies within the executive branch. Cost estimates should include all costs consistent with agency policy guidance. NASA performs a cost and systems analysis to produce feasible concepts and explore a wide range of implementation options to meet its program objectives. To do this, NASA must develop the life cycle of the program to include the direct, indirect, recurring, nonrecurring, and other related costs for the design, development, production, operation, maintenance, support, and retirement of the program. Comprehensive life-cycle cost estimates include both the project cost estimate and the operations cost through the end of shuttle operations. NASA has not prepared a detailed total life-cycle cost estimate for upgrades through 2020 due to the uncertainty of the shuttle’s basic requirements, as well as the difficulty of preparing estimates of out-year funding to 2020. However, in June 2003, the agency estimated the cost of shuttle upgrades through 2020 using a rough order of magnitude estimate of $300 million-$500 million a year, or a total of $5 billion-$8 billion.
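As a back-of-envelope check, the annual and total figures quoted above are mutually consistent: assuming the $300 million-$500 million a year applies over roughly the 16 to 17 fiscal years from 2004 through 2020 (a span the report does not state explicitly), a flat run-out brackets the $5 billion-$8 billion total.

```python
# Sanity check of the rough order of magnitude estimate.
# Assumption: the annual figure runs flat over roughly 16-17 fiscal
# years (FY2004-FY2020); the report does not state the span explicitly.

LOW, HIGH = 0.3, 0.5  # $ billions per year

def total_range(years):
    """Total cost range in $ billions for a flat annual run-out."""
    return LOW * years, HIGH * years

for years in (16, 17):
    low, high = total_range(years)
    print(f"{years} years: ${low:.1f}B-${high:.1f}B")
```

A 16-year flat profile gives roughly $4.8 billion-$8.0 billion, in line with the quoted $5 billion-$8 billion; in practice spending would taper rather than stay flat, which is one reason the figure is only a rough order of magnitude.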
The $300 million-$500 million per year estimate projected for out-year funding was modeled using a simulation tool and developed by an independent consulting firm. According to a NASA official, they will rerun this estimate by the next SLEP Summit in February 2004, using as a basis whatever the recommended upgrade projects are at the time. We performed an analysis of the rough order of magnitude estimate completed by NASA for all upgrades through 2020. Based on the data, we found that the $300 million-$500 million range of estimated costs per year, and the methodology used to estimate the costs, appear to be reasonable. According to a NASA official, NASA’s cost estimates are focused on the annual budget process, rather than the long term through 2020, because any individual project takes a while to mature and near-year estimates, such as those from the current year through 2008, are likely to be more accurate than those from 2009 and beyond, which are more likely to change. NASA’s estimate is based on known projects for fiscal years 2004 and 2005, whose costs taper off in later years, and the assessment of an additional 20 projects through 2020, for which cost estimates and implementation plans are not certain. Although the rough order of magnitude estimate, as well as the methodology used to derive it, appears to be reasonable, the total cost could be billions more since potential upgrade projects such as a crew escape system are not included. Initially, Boeing released a list of safety and supportability options that included crew/cockpit escape concepts for the shuttle. Figure 3 illustrates the primary types of crew escape presently under consideration. The approximate costs for the eight present concepts range between $1 billion and $3.9 billion, depending on the one selected. There are three other ejection concepts under development, none of which has received a full assessment.
These other concepts, as well as previous metrics and costs, will be assessed in more depth at the next SLEP Summit in February 2004. (Appendix III contains information on all 11 concepts.)

Potential Program Changes Could Increase Total Upgrade Cost
A number of potential program changes could significantly increase the estimated cost of shuttle upgrades through 2020. For example, rough order of magnitude estimates do not account for possible slippages in the shuttle schedule. According to a NASA official, if NASA and/or Congress deem a crew escape option a major priority, more highly developed costs and schedules would be created. Also, slippage due to delays in hardware or software integration can affect projects where the final vehicle modifications are planned for the major maintenance periods. NASA has not yet made explicit decisions about the end state of the International Space Station. For example, if the useful life of the ISS were extended and/or an OSP were put into service to support the station as an alternative to the shuttle, the life-cycle costs of the shuttle could be affected. Until all requirements for the ISS have been fully defined, it will be difficult to determine a detailed cost of shuttle upgrades through 2020. Other potential program changes that would increase costs include a requirements change, such as additional lift capability that would require a new rocket booster. Any redesign option, if selected, would add billions to the total upgrade cost. For example, redesign and development of new liquid-fueled rocket boosters is estimated at a rough order of magnitude cost of $5 billion. Redesign and development of a five-segment solid booster would be a cheaper but less flexible option, at an estimated rough order of magnitude cost of $2 billion. Another major driver of increased costs would be implementing the recommendations of the CAIB.
Its numerous recommendations, such as major changes to the shuttle’s thermal protection system, could potentially increase costs. NASA officials have said the agency intends to implement all the recommendations the CAIB issued in its report, but precise costs have yet to be determined.

Conclusions
NASA is at a critical juncture in the life of the space shuttle. NASA had planned to upgrade the shuttle in the future. Now, after the Columbia tragedy, NASA is placing increased emphasis on flying the shuttle safely through 2020. NASA officials acknowledge that the loss of the Columbia will be a key influence on the selection and prioritization of shuttle upgrades as they assess both the short- and long-term implications of the CAIB recommendations. Although creation of the Space Shuttle Service Life Extension Program institutionalizes the process for identifying, selecting, and prioritizing upgrades, additional changes, such as increased use of analytic tools and metrics to complement professional judgment, are needed to further strengthen that process. NASA management has also not yet made explicit decisions about the basic requirements for key elements in its Integrated Space Transportation Plan—the ISS, the OSP, and the space shuttle. The agency’s lack of a long-term plan, caused by frequent changes in the expected life of the shuttle, has made it hard to fully define, select, and prioritize shuttle upgrade requirements, which form a basis for identifying needed upgrades. Such a long-term plan needs to be developed now, in conjunction with activities to return the shuttle safely to flight. In addition, accurate and reliable life-cycle cost estimates are important for determining the resources needed for the selection and prioritization of upgrades and for determining annual budget requests.
Even though an estimate of the total life-cycle cost has not been made, it is evident that the cost of upgrades through 2020 could be billions more than NASA’s current rough order of magnitude estimate if potential projects, such as a crew escape system and new projects resulting from the CAIB recommendations, are included. Unless improvements are made in NASA’s shuttle modernization efforts, NASA will not be able to ensure that upgrades address the most critical needs, articulate the extent to which safety has been enhanced, or determine the total cost of the program. Recommendations for Executive Action To strengthen the agency’s efforts to modernize the space shuttle, we recommend that the NASA Administrator take the following four actions: Fully define the requirements for all elements of the ISTP so that those responsible for identifying, selecting, and prioritizing shuttle upgrades will have the guidance and a sound basis to ensure their decisions on upgrade projects are completely integrated with all other elements of the transportation plan. In particular, the Administrator should determine, in conjunction with its international partners, the ultimate life and mission of the ISS in order to provide a sound basis for fully defining shuttle requirements. Develop and consistently apply a clear, measurable metric to show the relationship of upgrade investments to an increase in shuttle operational life and/or safety for the entire space shuttle system. NASA’s Quantitative Risk Assessment System could be a basis for such a metric since it is intended to measure the safety improvement of a single upgrade project. Continue to pursue development of analytic tools and metrics to help assure that SLEP program officials have accurate, reliable, and timely quantifiable information to complement their professional judgment.
Develop a total cost estimate for all upgrades through 2020 by updating the current rough order of magnitude estimate to include new projects resulting from the CAIB recommendations, estimates of project life-cycle costs, and estimates of major potential projects, such as a crew escape system, so that the resources needed to fund shuttle upgrades can be ascertained. Agency Comments In written comments on a draft of this report, NASA’s Deputy Administrator stated that the agency concurred with the first three recommendations. Furthermore, NASA concurred with the intent of the fourth recommendation concerning development of a cost estimate for all shuttle upgrades through 2020. However, the Deputy Administrator commented that there were major uncertainties that severely limit the agency’s ability to foresee budget requirements beyond 3 to 5 years, such as unanticipated technical problems and the required time to accurately assess upgrade projects. Consequently, NASA believes that it is better to size the long-term (5 to 15 years) anticipated budget run-out based on broad estimates rather than on specific lists of projects. We recognize that there can be many uncertainties in developing long-term budget estimates. However, NASA’s proposal of an anticipated budget run-out based on broad estimates is not a substitute for identifying the financial implications of identified needs. Specifically, in order for NASA to develop a credible Integrated Space Transportation Plan, the agency needs a more accurate and reliable long-term total cost estimate.
As we stated in our recommendation, establishing such an estimate could be facilitated by (1) applying life-cycle cost estimating techniques to the list of potential projects that NASA used to develop its cost estimate through 2020, (2) updating its list of potential upgrade projects to include recommended projects of the CAIB, and (3) including major potential upgrade projects currently under consideration, such as a crew escape system. The comprehensive nature of this cost estimate will enable (1) NASA to formulate a more definitive picture of how it will ensure that the shuttle fleet flies safely in the future and (2) decision makers to understand associated costs. Therefore, our recommendation remains unchanged. Scope and Methodology To assess NASA’s requirements and plans to upgrade the shuttle for continuous service through 2020, we obtained and reviewed internal documents and independent studies and discussed the requirements and plans with responsible NASA officials. To assess how NASA determined what upgrades were needed and how they were identified, selected, and prioritized, we obtained and analyzed schedules and documents from program officials and obtained an understanding of the process for identifying, selecting, and prioritizing shuttle upgrades. We also reviewed documents regarding analytic tools used to select and prioritize shuttle upgrades. To assess the estimated life-cycle cost of shuttle upgrades, we reviewed and discussed NASA’s guidance regarding preparation of life-cycle cost estimates with program officials. To assess the rough order of magnitude estimate for out-year funding completed by NASA for all upgrades through 2020, we obtained data and analyzed the estimate using a Monte Carlo simulation tool called @Risk, an Excel-based simulation add-in.
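The Monte Carlo technique embodied in tools such as @Risk can be sketched in a few lines of Python. In this sketch the project names, the (low, most likely, high) cost ranges in millions of dollars, and the choice of triangular distributions are hypothetical stand-ins, not NASA's actual model inputs.

```python
import random

# Each uncertain upgrade cost is drawn repeatedly from an assumed
# distribution; the sorted simulated totals then give a confidence
# interval around the point estimate. All figures are hypothetical.
PROJECTS = {                       # (low, most likely, high), $ millions
    "thermal_protection": (100, 250, 600),
    "crew_escape":        (500, 2000, 5000),
    "avionics":           (50, 120, 300),
}

def simulate_total_cost(trials=10_000, seed=1):
    random.seed(seed)
    totals = []
    for _ in range(trials):
        # random.triangular(low, high, mode) draws one cost per project
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in PROJECTS.values()))
    totals.sort()
    # Report the median and an 80 percent confidence interval
    return {"p10": totals[int(0.10 * trials)],
            "p50": totals[int(0.50 * trials)],
            "p90": totals[int(0.90 * trials)]}
```

The spread between the 10th and 90th percentiles, rather than a single number, is what such a simulation contributes to assessing the reliability of a rough order of magnitude estimate.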
Monte Carlo simulation helps to assess the risks and uncertainties associated with Microsoft Excel spreadsheet models by randomly generating values for uncertain variables over and over to simulate a model. We assessed this technique to determine the level of confidence around the estimates and verified our assessment with responsible program officials. To accomplish our work, we interviewed officials and analyzed documents at NASA Headquarters, Washington, D.C.; Johnson Space Center, Houston, Texas; and Kennedy Space Center, Florida. We also reviewed reports and interviewed representatives of NASA’s Office of the Inspector General, Washington, D.C., and NASA’s Independent Program Assessment Office, Langley Research Center, Hampton, Virginia. We conducted our work from April to October 2003 in accordance with generally accepted government auditing standards. Unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. In addition, the report will be available on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you or your staffs have any questions about this report. Major contributors to this report are listed in appendix IV. 
Appendix I: Comments from the National Aeronautics and Space Administration

Appendix II: Recommended Upgrade Projects Resulting from the Service Life Extension Program Summit The recommended projects and studies fall into two groups: customer-driven capabilities, including performance trade studies on lift, power, and stay time, and sustainability-related projects and studies.

Appendix III: Comparison of Crew Escape Concepts Under Consideration [Table comparing a capsule concept (10k – 210k) and an ejection seat concept (10k – 70k) on weight added, ballast, years to availability after authority to proceed (ATP), and months of orbiter major maintenance (OMM) required; the capsule entry shows 6,024 lb added, 2,700 lb of ballast, 4 years after ATP, and 12 months of OMM, while most other values were not yet available. Ascent coverage of 52 percent is based on an assessment of the 42A ejection seat; a full assessment was due by February 2004 for the Service Life Extension Program Summit.]

Appendix IV: Staff Acknowledgments Individuals making key contributions to this report included Jerry Herley, Thomas Hopp, T. J. Thomson, and Karen Richey.
The Columbia tragedy has accentuated the need to modernize the 20-year-old space shuttle, the only U.S. launch system that carries people to and from space. The shuttle will now be needed for another two decades. As it ages, the spacecraft's components will also age, and it may become increasingly unreliable. GAO examined the National Aeronautics and Space Administration's (NASA) plans to upgrade the shuttle through 2020, how it will identify and select what upgrades are needed, how much the upgrades may cost, and what factors will influence that cost over the system's lifetime. NASA cannot fully define shuttle upgrade requirements until it resolves questions over the shuttle's operational life and determines requirements for elements of its Integrated Space Transportation Plan (ISTP) such as the International Space Station (ISS). Prior efforts to upgrade the shuttle have been stymied because NASA could not develop a strategic investment plan or systematically define the spacecraft's requirements because of changes in its life expectancy and mission. NASA is trying to improve how it identifies, selects, and prioritizes shuttle upgrades. In March 2003, it institutionalized a Space Shuttle Service Life Extension Program (SLEP) to ensure safe and effective operations, along with a management plan documenting roles and responsibilities and an annual process for selecting upgrade projects and studies. In addition, NASA will try to improve shuttle safety by implementing the recommendations of the Columbia Accident Investigation Board (CAIB). NASA's estimate of the total cost to upgrade the shuttle--$300 million-$500 million a year, or a total of $5 billion-$8 billion through 2020--is reasonably based but could be significantly higher, as it does not include potential projects such as a crew escape system.
It will be difficult for NASA to make an accurate estimate until it firmly establishes the basic requirements (such as life expectancy) for the shuttle and the process for selecting shuttle upgrades. A number of potential changes could significantly increase the cost of shuttle upgrades, including responses to the recommendations of the CAIB.
Background PFCs are federally authorized fees which were established in 1990 to help pay for capital development at commercial service airports. PFCs are currently capped at $4.50 per flight segment with a maximum of two PFCs charged on a one-way trip or four PFCs on a round trip, for a maximum of $18 total. About $2.8 billion in PFCs was collected by airlines on behalf of airports in 2013. Certain categories of passengers and flights are exempt from paying PFCs. For example, passengers flying on frequent-flier award coupons are exempt from paying a PFC. The intent of the PFC program is to further airport development that (1) preserves or enhances airports’ safety, security, or capacity; (2) reduces noise generated by airport activities; or (3) enhances airline competition. PFCs give airports a source of funding for airport development over which they have greater local control because airlines have more limited say regarding how PFCs are used than they may have regarding the use of airport terminal rents or landing fees. This way, if an airport wants to build additional gates to attract new competition, an incumbent airline cannot block the project by refusing to fund it. PFCs can be applied to FAA approved eligible projects, and can be used as a match for AIP grants or to finance the debt on approved projects. Airports must apply to the FAA for authority to collect PFCs for use on approved projects, and if approved by FAA, airlines are required to collect PFCs and remit them to appropriate airport recipients. Each airport’s application must list specific eligible projects that PFCs will fund and the total amount to be collected. Once PFC applications are approved, airlines must add any approved PFC to the base fare (along with other federal taxes and fees) at the point of sale on the ticket by an airline, a travel agent, or Global Distribution Systems (GDS). Airlines must remit PFCs to airports on a monthly basis. 
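The per-trip PFC cap described above can be expressed as a short calculation. This is a minimal sketch of the rules as the report states them ($4.50 per enplanement, at most two PFCs on a one-way trip and four on a round trip, for an $18 maximum); exempt travel, such as a frequent-flier award ticket, is modeled as a simple flag, and the function name is hypothetical.

```python
PFC_RATE = 4.50  # maximum PFC per enplanement

def pfc_charged(enplanements, round_trip=False, exempt=False):
    """Total PFC for an itinerary, assuming every airport on the
    route collects at the maximum rate."""
    if exempt:          # e.g., frequent-flier award travel
        return 0.0
    cap = 4 if round_trip else 2   # per-trip cap on chargeable PFCs
    return PFC_RATE * min(enplanements, cap)
```

For example, a three-segment one-way trip is charged $9.00 (only the first two enplanements count), and any round trip tops out at $18.00 regardless of the number of connections.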
Airlines are able to keep the “float”—that is, interest accumulated on the fees between the time they are collected and remitted—as well as 11 cents per PFC collected for administration costs. Airlines that annually collect at least 50,000 PFCs are required to have annual independent audits of their PFC collections, and airports can request and receive the results of audits. FAA has the authority, though not an obligation, to review the audits. (See fig. 1). From 1990 through August 2014, FAA approved airports’ requests to collect a total of around $89 billion in PFCs. This amount includes future approved collections—with about a third of collecting airports approved to collect PFCs until at least 2024. Of the $89 billion, about 34 percent has been committed for “landside” projects such as terminals; 34 percent for interest on debt used to pay for projects either in development or completed; 18 percent for “airside” projects such as runways and taxiways; 7 percent for airport access such as roads and rail connecting to airports; 4 percent for noise reduction; and 4 percent for the construction of Denver International Airport. (See fig. 2). Most airports that are eligible to collect PFCs do so at the maximum rate of $4.50 per flight segment. As of October 1, 2014, according to FAA data, 358 out of 538 eligible airports were collecting PFCs, and 351 of the 390 approved airports chose to collect at the maximum rate. In all, 98 of the top 100 airports have been approved to collect PFCs, with approximately 90 percent of all PFCs (by amount) collected by large and medium hubs. Airports that impose a PFC may become ineligible to receive up to 50 percent (if collecting PFCs at the $1, $2, or $3 level) or 75 percent (if collecting PFCs at the $4 or $4.50 level) of the formula AIP grants that they would otherwise receive.
The vast majority of the funding reduction (87.5 percent) is then made available to smaller airports through AIP discretionary grants through the Small Airport Fund, with the remainder available to any airport under FAA’s AIP discretionary grant program. The President’s 2015 Budget proposes an increase of the PFC cap to $8.00, while the airport trade associations have proposed an increase in the PFC cap to $8.50, periodically adjusted for inflation thereafter. Some airports have advocated a complete lifting of any cap on PFCs, and while one airport trade association previously advocated alternative collection methods (outside the ticket transaction) as a way to increase the PFC cap, the association is no longer doing so. As part of the last FAA reauthorization process, legislation was introduced that would have allowed up to six airports to impose an unlimited PFC collected directly from passengers by the airport, if the fee were not collected on the ticket; however, this proposal was not part of the final Act. In addition to PFCs, there are federal taxes and fees that support aviation activity, including the 7.5 percent ticket tax and a $4.00 per-segment fee for domestic flights, and an arrival and departure tax of $17.50 per segment for international flights, which are deposited into the AATF, as well as security and customs and border protection taxes, among others, which are distributed to their respective agencies. All these taxes and fees are part of the ticket purchase transaction and together make up, on average, 13.7 percent of the total cost of a ticket, with PFCs representing about 2.9 percent of the total ticket cost. In Fiscal Year 2013, aviation taxes contributed almost $12.9 billion to the AATF, with roughly $11.7 billion (91 percent) from passenger-related taxes and the rest from fuel-based or cargo taxes.
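As a rough illustration of how the taxes and fees named above combine at the point of sale, the sketch below totals a hypothetical domestic ticket. The 7.5 percent ticket tax, $4.00 segment fee, and $4.50 PFC are from the report; the base fare, segment count, and omission of security and customs charges are simplifying assumptions.

```python
TICKET_TAX = 0.075   # 7.5 percent ad valorem tax on the base fare (AATF)
SEGMENT_FEE = 4.00   # per domestic flight segment (AATF)
PFC_RATE = 4.50      # per enplanement, subject to the per-trip cap

def domestic_ticket_total(base_fare, segments, pfcs_charged):
    """Hypothetical domestic ticket total; security and customs
    charges are omitted for brevity."""
    taxes_and_fees = (base_fare * TICKET_TAX
                      + SEGMENT_FEE * segments
                      + PFC_RATE * pfcs_charged)
    return round(base_fare + taxes_and_fees, 2)
```

On a $300 base fare with two segments and two PFCs, the total comes to $339.50, of which the PFCs are $9.00, broadly consistent with the report's observation that PFCs are a small share of the total ticket cost.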
Increasing the Cap on PFCs Would Significantly Increase Airport Funding but Could Also Have Other Effects Increasing the PFC Cap Would Significantly Increase Airport PFC Collections, but Key Assumptions Influence These Estimates To estimate the potential amount of funding available to airports, as well as associated effects on passenger demand and ticket tax revenues from increasing the PFC cap, we developed an economic demand model. The general approach of this analysis was to model airport collections and passenger traffic under various PFC cap levels. We modeled three different increases in the PFC cap amount, each starting in 2016. Those three scenarios are: PFC cap of $6.47 (which is the 2016 equivalent of $4.50 indexed to the Consumer Price Index (CPI) starting in 2000 when the cap was first instituted); PFC cap of $8 based on the President’s 2015 budget proposal; and PFC cap of $8.50 that would be indexed to inflation based on the airports’ trade associations’ legislative proposal. Assuming that the PFC increase is fully passed on to consumers and not absorbed through reduced base (before-tax) fares, the higher cost of air travel could reduce passenger demand according to economic principles. Economic principles and past experience dictate that any increase in the price of a ticket—even if very small—will have an effect on some consumers’ decisions on whether to take a trip or not. For example, an increase in the price by a few dollars may not affect the decision of a business flyer going for an important business meeting but could affect the decision of a family of four going on vacation. An increase in the price will also have different effects depending on the type of air travel, for example, on short-haul and long-haul flights, and the availability of substitutes such as driving or taking a train instead of flying.
Thus, whether people decide to fly depends on consumer sensitivity to changes in the cost of air travel, referred to as the “elasticity of demand”—the more elastic the demand, the more passenger air traffic is reduced by increases in price. For our base model analysis, we assumed a demand price elasticity of -0.8. In addition, to show the potential funding available to airports, we assumed that airports would adopt the maximum possible PFC cap at the start of 2016, but in reality, adoption of higher PFC levels would likely be a gradual process undertaken by individual airports according to their financial needs. Accordingly, model results in this report should be considered upper bound estimates of the funds available to airports that were approved to collect PFCs as of July 31, 2014. A full description of the model, data sources, and key assumptions appears in appendix II. Increasing the PFC cap under the three different scenarios that we modeled would significantly increase the potential amount of PFC collections in comparison to what could be available without an increase in the PFC cap. (See table 1). As with any modeling exercise, these projections depend on assumptions about participants’ behavior, in this case the behavior of consumers, airlines, and airports. The results presented above reflect three key assumptions about these behaviors. Elasticity of demand. There is uncertainty associated with demand analysis, because the estimated reductions in air travel are highly dependent on the assumptions about consumers’ sensitivities to changes in price. As noted above, to account for this uncertainty, we used an elasticity rate of -0.8, meaning that a 1 percent increase in price would result in a 0.8 percent reduction in the quantity of air travel. This rate is based on the assumption that a PFC increase will affect all routes across the nation and will affect them equally.
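The elasticity assumption above reduces to a one-line demand adjustment. The -0.8 elasticity is from the report; the traffic and trip-cost figures in the example are hypothetical, and the linear percentage form is a simplification of the demand curve used in formal modeling.

```python
def adjusted_traffic(passengers, avg_trip_cost, fee_increase,
                     elasticity=-0.8):
    """Passenger traffic after a fee increase is fully passed through:
    with elasticity -0.8, a 1 percent rise in the total cost of travel
    trims traffic by about 0.8 percent. Inputs are hypothetical."""
    pct_price_change = fee_increase / avg_trip_cost
    return passengers * (1 + elasticity * pct_price_change)
```

For instance, a $7.00 total fee increase on a $400 average trip is a 1.75 percent price rise, implying roughly a 1.4 percent reduction in traffic, which is why small cap increases produce only modest demand effects in the model.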
If PFC increases occur at fewer airports, demand would be more elastic because consumers could substitute their routing to some extent and the elasticity rate might be greater. As a result, we modeled three different elasticity rates drawn from economic literature to test the sensitivity of our results to these rates and found that for small price increases, small differences in the elasticity rate have very little impact. We discussed the selection of this elasticity rate with experts who have published on aviation economics, and they generally agreed with the selection. The model results from all three elasticity rates are shown in appendix II (table 5). PFC pass-through. We assumed that the entire PFC increase would be fully passed on to consumers and not absorbed by the airlines through downward adjustment of their base fares. Airline statements and experts with whom we spoke largely support our assumption that airlines would attempt to pass the PFC increase on to consumers. However, consumers’ response may vary from market to market and may not happen all at once, as airlines adjust capacity to respond to higher fares. For example, in the immediate period when airlines have fixed capacity, airlines may have to absorb all or some of this increase in order to maximize their revenues. In the following years, as airlines adjust their capacity, they may gradually pass on the PFC increase to passengers. In addition, funding airport projects through PFCs instead of through airline rates and charges could reduce airline costs in the long run. If such conditions occur, airlines may adjust their airfares downward so that an increase in the fee is not fully passed onto consumers. The more the airlines absorb, the less the increase in the cost of travel for passengers and the lower the adverse effect on passenger demand. We consider the effect of different pass-through rates in appendix II. Airport adoption.
We assumed that airports that currently impose a PFC would raise it to the maximum allowed amount in the first year. While it is unrealistic to assume that all airports would immediately raise their PFC level in the first year, based on near universal adoption of the current maximum by nearly all of the largest airports, it is not unrealistic to expect that most airports would be at the maximum by 2024. Following the introduction of the PFC in 1991 and the increase in the level in 2000, airports quickly moved to the higher PFC level as indicated in figure 3 below. If fewer airports increase their PFC level, PFC collections and the associated changes to the AATF would be proportionally reduced, and some consumers could avoid the PFC, making the consumer response more elastic, as noted above. The results of using a scenario with a reduced PFC adoption rate by airports are shown in appendix II (table 6). Increasing the PFC Cap Could Marginally Slow Growth in Revenues to the Airport and Airway Trust Fund Increasing the PFC cap under the three different scenarios that we modeled could marginally slow the growth of AATF revenues compared to what they could have been without the PFC increase. About 91 percent of AATF revenues in 2013 were derived from taxes and fees on passengers. Under all our cap scenarios, AATF revenues from passengers would likely continue to grow overall based on current projections of passenger growth; however, passenger growth could be slower with a PFC cap increase if it results in a higher total cost of air travel and thus reduces passenger demand. As a consequence of fewer anticipated passengers flying, the tax base on which these taxes are levied would be reduced compared to the tax base with no PFC increase.
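The revenue mechanism just described can be sketched as follows. The 7.5 percent ad valorem rate is from the report; the traffic, fare, and elasticity inputs are hypothetical, and the linear demand response is a simplification of the formal model.

```python
TICKET_TAX = 0.075  # 7.5 percent ad valorem tax on the base fare

def aatf_revenue(passengers, base_fare):
    return passengers * base_fare * TICKET_TAX

def revenue_change(passengers, base_fare, fee_increase,
                   pass_through=1.0, elasticity=-0.8):
    """Change in AATF ad valorem revenue from a fee increase, via two
    channels: the passed-through share trims passenger demand, while
    any absorbed share shrinks the taxable base fare."""
    passed = pass_through * fee_increase
    new_passengers = passengers * (1 + elasticity * passed / base_fare)
    new_fare = base_fare - (1 - pass_through) * fee_increase
    return (aatf_revenue(new_passengers, new_fare)
            - aatf_revenue(passengers, base_fare))
```

Either way the fee increase is split between consumers and airlines, the trust fund's ad valorem take falls relative to the no-increase baseline, which is the point the surrounding discussion makes.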
If the PFC increase is not passed on to consumers but absorbed by airlines through their adjustment of base fares downward, it would still reduce the trust fund’s revenues from the ad valorem tax that is levied as 7.5 percent on the base fare. Similarly, when airlines introduced ancillary fees for such services as checked baggage, there is some evidence that airlines adjusted their base fares downward to lessen the effect on passenger demand but not by as much as the amount of the fees. Because ancillary fees are not taxed, both reduced passenger demand and reduced base fares resulting from the introduction of fees would have reduced trust fund revenues. We did not include ancillary fees as part of our base fare calculation due to the lack of comprehensive ancillary fee data; including ancillary fees would result in higher air-travel costs, thereby making any PFC increase a smaller percentage of the total price and therefore resulting in a smaller loss of passenger demand. Under an $8 PFC cap and the entire PFC increase passed on to consumers, AATF revenues could be lower by $161 million to $186 million annually, as compared to what they could be without a PFC increase, assuming a demand elasticity of -0.8. This potential loss in AATF passenger revenues is small relative to total AATF passenger revenues—for example, between -0.58 and -1.68 percent of the total in 2024 depending on the size of the cap increase. The extent to which the AATF is affected will depend on the extent of the reduction in passenger traffic (elasticity assumption) as well as the extent to which the increase is passed on to consumers under each scenario (pass-through rate). (See table 2.) A PFC Cap Increase Could Benefit Airports, but the Effects Differ Depending on Their Size Because passenger traffic is highly concentrated at larger airports, that is, large and medium hub airports, PFC collections are similarly concentrated.
Thus, larger airports could benefit most from an increase in the PFC. A hub level analysis of a PFC cap increase shows that large hub airports could receive nearly three-quarters of all PFCs, while large and medium hubs together could account for nearly 90 percent of total PFCs, similar to what they do now. For example, under an $8 PFC, large hub airports could receive additional PFC revenues of $1.74 to $2.08 billion annually and medium hubs could receive additional PFC revenues of $372 to $435 million annually from 2016 to 2024. Small and non-hub airports could receive up to $212 million and $82 million in additional annual PFC revenues respectively from 2016 to 2024. (See table 3.) While an increase in PFCs could largely flow to the larger airports, smaller airports could also benefit from increased PFC collections, especially under the President’s proposed budget for 2015. As previously noted, under current rules, large and medium hubs’ apportionment of AIP formula funds may be reduced, which in fiscal year 2014, resulted in a redistribution of approximately $553 million. The majority of this funding (87.5 percent) goes to the Small Airport Fund for redistribution among small airports. The remaining 12.5 percent became available as AIP discretionary funds, which FAA uses to award grants to eligible projects regardless of airport size. Under the President’s 2015 budget proposal, all AIP formula grants for large hub airports, which FAA estimates to be $80 million in fiscal year 2015, would be eliminated in return for an $8 PFC. In addition, the President’s 2015 budget proposal calls for a decrease in the total amount of AIP funds, a decrease that under current law would result in automatic changes in how AIP grants are allocated. Increasing PFCs also could affect the dynamics of how airports and airlines can influence airport investment decisions. Airports rely on several funding mechanisms in order to pay for airport development projects. 
These include PFCs, non-aeronautical revenues (e.g., parking and concession revenue), AIP grants, rates and charges agreements with airlines, and state and local funds. Generally, PFCs offer airports relative independence over investment decisions at their airports. While airports must notify and consult with the airlines on how they spend PFCs, as long as FAA approves, airlines cannot block these decisions. Airlines can choose to serve other airports, however, so airports have an incentive to listen to airline concerns. Airport representatives said that one of the reasons airports want an increased PFC cap is that airports have already committed a significant portion of their current PFCs to past and current projects and have relatively fewer PFC-approved funds available with the $4.50 cap in place. According to FAA, $30 billion in PFCs was approved from 1992 to September 2014 to pay interest on debt, with some airports scheduled to service debt until as late as 2058. Some airports have indicated that an increased PFC would allow them to reduce their debt costs, which could limit revenues available to those airports to secure new debt financing. Conversely, airline representatives told us that in their view, airports have many sources of revenue available and ready access to debt markets, so there is no need to increase the PFC cap. All else being equal, lower PFCs can provide airlines with more influence over airport infrastructure decisions and higher PFCs can provide airports more control over local capital-funding decisions, including the ability to decide how to apply PFC revenues to support capital projects and thus how those revenues might influence airline rates and charges.
Stakeholders Reported That the Current PFC Collection Method Works Well but Lacks Some Transparency for Airports Stakeholders Said That the Current PFC Collection Method Works Well In order to evaluate the current PFC collection method, we used the following factors that we identified as key considerations for evaluating passenger fee collection methods in our February 2013 report: passenger experience, costs to administer, legal issues, customer transparency, and technology readiness. Passenger Experience Industry experts and representatives from airports, airlines, trade associations, and consumer groups universally said that the current method of PFC collection has the least impact on passenger experience, because the PFC is paid as part of the total ticket price and at the time of purchase. Airlines and travel agencies use computerized reservation networks that facilitate payments for fares and required taxes and fees (including the PFC) as part of one transaction. Passengers therefore do not need to determine which taxes and fees they must pay in accordance with their itinerary, as this is done automatically through the ticketing process. In addition, passengers are only required to pay one time, a method that saves passengers time, provides transparency, and reduces confusion. Including taxes and fees as part of the ticket purchase is also the standard globally for collecting government and airport fees, such as the PFC. Costs to Administer Both airport and airline representatives that we spoke with agreed that the administrative and infrastructure costs of the current collection method system are relatively low, as the method is integrated into existing infrastructure and business processes. 
As we mentioned previously, airlines currently keep 11 cents per PFC to cover their costs—which include costs for transactions such as credit card fees, legal and audit fees, and maintenance and upgrades of information systems—as well as the “float” (interest accumulated on the fees between the time they are collected and remitted). Airline representatives told us that they do not regularly track their administrative costs associated with collecting PFCs and therefore could not immediately say whether the administrative fee covers these costs. The administrative fee was last raised from 8 to 11 cents per PFC in 2004. Legal Issues The statute that authorizes the PFC program provides an exemption to the Anti-Head Tax Act, which generally prohibits states, local governments, and airport authorities from levying or collecting any tax, fee, head charge, or other charge, directly or indirectly on individuals traveling by air. The statute authorizing PFCs also authorizes the Secretary of Transportation to require airlines to collect the fee and remit it to airports. Beyond these requirements placed on airlines, we did not identify any legal issues associated with the current collection method as part of this work. Customer Transparency Representatives from consumer groups that we spoke with said that the current collection system provides transparency to the customer in terms of total travel costs. Current DOT policy requires that fares be advertised with PFCs and other taxes and fees, and included at the time of purchase. However, one airline representative with whom we spoke told us that there could be greater transparency for customers in terms of other factors, such as how fees are used for airport projects.
Some airports provide information about their PFC-funded projects through their websites, signage at the airport, and community outreach, and all airports are required to distribute a notice locally to the public, with general information about PFC projects, amounts, and timing, in advance of submitting an application to impose or use PFCs. FAA does not publish information on specific PFC-funded projects at airports on its website but does provide aggregated information for the entire PFC program on PFC approval amounts and project categories, such as landside, airside, and noise reduction, and subcategories. According to FAA, airports’ PFC applications and the FAA’s decisions are public documents that airports may release to the public. In addition, the FAA provides information on applications and decisions upon request if they are not under deliberation. FAA does not require airports to track each fee paid to a specific project at an airport, only to an approved application which may be for many projects. Thus, a passenger may not have readily accessible information about the use and intended purpose of their fee payment at the time of payment but could obtain some additional information if desired. Technology Readiness The current collection method has been in place since the inception of the PFC program in 1992 and relies on widely used and accepted ticketing technologies for both online and in-person transactions. Technology company representatives whom we interviewed generally indicated that PFC collection is not constrained by current technology. However, implementing new fee rules could be problematic. For example, according to media reports, instituting the TSA security fee increase in July 2014, which uses the same ticketing technologies as PFCs, resulted in inaccurate collections while the programming code was being updated. According to an airline industry representative, that problem was subsequently fixed. 
The Accuracy of PFC Collections Is Not Transparent to Airports

Airport officials with whom we spoke generally told us that the PFC collection process by airlines is not adequately transparent to them, and therefore, they cannot be sure they are receiving all of the PFC collections they are due. While airports receive monthly remittances, quarterly reports, and in some cases, annual audit reports from airlines, airport officials told us it can be very difficult for airports to ensure the accuracy of the remittances because they cannot be reconciled to passenger enplanements at the airport. Passengers flying on frequent-flyer coupons, as well as on flight segments beyond the first two, Essential Air Service flights, and some Alaska and Hawaii flights, are exempt from paying PFCs. In addition, airlines and airports have different fee-collection and remittance systems, and airline code shares mean that the airline collecting and remitting PFCs may not be the airline transporting the passenger. Furthermore, airport officials told us that the timing of collections and remittances can hinder their efforts to track and verify the accuracy of PFC remittances. Airlines receive PFCs with ticket payments, while airports receive remittances on a monthly basis. Passengers, however, may fly on a later date, well outside the monthly window. To help ensure that airports receive the full amount of the collections they are due, FAA requires that all airlines that annually collect at least 50,000 PFCs have an annual independent audit of their PFC accounts and processes. Airports can request a copy of the independent auditor's report, but airlines are not required to provide audit reports absent a request. In addition, FAA may periodically audit or review the collection and remittance of airline PFC collections under its federal oversight responsibility. To assist airlines, FAA has developed audit guidance for airlines' auditors to follow in conducting their audits.
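The remittance arithmetic an airport would try to reconcile can be sketched in a few lines, using the $4.50 cap and the 11-cent airline administrative fee cited earlier in this report. The function name and structure below are our illustration, not an FAA or airline system:

```python
# Sketch of the monthly remittance an airport would expect from an
# airline, using figures cited in this report: the airline keeps an
# 11-cent administrative fee from each PFC collected (since 2004) and
# remits the remainder. Real reconciliation is harder because of
# exemptions, code shares, and collection/flight timing mismatches.
AIRLINE_ADMIN_FEE = 0.11  # dollars kept per PFC collected

def monthly_remittance(pfcs_collected: int, pfc_level: float = 4.50) -> float:
    """Amount an airline remits to the airport for one month's collections."""
    return round(pfcs_collected * (pfc_level - AIRLINE_ADMIN_FEE), 2)

# 100,000 PFCs collected at the $4.50 cap: gross of $450,000, of which
# the airline keeps $11,000 and the airport receives $439,000.
```

The gap between this simple expectation and actual remittances is what airports cannot observe directly, which is why they depend on the airline audits discussed below.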
This guidance is comprehensive and includes testing procedures to ensure that airline systems are properly recording PFC collections. While adherence to the guidance is voluntary, FAA has determined that using the guidance will provide sufficient assurance that the airline has met its PFC regulatory requirements and that additional reports, a government audit, or other investigations will not normally be needed. FAA's guidance expressly underscores the importance of this assurance, which is reflected in FAA's approach to resolving alleged collection and remittance discrepancies raised by airports. In cases where the airlines' auditors did not use the guidance, any allegation of a discrepancy by airports could trigger additional FAA activities, including additional reporting or an audit by the Department of Transportation's Office of Inspector General. FAA officials told us they do not know to what extent airlines' auditors use the audit guidance and only review the audit reports if questions are raised by airports about possible discrepancies. FAA officials also told us that they generally do not receive airline audits and do not know how many airlines' auditors follow the audit guidelines. FAA officials also do not know how many airports are receiving the audit reports but explained that disputes over the accuracy of collections have been rare and have generally been limited to collections by smaller airlines or those in bankruptcy. However, as noted above, it would be very difficult for an airport to know if its PFC remittances were not accurate, and in some cases, airports are not receiving audit reports and may not be aware that they can be requested. Moreover, although airports have the right to review audits, our interviews with a limited number of airport officials raise questions about the extent to which airports are aware of their rights to review the audits.
Three of the five airport managers whom we interviewed told us that they have received unsolicited copies of audits in the past, whereas two other airport managers had not received copies. Absent a request, there is no requirement for airlines to give airports or FAA the audits, even if there is a qualified or adverse audit opinion. FAA officials told us that while airports' rights to review the audits are set forth in FAA guidance that is available to all airports, they could consider additional steps to ensure that all airports understand their right to request copies of the airlines' audits as well as FAA's reliance on airports to identify discrepancies. Doing so would be consistent with Standards for Internal Control in the Federal Government, which call for agencies to ensure that there are effective means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency's achieving its goals. Given that FAA relies on airports to alert it to potential inaccuracies in PFC collections and that airports have difficulty determining the accuracy of PFC collections for the reasons discussed earlier in this report, it is important that airports are aware of their right to request copies of airline PFC audit reports and to ask for additional follow-up by FAA, such as an audit by the Department of Transportation's Office of Inspector General, if the audits or other information indicate discrepancies. By taking actions to better educate airports about the importance of obtaining and reviewing airline PFC audits, such as through notifications or postings on FAA's website, FAA would better position airports to understand their rights, including the potential for requesting further investigations, as needed. Thus, both FAA and airports could be better informed about the accuracy of PFC remittances.
Standards for Internal Control in the Federal Government call for agencies to design their internal controls to assure that ongoing monitoring occurs in the course of normal operations. However, as previously discussed, FAA does not know the extent to which airlines use its audit guidance and does not generally review the airlines' audit reports. Thus, FAA is not well positioned to provide reasonable assurance to Congress, the airports, or the airline passengers who pay the PFCs about the reliability of those audits or the PFCs collected. Determining the extent to which airlines' independent auditors use FAA's guidance could provide FAA with additional assurance about the reliability of those audits. Moreover, if the guidance is not being extensively used, then taking additional actions to assess the soundness of existing airline audits and the associated costs of airlines following the guidance would better position FAA to determine if it should make its guidance mandatory. Similar to PFCs, TSA imposes a security fee on passengers, currently $5.60 per one-way trip on each ticket, that is collected by the airlines; however, unlike PFCs, security fee revenues are remitted directly to one entity, the TSA. TSA conducts direct audits of its fee collections, through which it has found remittance discrepancies. This suggests that, without adequate assurance that airlines are following FAA's audit guidance, some PFCs may not be collected or, if collected, not accurately remitted to airports. TSA has a compliance office that performs its own on-site audits of approximately 20 airlines annually. TSA officials stated that they regularly identify additional funds that should have been collected and remitted to TSA, though these unremitted funds are relatively small when compared to overall collections.
According to TSA officials, the agency identified and collected $2 million in unremitted funds in fiscal year 2013 through its audits, compared with its $2 billion in annual fee collections. TSA's audit findings have been upheld in court when challenged by an airline. For example, a TSA audit of Alaska Airlines found that the airline owed an additional $1 million in security fee remittances for flights between 2002 and 2006, a finding that Alaska Airlines unsuccessfully challenged. TSA officials stated that the agency used to require that all airlines that collect the security fee from at least 50,000 passengers provide an annual audit to TSA. However, this audit requirement was waived on January 23, 2003, because, according to the Federal Register announcement, TSA had initiated its own audits of air carriers and air carriers had demonstrated a high level of compliance with TSA's collection and remittance rules; TSA thus found it unnecessary for air carriers to expend resources on independent audits.

Alternative Methods of PFC Collection Are Feasible but Would Impose Additional Steps and Costs

Stakeholders Identified Three Alternatives to the Current Ticket-Based PFC Collection but Said They Could Diminish Passenger Experience

Stakeholders we interviewed identified three general alternatives to the current method of PFC collection, alternatives that could be used in combination or independently.

Kiosks/Counter Payments

An alternative collection method that has been used at a few airports internationally is the use of a self-service kiosk or payment counter to pay airport fees. Departing passengers pay the fee at the airport using a kiosk or payment counter as part of the check-in process. Connecting passengers could pay the fee at a facility within the terminal between departure gates. Payment could be verified prior to departure at check-in, security, or the boarding gate. We identified few airports around the world that currently use this method.
Those that do include Blackpool Airport in the United Kingdom, which required passengers to purchase an airport-development fee ticket at a kiosk or retail outlet at the airport. In addition, Ireland West Airport Knock in the Republic of Ireland requires passengers to pay a development fee, which can be paid at a dedicated desk at the airport. Both are relatively small regional airports, and Blackpool Airport closed on October 15, 2014. Other airports have instituted kiosks and payment counters but later abandoned the method in favor of imposing the fee on the ticket at the time of purchase. For example, Vancouver International Airport, Calgary International Airport, and Montréal–Pierre Elliott Trudeau International Airport (all in Canada) initially used payment counters to collect airport-improvement fund fees from passengers following airport privatization in Canada in the 1990s. However, the payment counter approach was abandoned, and the fee was added back onto the ticket after payments at the airport became cumbersome and inconvenient for passengers, according to a Canadian airport trade association representative. There is some evidence, however, that in-airport kiosks and payment counters can work. Airlines use self-service kiosks and counters for their check-in processing and ancillary fee purchases, such as for checked baggage. Some airports, such as McCarran International Airport in Las Vegas, have implemented common-use self-service kiosks at which passengers can check in to any airline that operates at the airport and make ancillary fee purchases. Such kiosks could also be configured to collect PFCs.

Online Payments

Another alternative collection method, which the Airports Council International-North America (ACI-NA), an airport trade association, identified more than 10 years ago, is online payment, in which a passenger would pay the PFC through a dedicated website at the time of ticket purchase or at some point before check-in.
Individual airports or a group of airports would directly operate, or contract with a third-party provider to manage, a website to collect required fees directly from the passenger, who would pay via credit card or debit card. Passengers could also be automatically directed to the website to pay PFCs after paying for a ticket online, a process that would require airline and travel agent cooperation. Passengers could go directly to the website at any time before check-in to pay the fee. In all these cases, airports would have to establish a clearinghouse that would collect and distribute PFCs or contract with a third party to perform that function. Payments could be verified at the airport at a check-in counter, security checkpoint, or the boarding gate. We did not identify any airports currently using this method, but clearinghouses collect and distribute other aviation taxes and fees on tickets purchased online through GDSs.

Mobile Payments

Another alternative collection method, identified by technology company representatives, is mobile payments. Passengers would pay the PFC at the airport using a mobile technology, such as a smartphone or tablet, or a credit, debit, or prepaid card, with payment functionality embedded or added through an application. Departing passengers could scan their mobile device or card at kiosks, payment counters, or other payment stations. Connecting passengers could also use this method to pay at kiosks or payment counters and stations as they move through the airport to their next departure gate. Like the other alternative collection methods, airports could individually or as a group develop and implement information systems and infrastructure to collect and distribute PFCs on their own or contract with a third party. Airports could also use an existing clearinghouse, such as those used by airlines, which could collect and distribute PFCs to airports.
We did not identify any airports using this method, but technology company representatives told us that mobile payments are being used in other sectors, such as retail. In addition, technology company representatives with whom we spoke said that airport kiosks used for check-in could be modified or configured to accept additional forms of payment, including near-field communication (NFC)-enabled devices, chip-and-PIN or magnetic stripe cards, and mobile wallets. Many airlines also have mobile applications for check-in and boarding processes, which could be modified to transmit payment of PFCs. Airlines also use handheld devices to collect ancillary fee payments for additional services like carry-on luggage and in-flight meals and beverages. NFC payment through mobile phones has been implemented by MasterCard, mobile phone providers such as Verizon, and retailers such as Office Max® and Toys "R" Us®. Some transit systems have also begun to pilot NFC payments for passenger travel, such as the Metropolitan Transit Authority in New York City and the Washington Metropolitan Area Transit Authority in Washington, D.C.

Though the Technology Exists, Alternatives Would Impose Additional Steps and Costs

We evaluated these alternative methods relative to the current ticket-based PFC collection method using the same factors that we identified as key considerations for evaluating alternative passenger-fee collection methods: passenger experience, costs to administer, legal issues, customer transparency, and technology readiness. Stakeholders, including airports and airlines and their respective domestic and international associations, and industry experts that we interviewed said that the current collection method is better than the identified alternatives. Stakeholders told us that the technology to support alternative PFC collection methods is ready to be implemented, though doing so would require additional steps and costs and changes to business processes.
Passenger Experience

All three alternative collection methods introduce additional steps into the ticketing and boarding process, which could potentially diminish the passenger experience. Payment at kiosks or payment counters introduces an additional requirement at check-in, which could increase check-in time for passengers. Technology company representatives told us that it can take between 2 and 4 minutes for a passenger to interact with a standard airport kiosk. Additionally, a technology company representative told us that only about 50 percent of eligible passengers at one large airport use check-in kiosks, and unfamiliar passengers may need additional time or assistance to complete transactions. Connecting passengers could be required to pay the fee between flights, a step that could lead to missed connections or flight delays. Online payments introduce an additional step to online ticket purchases and potentially additional costs. Customers who are not aware of the required PFC purchase could be confused by, or suspicious of, additional websites. Technology company representatives suggested that additional steps for online payment may cause consumers to abandon their purchase. Required mobile payments could present challenges for customers who do not use enabling devices. While 91 percent of individuals in the United States currently use mobile phones, only 50 percent of cell phone owners download applications, according to a 2013 nationwide survey. This would require the airport to create enforcement and backup collection systems to ensure that it is collecting all required PFCs. Airport and airline trade association representatives with whom we spoke reported that the industry is focused on reducing customer check-in times and expressed concern that PFC payments at airports could delay these efforts.
The International Air Transport Association is developing international standards for mobile check-in to streamline the passenger experience, with a goal of moving the passenger from curb to gate in 10 minutes. Airlines are also increasing the use of mobile applications and automatic check-in. For example, Air New Zealand terminals in Auckland, Wellington, and Christchurch in New Zealand allow passengers to drop off their checked baggage and proceed directly to security and then to the gate, where passengers can also scan their boarding pass for domestic flights. JetBlue has introduced automatic check-in processes for select passengers, whose boarding passes are emailed 24 hours before the flight, and Air France introduced a pilot of an NFC-enabled boarding pass and check-in process in Toulouse, France.

Costs to Administer

Airports would incur greater administrative and infrastructure costs if they implemented an alternative PFC collection method. A technology company representative told us that electronic payment kiosks can cost from $10,000, for a computer screen and magnetic credit card reader, to $60,000, for a payment kiosk that incorporates additional methods of fee collection and higher-end design standards and elements. Technology company representatives also told us that electronic kiosks require network connections and infrastructure in order to send payments to banks through payment networks. Kiosks would require additional terminal space, increasing the need for terminal modifications at a time when pre-departure areas of terminals are shrinking. However, as we discuss later, existing airport kiosks could be reconfigured to allow for PFC collections. Technology company representatives told us that online payments would require website development and information service infrastructure and that all methods could require additional staff to verify collections and provide oversight of payments.
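To give a sense of scale, the per-kiosk interaction times and unit costs cited above can be combined into a rough back-of-envelope estimate. The peak-hour passenger volume below is a hypothetical input of ours, not a figure from this report, and the calculation ignores queuing effects and staffing:

```python
# Back-of-envelope sketch combining figures cited in this report:
# 2-4 minutes per kiosk interaction and $10,000-$60,000 per kiosk.
# The peak-hour volume is hypothetical and illustrative only.
import math

def kiosks_needed(peak_hour_passengers: int, minutes_per_use: float = 3.0) -> int:
    """Kiosks required to serve a peak hour of kiosk-paying passengers."""
    uses_per_kiosk_per_hour = 60 / minutes_per_use
    return math.ceil(peak_hour_passengers / uses_per_kiosk_per_hour)

def capital_cost(num_kiosks: int, unit_cost: float = 60000.0) -> float:
    """Up-front hardware cost at an assumed per-unit price."""
    return num_kiosks * unit_cost

# Hypothetically, 1,000 peak-hour passengers at 3 minutes each would
# need 50 kiosks; at the high-end $60,000 unit cost, $3.0 million.
```

Even this simplified sketch illustrates why stakeholders flagged terminal space and capital cost as obstacles to kiosk-based PFC collection.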
An airport representative expressed concern that in order to collect PFCs from all eligible passengers when using alternative collection methods, airport operators would need to establish new systems. For example, at an airport that establishes a mobile payment system, customers who do not own NFC-enabled mobile phones would need to pay using a credit card or other means. Passengers who could not pay using a credit card would require a cash transaction. This could increase financial security risk and the associated costs of securing and accounting for cash transactions. Mobile payments present additional difficulties, as NFC standards have not been created. The two dominant forms of mobile phones, Subscriber Identity Module and Global System for Mobile Communications, have different readers, and a kiosk or mobile payment station utilizing NFC-based payment would require two separate scanners. In addition, an industry survey has shown that only 12 percent of mobile phone owners in the United States have utilized their phones as payment devices, and some stakeholders we interviewed cited lack of awareness, difficulty and unfamiliarity of use, as well as security and privacy concerns, as barriers to mobile payment adoption. Alternative collection methods would thus require additional steps and costs and changes to business processes. (See fig. 4.)

Legal Issues

All alternative methods would require legal modifications to enable airports to collect the PFC directly. As discussed above, the Anti-Head Tax Act prohibits local and state governments and airport authorities from collecting user fees or taxes on travelers. The Anti-Head Tax Act was enacted in response to significant public concern about, and objection to, local and state governments that imposed a tax on enplaning or departing passengers. Any alternative collection method implemented by an airport would require an exemption to the Anti-Head Tax Act or express statutory authority in order to collect fees.
Furthermore, current DOT regulations require airlines to disclose the total price of airfare, including all taxes and fees, and would need to be revised if airports directly collected the fee from passengers. Also, airlines cannot be required to publicly disclose proprietary business information, including individual airfare transactions and passenger itineraries, which airports would need in order to determine whether a particular passenger is required to pay a PFC and to ensure that the total PFC imposed does not exceed the statutory maximum (currently $18).

Customer Transparency

All alternative collection methods could decrease transparency to the customer because individuals may not be aware of the need to pay the PFC until after the ticket has been purchased. In addition, since payment of the PFC would not be verified until check-in or departure, passengers may not be prepared to pay an unexpected fee. In this way, customers may not know the full cost of travel at the time of ticket purchase, which raises questions about transparency. An industry expert and representatives from consumer groups that we spoke to noted the importance of informing customers of all mandatory fees and taxes at the time of ticket purchase to ensure that customers are aware of the full cost of their travel. Similarly, we have recommended that the Department of Transportation (DOT) require airlines to consistently disclose optional fees at the time of purchase, and DOT has issued a rulemaking that proposes to require airlines and ticket agents to disclose optional fees at the time of purchase. (See GAO, Commercial Aviation: Consumers Could Benefit from Better Information about Airline-Imposed Fees and Refundability of Government-Imposed Taxes and Fees, GAO-10-785 (Washington, D.C.: July 14, 2010).)
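The eligibility rules referenced above (PFCs are charged only on the first two segments of a one-way trip, and the total imposed on an itinerary may not exceed the $18 statutory maximum) can be sketched as a short function. This is our illustration of the rules as described in this report, not an airline or GDS implementation; real eligibility also depends on the exemptions discussed earlier (frequent-flyer awards, Essential Air Service, and certain Alaska and Hawaii flights):

```python
# Sketch of PFC applicability rules described in this report:
# charge only the first two enplanements of each one-way trip, and cap
# the itinerary total at the $18 statutory maximum (4 x the $4.50 cap).
# Itinerary format (a list of one-way trips, each a list of per-airport
# PFC levels in dollars) is an illustrative choice of ours.
STATUTORY_MAX = 18.00
CHARGEABLE_SEGMENTS_PER_ONE_WAY = 2

def itinerary_pfc(one_way_trips: list) -> float:
    """Total PFC owed for an itinerary, given each trip's PFC levels."""
    total = 0.0
    for segment_fees in one_way_trips:
        total += sum(segment_fees[:CHARGEABLE_SEGMENTS_PER_ONE_WAY])
    return min(round(total, 2), STATUTORY_MAX)

# A round trip with one connection each way, all airports at the $4.50
# cap, has 4 chargeable enplanements and hits the $18 maximum; a
# one-way with two connections pays only the first two PFCs ($9.00).
```

Any airport-operated collection system would need the itinerary data feeding such a calculation, which is exactly the proprietary information airlines cannot be required to disclose.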
Technology Readiness

Stakeholders such as technology company representatives told us that all the alternative collection methods discussed above are feasible, have been implemented for other applications by airports or retailers, and could be adapted for use in the airport environment. For example, kiosks could be adapted to collect PFCs. Technology company representatives we spoke to said that existing common-use self-service and airline kiosks could be modified, if not already enabled, to have a magnetic stripe card reader and an NFC reader. Technology company representatives also stated that airlines have websites and mobile applications for passenger ticketing, check-in, and ancillary fee payments that could automatically link a passenger to an airport or third-party website to pay the fee, as well as handheld devices used to accept ancillary fee payments that could also be used at the gate to collect PFCs. However, some means of verifying payment would still be needed before boarding the flight. Retailers in the United States have accepted online payments for decades and have begun to integrate mobile payments into their business practices. Some merchants have established "tap and pay" NFC terminals alongside traditional magnetic stripe readers, allowing customers to use credit cards as well as NFC-enabled mobile devices.

Conclusions

As part of any consideration of an increase in the PFC cap, it is paramount that FAA and airports have confidence that airlines are accurately collecting and remitting existing PFCs. Ensuring the accuracy of PFC collections and remittances to airports depends on audits conducted by airlines' auditors and on oversight by FAA and airports to identify possible inaccuracies. However, while FAA has promulgated comprehensive audit guidance for airlines' auditors to use, the guidance is voluntary, and FAA does not know to what extent airlines' auditors use it, if at all.
Thus, FAA is not well positioned to provide reasonable assurance to Congress, airports, or the passengers who pay PFCs of the reliability of those audits and the PFCs collected. Further, some airports may not be aware that they can request and review airline audits and ask for an investigation if they suspect PFC remittances are inaccurate. As a result, FAA does not have sufficient assurance that PFC collections and remittances to airports meet its own regulatory requirements.

Recommendations for Executive Action

To ensure the accuracy of Passenger Facility Charge collections and remittances to airports, we recommend that the Secretary of Transportation require FAA to take the following two actions:

Review the extent to which airlines' auditors use FAA's audit guidance and, if use is found to be minimal, evaluate whether airlines' auditors should be required to use the guidance by considering the soundness of existing airline audits and the associated costs to airlines of following the guidance.

Better educate airports that collect PFCs, such as through notifications or FAA's website, about airports' rights to review airline audits and to ask for additional investigation if the audits reveal issues or inaccuracies are suspected.

Agency Comments and Our Evaluation

We provided a draft of this report to DOT, ACI-NA, AAAE, and Airlines for America (A4A) for their review and comment. In an email received on November 24, 2014, the Deputy Director of Audit Relations at DOT provided us with the Department's comments. Specifically, in response to our recommendations, DOT partly concurred with the first recommendation, to review the extent to which airlines' auditors use FAA's audit guidance. DOT noted that responses by the airlines will be voluntary, as FAA's PFC oversight authority may not be sufficient to compel responses.
However, based on the responses FAA does receive, if airlines' auditors' usage is found to be minimal, FAA stated that it will evaluate whether the auditors should be required to use the guidance pursuant to regulation or policy. GAO believes that this will fully address the intent of our recommendation. DOT fully concurred with the second recommendation, to better educate airports about their rights to review airline audits, and noted that it planned to better educate airports by including a notification on its website. GAO believes that this will fully address the intent of our recommendation. DOT also provided technical comments that we incorporated as appropriate. In an email received on November 19, 2014, an Executive Vice President at ACI-NA provided us with the association's comments, principally noting that the model estimations of future collections under various PFC caps could be misconstrued by some readers to be the actual amounts that airports will be collecting rather than the PFC-funding capacity of airports. ACI-NA also noted its belief that -0.65 is a more appropriate elasticity rate than the -0.8 that we used in our base model. We disagree for two reasons. First, our report clearly notes that, depending on the assumptions applied, the model could provide different results and indicates that the base model reflects the funding capacity of airports under each cap scenario and not the likely outcome. Second, we believe -0.8 is a more appropriate elasticity rate based on our review of the economic literature on air traffic demand elasticity rates and discussions with experts who have published on aviation economics. Nonetheless, we also modeled -0.65 and found very little difference in the model results, as demonstrated in appendix II. ACI-NA and AAAE provided technical comments that we incorporated as appropriate. A4A reviewed the draft and did not have any comments.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report were to examine (1) the potential effects of raising the PFC cap on airport and federal aviation revenues, (2) how well the current PFC collection process works, and (3) what is known about alternative PFC collection methods and how well they might work. To assess the potential impacts of increasing the PFC cap, we developed an economic demand model, including a series of scenarios that vary the amount of the cap and various assumptions. The development of this model is discussed in detail in appendix II. To determine how well the current PFC collection process works and to examine alternative fee collection methods, we updated GAO's report Alternative Methods for Collecting Airport Passenger Facility Charges, issued in February 2013 in response to a congressional mandate. In summary, we identified three basic alternatives to the current airline-ticket-based method of PFC collection. These methods, which are not mutually exclusive and could be used by either individual airports or a group of airports, are kiosk/payment counter, online payment, and mobile payments.
We evaluated these alternative methods relative to the current ticket-based collection method using factors that we identified as key considerations for evaluating alternative passenger-fee collection methods: passenger experience, customer transparency, administrative costs, technology readiness, and legal effects. For this study, we conducted additional work by interviewing 17 aviation stakeholders, and we interviewed or collected responses from officials representing airports and airlines that we had interviewed for our February 2013 report to obtain any additional views on the current collection method. We selected the airlines based on airline size, measured by the number of departures and passengers, and type of carrier (legacy, low cost, and regional). We selected the airports based on airport size, amount of PFC charged, and percentage of originating versus connecting passenger traffic. Our interviews with these airlines and airports provided qualitative information that is not generalizable to all airlines and airports. Given that work, we examined issues regarding the verification of airline PFC collection and remittance amounts with airlines and airports and their trade associations and with consumer groups. We reviewed FAA's Passenger Facility Charges Audit Guide for Air Carriers to identify audit requirements and recommended internal-control audit procedures for airline collection, handling, remittance, and reporting of PFCs. We reviewed an airline's independently conducted audit of PFC collections for an airport. We reviewed applicable statutes and regulations regarding FAA's role and authority to audit airline PFC collections and remittances and discussed with the agency its efforts to revise FAA Order 5500.1, which provides guidance and procedures for FAA's airports offices to administer the PFC program.
We also interviewed Transportation Security Administration officials to discuss the agency's procedures and processes for audits of its security fee collections. For this study, we also reviewed the literature on changes that have occurred since February 2013 that could support alternative collection methods. We spoke with technology company representatives, including those companies that have implemented kiosks for passenger check-in and customs and border protection processing at airports, to obtain their views on the applicability of using kiosks to collect payments at airports. We interviewed officials from technology companies that develop emerging technology systems and devices to obtain their views on the applicability of using online and mobile payment systems to collect payments. We interviewed FAA, principal airport and airline trade associations, and airline-passenger consumer representatives, and interviewed or collected responses to our follow-up questions from five airports and four airlines to obtain their views on the use of alternative methods to collect PFCs. As we did for our February 2013 report, we evaluated these alternatives against the current ticket-based collection method using the same five factors: passenger experience, customer transparency, administrative costs, technology readiness, and legal effects. For a list of interviewees and airports and airlines from which we collected responses to our follow-up questions, see table 4. Appendix II: Economic Demand Model The model described in this appendix is designed to estimate the potential impact of increases in the PFC cap on funds available for airport investment and federal aviation revenues between 2016 and 2024.
This model presents results for three PFC-cap scenarios in addition to a baseline scenario representing no change in the current PFC cap. The first scenario is a $6.47 cap, which is the 2016 equivalent of the $4.50 cap indexed to the Consumer Price Index (CPI) starting in 2000, when the cap was first instituted. The second scenario is taken from the President's budget proposal for 2015, which sets the cap at $8. A hub-level analysis of this scenario was also conducted to illustrate the distributional effects of the PFC increase. The third scenario is the airport trade associations' proposal of an $8.50 cap, adjusted annually for inflation using the CPI. The following sections describe (1) the model's structure and data sources, (2) key assumptions, and (3) the sensitivity analysis. Model's Structure and Data Sources The general approach of the model was to use passenger enplanement forecasts from 2016 through 2024 to project changes in PFC revenue under the four scenarios outlined above for the 362 airports that had approval to collect PFCs as of July 31, 2014. Passenger enplanement data for these airports were taken from FAA's Terminal Area Forecast (TAF) enplanement projections. Enplanements were separated into international enplanements (i.e., enplanements originating in the U.S. with a foreign destination) and domestic enplanements (i.e., enplanements originating in the U.S. with a U.S. destination). We used projections from FAA Aerospace forecasts, which indicate that international enplanements will gradually rise from 12 percent of total enplanements in 2014 to 14 percent in 2024. The remaining enplanements were considered to be domestic. Because of several exemptions to PFC collection, including for segments beyond the first two and for nonfare (e.g., frequent flyer) passengers, a PFC is not collected for every enplanement at airports that charge one.
Thus, we reduced international enplanements by 4 percent and domestic enplanements by 10 percent for each year in order to estimate the total number of chargeable enplanements. The 4 percent exemption rate for international enplanements is based on the percentage of passengers using frequent flyer miles to purchase international tickets. The 10 percent exemption rate for domestic enplanements is a 5-year average calculated using data from 369 airports that collected PFCs between 2009 and 2013. To calculate the domestic exemption rate, PFC revenues from international enplanements (estimated assuming the 4 percent international exemption rate) were first subtracted from total PFC collections to get domestic PFC collections. These domestic PFC collections were then divided by the average PFC level for that calendar year in order to estimate how many enplanements were charged a PFC. This estimate was compared to the total enplanements from the TAF data, and the gap between the two was considered to be the number of domestic TAF enplanements that were not charged a PFC (10 percent in aggregate). The model assumes that all increases in the PFC are passed on to consumers rather than absorbed by the airlines. Passing these increases on to consumers raises their air travel costs. Our model thus takes into account the effects of raising the total cost of air travel on passenger demand and the resulting secondary impacts on PFC and trust fund revenues. Generally, an increase in the cost of air travel will lead some passengers to make other travel arrangements or not to travel at all. To model this potential decrease in passenger demand, we calculated the increase in the PFC level as a percentage of both international roundtrip and domestic average gross fares per enplanement.
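The domestic exemption-rate derivation described above can be sketched as a short calculation. This is an illustrative sketch only; the function name and the figures in the example are hypothetical, not the airport data underlying the report.

```python
# Illustrative sketch of the domestic exemption-rate derivation described
# above. All names and input figures are hypothetical, not report data.

INTL_EXEMPT_RATE = 0.04  # share of international enplanements not charged a PFC


def domestic_exemption_rate(total_pfc_revenue, intl_enplanements,
                            avg_pfc, domestic_taf_enplanements):
    """Estimate the share of domestic TAF enplanements not charged a PFC."""
    # PFC revenue attributed to international enplanements, net of the
    # 4 percent frequent-flyer exemption.
    intl_revenue = intl_enplanements * (1 - INTL_EXEMPT_RATE) * avg_pfc
    # The remaining collections are assumed to come from domestic enplanements.
    domestic_revenue = total_pfc_revenue - intl_revenue
    # Implied number of domestic enplanements that actually paid a PFC.
    charged_domestic = domestic_revenue / avg_pfc
    # The gap between TAF enplanements and charged enplanements is the
    # exempt share (about 10 percent in aggregate, per the report).
    return 1 - charged_domestic / domestic_taf_enplanements
```

With placeholder inputs of $4,482 in total collections, 100 international and 1,000 domestic TAF enplanements, and an average PFC of $4.50, the function returns a 10 percent domestic exemption rate.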
Data on average gross fares were collected and summarized from Department of Transportation Origin and Destination survey data for average annual fares from calendar year 2013. Fares were adjusted for inflation annually from 2014 onward using the CPI. To calculate the fare per enplanement, we divided the domestic fare by the average number of flights per ticket (1.37), which is derived from U.S. DOT's non-directional data from calendar year 2013. International trips are assumed to have only one flight per ticket because other domestic flights that may be part of the ticket are captured in the domestic category. We also use roundtrip fares for international enplanements, as that is the relevant cost of travel to which PFCs should be compared, since incoming international flights are not charged a PFC. The increase in the PFC was added to the domestic ticket price per enplanement and the international roundtrip ticket price to calculate the percentage change resulting from the change in the PFC cap. To translate the increase in ticket price into an impact on passenger demand, an elasticity rate was applied. The elasticity rate is the ratio of the percentage change in quantity demanded to the percentage change in price. Air travel elasticity thus shows the percentage change in trips demanded by customers as a result of a percentage change in airfare. Applying the elasticity rate provides an estimate of the reduction in passenger demand for enplanements due to the increase in price per enplanement, which is used to calculate net chargeable enplanements. The net enplanements are then used to estimate PFC revenues. PFC revenues are estimated by multiplying these demand-adjusted enplanements by the maximum allowable PFC under each scenario, less an 11-cent administrative fee kept by the airlines. PFC revenues thus reflect the collections that the airports would expect to receive if all 362 airports adopted the maximum rate starting in 2016.
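A minimal sketch of the demand adjustment and revenue calculation described above: the -0.8 elasticity rate and 11-cent administrative fee come from the report, but using the fare per enplanement as the base for the percentage price change, and all figures in the example, are assumptions for illustration.

```python
# Sketch only; the function name and example figures are hypothetical.
ELASTICITY = -0.8  # base-case elasticity rate used in the model
ADMIN_FEE = 0.11   # per-PFC administrative fee kept by the airlines


def demand_adjusted_pfc_revenue(enplanements, fare_per_enplanement,
                                old_pfc, new_pfc):
    """Sketch of PFC revenue under a higher cap, net of demand effects."""
    # PFC increase expressed as a percentage of the fare per enplanement.
    pct_price_change = (new_pfc - old_pfc) / fare_per_enplanement
    # Elasticity translates the price change into a change in demand.
    pct_demand_change = ELASTICITY * pct_price_change
    net_enplanements = enplanements * (1 + pct_demand_change)
    # Airports receive the PFC less the airline administrative fee.
    return net_enplanements * (new_pfc - ADMIN_FEE)
```

For example, raising the cap from $4.50 to $8.00 against a $350 fare per enplanement is a 1 percent price increase, which at a -0.8 elasticity reduces 1,000 chargeable enplanements by 0.8 percent, to 992.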
It is important to note that adoption of the maximum rate is likely to be a gradual process, and thus actual collections are likely to be lower than these estimates, especially in earlier years. The reduction in enplanements due to higher ticket costs also affects trust fund revenues, as it reduces the passenger tax base that contributes to the trust fund. Our model results show the projected change in trust fund revenues from passengers under the various cap scenarios relative to the baseline. Negative estimated changes in trust fund revenue would likely represent a marginal slowing of growth in trust fund revenues from passengers rather than an absolute decline. The impact on the trust fund is calculated by multiplying the change in domestic enplanements due to demand effects by the $4 segment tax and the change in international enplanements by the $17.50 international arrival and departure tax. We also calculate the loss from the ad valorem tax based on the smaller number of trips taken as a result of the higher PFC. Key Assumptions Elasticity of Demand As indicated above, elasticity rates are a measure of the demand response of passengers to changes in price, and thus they can have an impact on passenger demand projections. The higher the demand elasticity, the more sensitive demand is to a change in price, and hence the greater the reduction in enplanements due to a PFC increase. The elasticity rate we chose for our base model analysis was -0.8, which was drawn from a 2007 study conducted by InterVISTAS Consulting for IATA and is based on a universal price increase at a national level. We also examined elasticity rates of -0.65 and -1.122 to see how they affected our results. The -0.65 elasticity is drawn from a November 2014 study of demand elasticity also conducted by InterVISTAS Consulting Inc. for ACI-NA. The -1.122 elasticity comes from a study completed by D.W. Gillen et al. in 2003.
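The trust fund calculation described earlier in this appendix, combining the segment tax, the international arrival and departure tax, and the ad valorem loss on forgone trips, can be sketched as follows. The tax rates come from the report; the function and the example figures are hypothetical.

```python
# Sketch only; tax rates are from the report, everything else is illustrative.
SEGMENT_TAX = 4.00   # domestic segment tax per enplanement
INTL_TAX = 17.50     # international arrival and departure tax
AD_VALOREM = 0.075   # 7.5 percent excise tax on domestic fares


def trust_fund_change(lost_domestic_enpl, lost_intl_enpl, avg_domestic_fare):
    """Estimated change in trust fund revenue from forgone trips (sketch)."""
    segment_loss = lost_domestic_enpl * SEGMENT_TAX
    intl_loss = lost_intl_enpl * INTL_TAX
    # Ad valorem tax no longer collected on fares for trips not taken.
    ad_valorem_loss = lost_domestic_enpl * avg_domestic_fare * AD_VALOREM
    # A negative value represents a marginal slowing of growth relative
    # to the baseline, not an absolute decline in trust fund revenue.
    return -(segment_loss + intl_loss + ad_valorem_loss)
```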
PFC Pass-Through Airlines and their industry's trade association generally oppose PFC increases at a national policy level (see Edward Huang and Adib Kanafani, Taxing for Takeoff: Estimating Airport Tax Incidence through Natural Experiments (January 2010)). However, if funding airport projects through PFCs instead of through airline rates and charges would reduce airline costs, then airlines would be more able, and more likely, to absorb some of the PFC increase by lowering fares instead of passing it on to consumers. The more the airlines absorb, the smaller the increase in travel costs and the lower the adverse effect on passenger demand. However, under any PFC increase and pass-through scenario, trust fund revenues from passengers will be reduced relative to the baseline, because even if airlines lower fares enough to absorb the entire PFC increase, the lower fares will result in less revenue from the 7.5 percent excise tax on fares. Airport Adoption We assumed that airports that were approved to impose a PFC as of July 31, 2014, would raise their PFC to the maximum allowed amount in the first year and that airports that do not currently have approval to collect a PFC would not obtain approval to impose one. Interviews with FAA and airport representatives indicate that the number of airports charging PFCs is not expected to change significantly in the future. While it is unrealistic to assume that all airports currently collecting a PFC would immediately raise their PFC in the first year, the near-universal adoption of the current maximum by the largest airports makes it reasonable to expect that airports would be at or near the maximum by 2024. Following the introduction of the PFC in 1991 and the increase in 2000, airports quickly moved to the higher PFC level, as indicated in figure 3 in the report.
However, the extent to which airports continually have projects that fall under the PFC-eligibility criteria and gain FAA approval will also influence the adoption of higher PFCs by airports over time. Small airports in particular may not have as many PFC-eligible projects to justify moving to a higher PFC. If a significant number of airports that currently collect a PFC do not move to the maximum under a new cap, passengers would have more alternatives because they could avoid paying the higher PFC by substituting a nearby airport that does not charge the higher rate. This would result in a higher overall rate of demand elasticity. Thus the final effect would depend on the specific pattern of airports that do or do not adopt a higher PFC. Sensitivity Analysis To test the sensitivity of the results to changes in the key assumptions about elasticity and pass-through, changes to PFC and trust fund revenue from passengers were modeled using a -0.65 and a -1.122 elasticity rate and a 50 percent pass-through rate. The results are presented below in table 5. Under these alternative elasticity scenarios and the $8 cap, estimated changes to PFC revenues vary by less than 1.5 percent from the standard scenario estimated using a -0.8 elasticity. Similarly, under a scenario that uses an $8 PFC cap, an elasticity rate of -0.8, and a 50 percent pass-through rate, estimated changes to PFC revenues varied by less than 2 percent relative to the standard scenario. Changes in trust fund revenues from passengers showed greater sensitivity to changes in the elasticity rate and pass-through in percentage terms, as these are the only variables in the calculations of these changes. To test the sensitivity of our results to key assumptions about airport adoption, we developed an alternative adoption scenario based on airport adoption behavior after the previous increase in the PFC cap in 2000.
For the results in table 6, we assume that 50 percent of airports charge the maximum rate of $8 from 2016 to 2018, 75 percent from 2019 to 2021, and 90 percent from 2022 to 2024. The results show that additional revenue from the increase in the cap varies proportionally with the percentage of airports that adopt the higher cap. The impact on trust fund revenues from passengers is lower relative to the standard scenario because fewer passengers are affected by the PFC cap increase if fewer airports adopt it. Appendix III: GAO Contacts and Staff Acknowledgments GAO Contact Gerald L. Dillingham, Ph.D., 202-512-2834, or [email protected]. Staff Acknowledgments In addition to the contact named above, the following individuals made important contributions to this report: Paul Aussendorf, Assistant Director; Namita Bhatia Sabharwal; Benjamin Emmel; Bert Japikse; Delwen Jones; Maureen Luna-Long; Josh Ormond; Madhav Panwar; and Reed Van Beveren.
About $2.8 billion in Passenger Facility Charges (PFCs) were collected in 2013. PFCs are federally authorized fees paid by passengers at the time of ticket purchase to help pay for capital development at commercial service airports and have been capped at $4.50 per flight segment since 2000. Airports are seeking an increase in the PFC cap to $8.50. Airlines, which collect PFCs at the time of purchase and remit the fees to airports, oppose an increase because it could reduce passenger demand. Some airports have suggested that alternative PFC collection methods could allow the PFC cap to be raised without adversely affecting demand. GAO was asked to examine these issues. This report discusses (1) the potential effects of PFC cap increases, (2) how well the current PFC collection process works, and (3) alternative PFC collection methods. GAO developed a model to assess the potential effects of PFC cap increases on funds for airport investment and the aviation system. GAO interviewed 26 stakeholders, including airports and airlines representing a range of sizes, as well as consumer groups, to discuss PFC collection methods. Increasing the Passenger Facility Charges (PFC) cap would significantly increase PFC collections available to airports under the three scenarios GAO modeled but could also marginally slow passenger growth and therefore the growth in revenues to the Airport and Airway Trust Fund (AATF). GAO modeled the potential economic effects of increased PFC caps for fiscal years 2016 through 2024 as shown in the table below. Under all three scenarios, AATF revenues, which totaled $12.9 billion in 2013 and fund Federal Aviation Administration (FAA) activities, would likely continue to grow overall based on current projections of passenger growth; however, the modeled cap increases could reduce total AATF revenues by roughly 1 percent because of reduced passenger demand.
These projected effects depend on key assumptions regarding consumers' sensitivity to a PFC cap increase, whether airlines would pass on the full increase to consumers, and the rate at which airports would adopt the increased PFC cap. Stakeholders said that the current PFC collection method generally works well, but airport officials said that transparency over PFC collections could be enhanced. Stakeholders universally said that the current method is preferred because the PFC is paid at the time of purchase. Airlines are required to have audits of their PFC collections, and FAA provides audit guidance to help provide assurance that collections are accurate. However, the guidance is voluntary and FAA does not know if airlines' auditors use it. FAA relies on airports to alert it to discrepancies, but some airports may not be aware they can review audits. FAA could take additional steps beyond what is stated in the guidance to inform airports about their rights, and thus provide reasonable assurance to Congress, airports, and airline passengers about the reliability of those audits and PFCs remitted to airports. Stakeholders GAO interviewed generally said that alternative methods to collect PFCs, such as airport kiosks or online or mobile payments, are technologically feasible but would impose additional steps for passengers, costs for airports, and changes in business processes. Therefore, stakeholders said that the current collection method is better than the identified alternatives.
Background Individuals report their rental real estate activities on their tax returns, including for the rental of residential, vacation, and commercial properties. Individuals own and manage a large share of residential rental properties in the United States. According to a study by the Department of Housing and Urban Development and the U.S. Census Bureau (Census), individuals owned an estimated 83 percent of the 15.7 million rental housing properties with fewer than 50 units in 2001 (with the remainder owned by partnerships or other entities). Individuals owned 13 percent of the estimated 71,000 rental properties with 50 units or more. Likewise, according to a Census study of rental property management characteristics for 1995, an estimated 67 percent of rental housing properties with fewer than 50 units were managed by their owners as opposed to management companies or another type of manager. Owners managed an estimated 5 percent of rental properties with 50 units or more. According to IRS data, the estimated number of individual taxpayers who reported rental real estate activity for properties they owned directly was 8.7 million in 2001 and 9.1 million in 2005. Individual taxpayers generally must report as income any rent they receive from the use or occupation of real estate on Part I of Schedule E, which they attach to the individual tax return—Form 1040. The amount of income taxpayers must report includes rent payments and other amounts, such as kept security deposits or the fair market value of services taxpayers receive from tenants in lieu of rent. Taxpayers ordinarily are allowed to deduct the expenses of renting property from their rental income on Part I of Schedule E. However, the costs of property improvements that add to the value of a property or extend its useful life, such as a bathroom addition or new built-in appliances, must be depreciated, meaning that taxpayers must deduct such costs on their tax returns over multiple years.
Likewise, taxpayers must depreciate the cost of acquiring a rental property. The amount of depreciation that a taxpayer can deduct for both property improvements and the cost of rental property depends on the taxpayer's basis in the property, among other factors. A taxpayer's basis in a rental property is generally the cost of the property when it was acquired, excluding the cost of land, which is not depreciable (in practice, taxpayers must determine what portion of the cost of their properties is attributed to land versus actual structures in order to determine their depreciable basis). If individual taxpayers use their properties for both rental and personal purposes in a given tax year, the expenses they can deduct may be limited. Personal use of a property includes use by the taxpayer or any other person who has an interest in the property, or use of the property by a family member of either, even if the property is rented at a fair rental price. Personal use also includes use by nonowners and non-family members if the rental is at less than a fair rental price. However, in general, renting property to a family member or another person is not considered to be personal use if the property is rented at a fair rental price and is used by the renter as his or her principal residence. Taxpayers who use their properties for both rental and personal purposes, but whose personal use is not enough for the property to be considered a residence, must allocate their expenses between rental and personal use based on the number of days used for each purpose. To assist in filing their tax returns, individual taxpayers are expected, and in some cases required, to keep records, including those for rent received and expenses. Additionally, taxpayers must keep records to substantiate items on their tax returns in case IRS has questions about the items.
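The basis and depreciation rules described above can be illustrated with a simple straight-line calculation. The 27.5-year recovery period shown here is the conventional figure for residential rental property and is an assumption for illustration, not a figure taken from this report.

```python
def annual_depreciation(purchase_price, land_value, recovery_years=27.5):
    """Straight-line depreciation on the depreciable basis of a rental property.

    Land is excluded from the basis because it is not depreciable. The
    27.5-year default is an assumed recovery period for residential rental
    property, not a figure from the report.
    """
    depreciable_basis = purchase_price - land_value
    return depreciable_basis / recovery_years

# For example, a $300,000 property with $80,000 attributed to land has a
# depreciable basis of $220,000.
```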
Taxpayers who, upon IRS examination, cannot produce evidence to support items they reported on their tax returns may be subject to additional taxes and penalties. For example, taxpayers who cannot substantiate their rental real estate expenses with appropriate records may have their expenses disallowed, resulting in additional taxes owed. Information reporting provides taxpayers, as well as IRS, with some records of rent received and expenses from rental real estate. For example, when an individual taxpayer receives rent of $600 or more through a rental agent, such as a rental management company, the agent is required to report the amount of rent received to the taxpayer and IRS on a Form 1099-MISC. Payers of rent payments of $600 or more made in the course of a trade or business, such as rent for office space, are also required to report those payments on Form 1099-MISC. Likewise, financial institutions are required to report to taxpayers and IRS on Form 1098 the amount of interest taxpayers paid on mortgages they held on their rental properties. A taxpayer whose rental real estate activity is a trade or business is required to report service payments of $600 or more on Form 1099-MISC. However, according to IRS, whether a taxpayer's rental real estate activity is considered a trade or business is determined on a facts and circumstances basis. Generally, taxpayers currently do not have to file Form 1099-MISC for payments made to corporations. IRS relies on both enforcement and taxpayer service programs to ensure compliance by taxpayers with rental real estate activity. Two enforcement programs IRS uses to ensure compliance are the Automated Underreporter program (AUR) and examinations. Through AUR, IRS matches information that taxpayers report on Schedule E for rent received and mortgage interest to amounts that third parties report for these items on Forms 1099-MISC and 1098, respectively.
When mismatches arise between amounts on tax returns and Forms 1099-MISC and 1098, IRS may send notices asking taxpayers to explain the discrepancies or pay additional taxes. Examinations may address any type of misreporting and come in three forms. Correspondence examinations are conducted through the mail and usually cover a narrow issue or two. Office examinations are also limited in scope but involve taxpayers going to an IRS office. For field examinations, IRS sends a revenue agent to a taxpayer’s home or business to examine the misreporting that IRS suspects it has identified. During examinations, IRS uses information from third parties, taxpayers, and external sources, such as public records. Through its taxpayer service programs, IRS provides publications, forms, and instructions to help taxpayers understand and comply with their rental real estate reporting requirements. IRS also disseminates relevant information to tax professionals, such as tax return preparer associations, and business organizations. For example, in July 2007, IRS released a fact sheet on the requirements for reporting rental real estate activity. In addition to publishing the fact sheet on its Web site, IRS disseminated the information to the media and a wide network of tax professional and small business organizations. IRS also provides assistance to taxpayers through its toll-free telephone service where taxpayers can call and speak directly with IRS staff about their tax issues. IRS periodically measures taxpayer compliance and the tax gap that results from misreporting, including those for individual taxpayers with rental real estate activity. The portion of IRS’s 2001 tax gap estimate caused by individual underreporting is based on NRP. Through NRP, IRS conducted a review and examination of a representative sample of about 46,000 individual tax returns from tax year 2001. 
IRS generalized from the NRP sample results to compute estimates of underreporting of income and taxes for all individual tax returns. Because even the detailed NRP reviews could not detect all misreporting, IRS adjusted the NRP results to account for undetected misreporting when estimating the tax gap, as will be discussed in the next section of this report. About Half of Individual Taxpayers with Rental Real Estate Activities Misreported, Often Because of Overstated or Unsubstantiated Expenses Based on the unadjusted NRP results, at least an estimated 53 percent of taxpayers with rental real estate activity (about 4.8 million out of 8.9 million taxpayers) misreported their rental real estate activities for tax year 2001. Individual taxpayers misreported their rental real estate activities more frequently than some other types of income for tax year 2001. For example, we previously reported that an estimated 10 percent, 17 percent, and 22 percent of individual taxpayers with wage and salary, dividend, and interest income, respectively, misreported their income from these sources. This disparity in compliance undermines the fairness of the tax system, because when some taxpayers fail to pay the amount of taxes they should pay under the law, the burden of funding the nation’s commitments falls more heavily on compliant taxpayers. Individual taxpayers misreported an estimated $12.4 billion of net income from rental real estate, before adjusting for tax gap purposes. The unadjusted NRP results understate the amount of net misreported income from rental real estate, as they represent only what IRS detected through NRP examinations. IRS knows that it does not detect all misreporting during its examinations. 
As such, it uses various methodologies and other sources of data to adjust the aggregate NRP results for tax gap purposes to estimate net misreporting for categories of income or activities, such as rental real estate and royalties (total rental real estate income or loss is reported on the same line of Schedule E). After these adjustments, IRS estimated that the tax gap for rental real estate and royalty activities was $13 billion for tax year 2001. Misreported rental real estate likely accounted for most of the $13 billion because taxpayers misreported an estimated $23.7 million of net income from royalties compared to the $12.4 billion of net income from rental real estate that taxpayers misreported. Of the 4.8 million taxpayers who misreported their rental real estate activities, an estimated 75 percent underreported their net income from rental real estate (by either understating rent received or overstating expenses or loss). For these taxpayers, a relatively small number underreported $10,000 or more in net income from rental real estate, but these taxpayers accounted for a large amount of misreported net income, as shown in table 1. Conversely, nearly one-third of underreporting taxpayers (about 1.2 million out of about 3.6 million underreporting taxpayers) underreported less than $1,000. By comparison, an estimated 25 percent of taxpayers who misreported rental real estate activities overreported their net income from rental real estate (by overstating rent received or understating expenses or loss), although the amounts they misreported were relatively small. For example, the estimated median amount of net income that misreporting taxpayers overreported was $518, about one-quarter of the median amount of net income of $1,981 that misreporting taxpayers underreported. 
Some taxpayers who misreported did so in a way that may not have affected the amount of income tax they owed but may have affected the amount of employment tax they owed, as discussed later in this report. In terms of income levels, the distribution of taxpayers who misreported rental real estate activity did not vary greatly from the income levels for all taxpayers who reported rental real estate activity for tax year 2001, as shown in table 2. However, taxpayers who reported—and misreported—rental real estate activity were generally of higher income levels than all individual taxpayers. Most of the misreporting, in dollar terms, was attributed to taxpayers with more than $50,000 of adjusted gross income. Misreporting of Rental Real Estate Expenses Was the Most Common Type of Misreporting We Found That IRS Detected during NRP Examinations As shown in table 3, misreporting of rental real estate expenses was the most common type of misreporting that we found through our file review that IRS detected through the NRP examinations of taxpayers with rental real estate activity. The figures in table 3 do not include additional misreporting that IRS assumes to have taken place, which it takes into account when estimating the tax gap. Following table 3, we discuss the specific types of misreporting that we found IRS to have detected through NRP. The most common reason taxpayers misreported their rental real estate expenses was that they lacked documentation to substantiate some of the expenses they deducted, as shown in table 4. Of the taxpayers who did not substantiate expenses, some may have incurred the expenses they could not document, while other taxpayers may simply have made up expenses. Generally, we could not discern from our case file review why taxpayers did not substantiate reported expenses. We found two scenarios for taxpayers who did not report or fully report all allowable expenses.
During the course of IRS’s examinations, some taxpayers discovered additional expenses for properties that they reported on their tax returns. For other taxpayers, unreported expenses were related to properties for which they received rent that they did not report on their tax returns—and that IRS subsequently identified. For these properties, IRS required the taxpayers to report the rent that they received and allowed them to deduct related expenses that they could document. We could not estimate the frequency of these scenarios because we could not always determine whether the unreported expenses were related to unreported rent. Taxpayers misreported depreciation expenses in a variety of ways. We estimate that about 166,000 taxpayers included the value of their land within the depreciable basis of their properties. Other types of misreported depreciation included taxpayers deducting depreciation for properties that they had already fully depreciated or miscalculating depreciation by using an incorrect length of time (useful life) over which to depreciate property. Additional ways in which taxpayers misreported rental real estate expenses included the following: Deducting expenses in full that should have been depreciated. For example, taxpayers deducted expenses related to improving a property or deducted the full expense of buying an item, such as a washing machine, instead of depreciating these costs. Improperly deducting personal expenses. Some taxpayers deducted expenses that were completely personal in nature while others made errors in how they divided expenses that were for both personal and rental real estate purposes. Other types of misreporting of expenses. These included taxpayers deducting unallowable expenses, such as penalties or interest related to real estate taxes, or making mathematical errors on their tax returns. Misreported Rent Received We identified three scenarios for taxpayers who misreported rent received. 
- Taxpayers who did not report any rental activity.
- Taxpayers who reported receiving rent for some properties but not for others.
- Taxpayers who reported receiving rent for all of their properties but reported incorrect rent amounts. Some examples of misreporting rent in this manner included taxpayers failing to count certain items as rent, such as expenses paid by tenants or kept security deposits, or having inadequate records.

We could not estimate the frequency of these scenarios because we could not always determine the exact nature of the misreported rent.

Reported Activity on an Incorrect Part of the Individual Tax Return

This type of misreporting includes taxpayers who reported income or expenses as rental real estate activity on Part I of Schedule E that they should have reported elsewhere on their tax returns and taxpayers who reported an activity elsewhere on their tax returns that they should have reported on Part I of Schedule E. For example, some taxpayers reported business activities as rental real estate activities on Part I of Schedule E that IRS determined should have been reported on the schedule to the individual tax return for profit and loss from business (Schedule C). We also found taxpayers who reported income or expenses on Schedule C that should have been reported as rental real estate activity on Part I of Schedule E. This type of misreporting may not have affected the calculation of the amount of income tax owed. However, reporting activities on the wrong schedule could have affected the amount of self-employment tax these taxpayers owed, as net income from a trade or business is subject to self-employment tax whereas net income from rental real estate reported on Schedule E generally is not. IRS estimated that underreported self-employment tax accounted for $39 billion of the 2001 tax gap.
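The effect of reporting on the wrong schedule can be sketched in a short calculation. The 15.3 percent combined rate and the 92.35 percent net-earnings factor reflect the general self-employment tax rules, but this sketch deliberately ignores the Social Security wage-base cap and the deduction for one-half of self-employment tax, so the figures are illustrative only, not a complete computation.

```python
# Why misclassifying income between Schedule C and Schedule E matters:
# net earnings from a trade or business (Schedule C) are subject to
# self-employment tax; net rental income on Schedule E generally is not.
# Simplified: ignores the wage-base cap and the SE-tax deduction.

SE_TAX_RATE = 0.153           # combined Social Security + Medicare rate
NET_EARNINGS_FACTOR = 0.9235  # portion of net profit subject to SE tax

def self_employment_tax(net_profit: float) -> float:
    """Simplified SE tax on Schedule C net profit."""
    if net_profit <= 0:
        return 0.0
    return round(net_profit * NET_EARNINGS_FACTOR * SE_TAX_RATE, 2)

# A taxpayer with $10,000 of net income from the same activity:
as_schedule_c = self_employment_tax(10_000)  # business: SE tax applies
as_schedule_e = 0.0                          # rental: generally no SE tax

print(f"SE tax if reported on Schedule C: ${as_schedule_c:,.2f}")
print(f"SE tax if reported on Schedule E: ${as_schedule_e:,.2f}")
```

Roughly $1,400 of self-employment tax turns on where this hypothetical $10,000 is reported, which is why schedule misclassification can change the tax owed even when income tax is unaffected.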
Misreported Loss from Rental Real Estate

The most common reason taxpayers misreported loss from their rental real estate activities was that they also used their rental property as a residence—including taxpayers who rented their properties at less than a fair rental price—and claimed a loss to which they were not entitled. Other taxpayers misreported loss because they were not actively participating in their rental real estate activities or exceeded applicable income limitations for deducting a loss from rental real estate.

Limited Information Reporting and Complexity Hinder Compliance, and Various Options Exist for Improving Compliance

Limited information reporting, complexity, and the number of taxpayers misreporting are challenges IRS faces in ensuring compliance with rental real estate reporting. IRS receives information returns for a relatively small number of taxpayers with rental real estate activity. For tax year 2001, for example, IRS received Forms 1099-MISC reporting rent received for about 327,000 taxpayers who reported rent on their tax returns. By comparison, there were about 8.2 million taxpayers who reported rent on their tax returns for whom IRS did not receive a corresponding Form 1099-MISC from a third party reporting the rent. For taxpayers who deduct rental real estate expenses, IRS generally receives information returns from third parties only for mortgage interest that taxpayers pay. For tax year 2001, about 55 percent of taxpayers who reported rental real estate activity deducted mortgage interest, accounting for about 36 percent of the total amount of all rental expenses, including depreciation, that taxpayers deducted for that year. As a result, about 64 percent of the total amount of all rental real estate expenses taxpayers reported may not have been subject to information reporting.
IRS enforcement officials cited limited information reporting as a major challenge in ensuring compliance for the reporting of rental real estate activities because, without third-party information reporting, it is difficult for IRS to systematically detect taxpayers who fail to report any rent or to determine whether the rent and expense amounts taxpayers report are accurate. The officials also told us that because third parties are not required to include on Form 1098 the address of the property for which they are reporting mortgage interest, IRS is less able to determine if the interest is for a property used for rental or personal purposes. In addition, limited information reporting results in lower levels of taxpayer voluntary compliance in reporting rental real estate activities when compared to other types of income or activities covered by more extensive information reporting. Taxpayers tend to more accurately report income that third parties report on information returns—such as Forms 1099-MISC and 1098—because the income is transparent to taxpayers as well as to IRS. As shown in figure 1, individual taxpayers misreport receiving rent to a greater extent than they misreport income subject to more extensive information reporting. Although information reporting tends to lead to high rates of compliance, requiring all individuals who pay rent to report to IRS the annual amount of rent they pay to property owners or their intermediaries is not practical. Officials at IRS and representatives from the tax return preparation industry told us that although such a requirement would likely improve compliance, it would place a substantial burden on taxpayers, who may not have any incentive to comply. Likewise, the requirement would be very difficult for IRS to enforce given the large number of potential information return filers.
IRS compliance officials and representatives of the paid tax return preparer industry told us that the complexity involved in reporting rental real estate activities also hampers compliance. For example, the rules surrounding whether to file Forms 1099-MISC or how to accurately depreciate property may be challenging for some taxpayers to understand. Due to limited information reporting and the complexity of reporting for rental real estate activity, IRS primarily addresses rental real estate misreporting through field and office examinations. IRS supplements examination coverage for rental real estate misreporting through AUR, to a limited extent, based on rental income amounts reported on Form 1099-MISC or mortgage interest reported on Form 1098. Given the limited extent of information reporting, it can be difficult for IRS to identify misreporting taxpayers. Also, field examinations are resource intensive and on average address relatively large amounts of misreporting compared to the misreported amounts for most taxpayers who misreported rental real estate activity. For example, for fiscal year 2006, the average tax assessment IRS recommended for field examinations of individual taxpayers was about $18,000 per taxpayer. We found that taxpayers who underreported their net income from rental real estate misreported an average of $4,055. As a consequence, field examinations may not be a cost-effective tool for targeting most rental real estate misreporting. Further, the number of individual taxpayers misreporting rental real estate activity (4.8 million) is large relative to the approximately 300,000 field examinations of individual taxpayers IRS conducted in fiscal year 2006. Under IRS’s examination program, relatively small amounts of misreporting are more likely to be addressed through correspondence examinations.
Various Options Exist for Improving Rental Real Estate Reporting Compliance

Although, as previously discussed, implementing broad, new third-party information reporting requirements for rental real estate activities is not practical, changing existing requirements is one of various options that could improve rental real estate reporting compliance. One change to current information reporting requirements that could improve rental real estate income reporting compliance is to require third parties to include mortgaged property addresses when reporting taxpayers’ mortgage interest payments on Form 1098. As previously noted, not having property addresses on Form 1098 hinders IRS’s efforts to enforce rental real estate reporting compliance. IRS officials told us that having third parties consistently report property addresses on Form 1098 would help them in their enforcement efforts, for example, by allowing them to better distinguish between owner-occupied and rental properties. Also, taxpayers who receive Forms 1098 that include the addresses of their rental properties could be deterred from failing to report activity for their properties. Representatives of the mortgage banking industry told us that it would be feasible to report property address information on Form 1098 because mortgage lenders maintain this information. The representatives also told us that reporting this information would involve costs because lenders would have to change their reporting systems. They said that such costs would be lessened if lenders were only required to send property address information to IRS, which they would send electronically, and not to taxpayers, for whom the lenders may send Forms 1098 on paper, as changing systems for electronic submissions is less costly than changing systems for printed forms. However, excluding property address information from Forms 1098 sent to taxpayers might eliminate any deterrent effect.
Another potential change to existing information reporting requirements is to expand the requirement for taxpayers to file Forms 1099-MISC for certain payments they deduct as expenses, for example, when taxpayers pay contractors to perform repair work on their rental properties. Existing law on whether taxpayers must file information returns on selected rental real estate expenses they incur requires a case-by-case analysis that depends on the facts and circumstances for each taxpayer. Currently, only taxpayers whose rental real estate activity is considered a trade or business are required to report payments on Form 1099-MISC. However, the law for filing information returns does not clearly spell out how to determine whether taxpayers’ rental real estate activity should be considered a trade or business, and IRS must make this determination on a case-by-case basis. Without concrete statutory language, it may be difficult for taxpayers who report rental real estate activity to determine if they are required to file Forms 1099-MISC for certain expense payments they make. As a result, it is possible that some taxpayers who should file Forms 1099-MISC for payments they make in the course of renting out real estate may not file the forms. IRS does not have data on the number of taxpayers who file Form 1099-MISC reporting expense payments from rental real estate activities. Taxpayers are not required to indicate on Form 1099-MISC the type of activity for which they are filing the form (e.g., business activity on Schedule C versus rental real estate activity on Schedule E). Therefore, under current statutory and regulatory guidance, it is not possible for IRS to determine the activities for which Forms 1099-MISC are filed. Although it would be a departure from the trade or business requirement, making all taxpayers with rental real estate activity subject to the Form 1099-MISC filing requirement would provide clear guidance for who must file Forms 1099-MISC.
Such clarity could benefit both IRS and taxpayers. Taxpayers would have clear direction on whether they had to file the form. As previously discussed, we found through our file review that a large amount of misreported net income from rental real estate was from taxpayers for whom IRS disallowed expenses that the taxpayers could not substantiate. It is likely that some of these taxpayers reported on their tax returns expenses that they did not incur. Requiring taxpayers to file information returns for certain rental real estate expense payments could deter these taxpayers from reporting expenses they did not incur because IRS would have a record of expenses it could use as part of an enforcement action. Given the magnitude of misreporting for taxpayers who could not substantiate some rental real estate expenses they deducted, even small improvements in compliance could yield substantial revenue. Also, a change to the Form 1099-MISC filing requirement would put taxpayers reporting rental real estate on par with other individual taxpayers, such as sole proprietors of other types of trades or businesses, who are generally required to file Forms 1099-MISC. Currently, taxpayers whose rental real estate activities are not considered a trade or business and sole proprietors of other types of business who generate similar amounts of gross income are treated differently under the information return statutes. For example, a taxpayer with rental real estate activities not considered a trade or business would not have to file an information return reporting a payment made to an individual contractor for repair services whereas a sole proprietor engaged in a trade or business would have to file for a similar service, assuming the payments exceeded the minimum reportable amount threshold.
It is questionable whether, without additional statutory authority, IRS can require all taxpayers with rental real estate activities to report expense payments on Forms 1099-MISC regardless of whether their activities are trades or businesses. An expansion of the Form 1099-MISC filing requirement could have the added benefit of improving compliance among payment recipients, such as contractors who are sole proprietors, because additional payments would be transparent to IRS and the payment recipients. IRS estimated that sole proprietors misreported at a relatively high rate and accounted for a significant portion—$68 billion—of the tax gap for tax year 2001. Extending the Form 1099-MISC filing requirement to additional taxpayers who report rental real estate involves costs and burdens for taxpayers. Many taxpayers who were not required to file in the past or were unaware of the filing requirement would have to learn the reporting rules and file Form 1099-MISC. However, not all individual taxpayers with rental activity would have to file Form 1099-MISC because not all will have paid $600 or more to a single individual during the tax year, which is the threshold for the reporting requirement. One way to further limit the number of taxpayers with rental real estate activity that would be required to file Form 1099-MISC would be to increase the exemption amount for payments to a single person. The $600 or more threshold for reporting payments made in the course of a trade or business has not been updated since the 1950s; because of inflation, a greater percentage of payments are likely subject to reporting than when the requirement was first put in place. One obstacle for taxpayers in determining if they are required to file Form 1099-MISC is that generally taxpayers do not have to file the forms for payments made to corporations.
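The threshold test described above can be sketched as a small aggregation: payments to each payee are summed over the tax year, corporate payees are generally excluded, and a Form 1099-MISC is generally required for any payee whose total reaches $600. The payee names and the helper function here are hypothetical, and this is not a complete statement of the filing rules.

```python
# Sketch of the Form 1099-MISC filing test: aggregate a year's
# payments per payee; a form is generally required at $600 or more.
# Payments to corporations are generally exempt from the requirement.

from collections import defaultdict

THRESHOLD = 600  # reporting threshold, unchanged since the 1950s

def payees_requiring_1099(payments):
    """payments: iterable of (payee, amount, is_corporation) tuples.
    Returns the payees whose aggregated payments meet the threshold."""
    totals = defaultdict(float)
    for payee, amount, is_corporation in payments:
        if not is_corporation:  # corporate payees generally exempt
            totals[payee] += amount
    return sorted(p for p, total in totals.items() if total >= THRESHOLD)

# A hypothetical landlord's expense payments for one tax year:
rental_year = [
    ("Ace Plumbing (sole proprietor)", 250.0, False),
    ("Ace Plumbing (sole proprietor)", 400.0, False),  # totals $650
    ("Smith Lawn Care", 500.0, False),                 # under $600
    ("BigCo Roofing Inc.", 2_000.0, True),             # corporation
]
print(payees_requiring_1099(rental_year))  # only Ace Plumbing qualifies
```

Note how the corporate-payee exemption forces the landlord to know each payee's business form, which is the obstacle the report discusses next.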
As such, taxpayers must figure out whether the persons or businesses to which they make payments are incorporated to determine whether they need to file Form 1099-MISC. A way to remove this obstacle is to expand information reporting to include payments made to corporations. In the past, we have identified requiring information reporting on payments made to corporations as a way to improve compliance. Also, the administration in its fiscal year 2009 budget proposed requiring information reporting on payments to corporations. Additionally, IRS would need to inform taxpayers and others of the expanded Form 1099-MISC filing requirement. Communicating the requirement could be complicated by the different deadlines for filing Form 1099-MISC and the individual tax return. Taxpayers must send Form 1099-MISC to payment recipients by January 31, whereas the deadline to file tax returns is April 15. Filing Form 1099-MISC past the deadline may result in a penalty. Taxpayers who are newly required to file Form 1099-MISC may not learn of the requirement until they begin to prepare their tax returns, which some may not begin to do until after the January 31 Form 1099-MISC filing deadline. IRS compliance officials told us that taxpayers who realize that they are late in filing required Forms 1099-MISC may choose not to file them rather than run the risk of incurring penalties from filing late. One way to lessen the impact of potential penalties on taxpayers who are newly required to file Form 1099-MISC would be to waive penalties for late filers the first year they are required to file. We are examining issues involved with filing Form 1099-MISC in a forthcoming report. As such, we did not examine in depth some of the issues involved with filing Form 1099-MISC that we highlighted in this report, such as increasing the reporting threshold, requiring reporting for payments made to corporations, and penalties.
Requiring Taxpayers to Report Additional Information on Tax Returns for Rental Real Estate Activity Could Improve Rental Real Estate Reporting Compliance

Requiring taxpayers to report additional information on their tax returns for their rental real estate activity could improve rental real estate reporting compliance by eliciting more accurate information from taxpayers and providing IRS with additional information to detect misreporting. As previously discussed, the requirements for reporting rental real estate activities are complex. Such complexity may be why individual taxpayers reporting this activity use paid preparers more frequently than other individual taxpayers. For example, for tax year 2001, an estimated 77 percent of individual taxpayers reporting rental real estate activity used a paid tax return preparer. By comparison, an estimated 56 percent of all individual taxpayers used a paid preparer for that tax year. As we have said in past reports, paid preparers are a critical quality-control checkpoint for the tax system and the quality of service they provide is important. However, taxpayers with rental real estate activity who used a paid preparer were statistically as likely to have misreported as taxpayers with rental real estate activity who prepared their returns themselves. One of the challenges that paid preparers encounter when preparing returns for taxpayers with rental real estate activity is that the preparers do not always receive complete or accurate information from taxpayers. Requiring preparers to verify taxpayers’ documentation of rental real estate expenses or perform increased due diligence of taxpayers’ rental real estate activities could improve the accuracy of what taxpayers report. However, both of these requirements would involve substantial costs and burdens for paid preparers, taxpayers, and IRS that could outweigh any related compliance benefits.
Requiring taxpayers to report additional information for their rental real estate activities on their tax returns is an alternative way to elicit more accurate and complete information from taxpayers who use paid preparers—as well as taxpayers who self-prepare. These additions would add a level of due diligence because paid preparers would have to obtain additional information on taxpayers’ income and expenses in order to complete the taxpayers’ returns. For example, requiring taxpayers to report on Part I of Schedule E if they filed a Form 1099-MISC for one or more deducted expenses, such as through a check-the-box question, could increase taxpayer awareness of the requirement and prompt paid preparers to ask taxpayers questions about the nature of their clients’ expenses, which could improve the accuracy of the expenses taxpayers report. IRS compliance officials were uncertain if having this information for use in its enforcement efforts would provide additional compliance benefits. As we are examining issues involved with filing Form 1099-MISC in a forthcoming report, we did not examine whether requiring taxpayers to indicate on their tax returns if they filed a Form 1099-MISC would be a cost-effective way to improve compliance. As previously mentioned, about 166,000 taxpayers improperly included the value of land when depreciating their rental properties. Requiring taxpayers to report on their tax returns the basis amount attributed to land versus structure when depreciating rental property could further improve compliance. With such a requirement, IRS could identify taxpayers who depreciated land or may have undervalued their land when calculating depreciation. This requirement would not be unprecedented, as IRS currently requires taxpayers to report the value of their land when determining the amount of depreciation to deduct for the business use of their homes. 
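The land-versus-structure error described above can be illustrated with a short calculation. Land is not depreciable, so only the basis allocated to the structure may be recovered; residential rental property is generally depreciated over 27.5 years. The purchase price and land value below are hypothetical, and this simplified straight-line sketch ignores the mid-month convention and other depreciation details.

```python
# Sketch of the land-inclusion depreciation error: the depreciable
# basis is the purchase price minus the value of the land, because
# land cannot be depreciated. Simplified straight-line calculation.

RECOVERY_YEARS = 27.5  # general recovery period, residential rental

def annual_depreciation(purchase_price: float, land_value: float) -> float:
    """Correct annual deduction: depreciate the structure only."""
    depreciable_basis = purchase_price - land_value
    return depreciable_basis / RECOVERY_YEARS

price, land = 200_000.0, 50_000.0            # hypothetical property
correct = annual_depreciation(price, land)   # structure only
mistaken = price / RECOVERY_YEARS            # error: land in the basis

print(f"correct:  ${correct:,.2f} per year")   # $5,454.55
print(f"mistaken: ${mistaken:,.2f} per year")  # $7,272.73, overstated
```

In this hypothetical, including the land overstates the deduction by more than $1,800 a year, which is the kind of misreporting a land-versus-structure line on the return would expose.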
Also, through our file review, we found that taxpayers do not always report on their tax returns the type of property they rented out. The instructions to Schedule E currently provide the example of “townhouse” as the type of information taxpayers should report on property type on Part I of Schedule E. The instructions do not ask the taxpayer to report if the house was rented for residential, vacation, commercial, or other purposes, for example. If IRS is to use information on property type in its enforcement efforts, it needs to receive consistent information, such as that which taxpayers could provide by answering a check-the-box question that included the various types of properties taxpayers might rent out. Although such a check-the-box question would provide IRS with more consistent information on property type, including such a question on Part I of Schedule E would involve challenges. For example, IRS would need to clearly define the different types of properties taxpayers might report, and some rental properties could be used for multiple purposes. Also, IRS compliance officials were uncertain to what extent IRS could use this information in its enforcement efforts. Finally, we found that about 36 percent of taxpayers reporting rental real estate activity did not include the complete address of their rental properties on their tax returns for tax year 2001. Although the instructions to Schedule E direct taxpayers to report the street address, city or town, and state for their properties (the instructions state that taxpayers do not have to report zip codes), the language on the actual form directs taxpayers to list the “location” of their rental real estate properties. Specifically directing taxpayers to report complete property addresses could increase the number of taxpayers reporting addresses, rather than general location information. 
According to IRS examination officials, having complete address information would allow IRS examiners to use commercially available data sources on property values to help determine fair rental prices for taxpayers whose returns are selected for examination. Whether having complete address information could also cost-effectively enhance IRS’s examination selection process is less clear, according to the officials.

Providing Taxpayers with Additional Guidance on Rental Real Estate Reporting Requirements Could Improve Compliance, Although to What Extent Is Difficult to Measure

Another way to help taxpayers more accurately report their rental real estate activities is to provide them with additional guidance to help them complete their tax returns. The impact on the tax gap from providing additional guidance may be difficult to measure, as IRS researchers have found it difficult to determine the extent to which taxpayer services, such as tax form instructions, improve compliance among taxpayers who want to comply. Likewise, providing taxpayers with additional guidance does not guarantee that the taxpayers will actually read the guidance, especially those who use paid preparers, and would not affect taxpayers who willfully misreported. Regardless, providing taxpayers with additional guidance could produce compliance benefits among taxpayers who want to comply that exceed the related implementation costs, which may be low relative to other actions IRS could take to improve compliance, such as increased enforcement efforts. Providing additional guidance could be particularly helpful for taxpayers who use tax return preparation software to prepare their returns if the software included or is based on the guidance. For example, regardless of whether the requirement for who must file a Form 1099-MISC changes in the future, at least some taxpayers who reported rental real estate activity are currently required to file.
However, the instructions to Schedule E do not discuss this requirement. Including guidance on the Form 1099-MISC filing requirement in the instructions to Schedule E could inform taxpayers of the requirement and could result in more taxpayers filing the form. Likewise, IRS’s publication on residential rental property explains that land cannot be depreciated and provides guidance on how taxpayers can determine the value of their land. Although the instructions to Schedule E also state that land cannot be depreciated, they do not provide guidance on how to determine the value of land. Representatives from the tax return preparation industry told us that taxpayers are more likely to review tax form instructions than IRS publications. As such, providing guidance on resources available to taxpayers for determining how to distinguish between the cost of land versus the cost of structures in the instructions to Schedule E could help taxpayers more accurately determine the value of their land, which could help them more accurately report depreciation. With regard to recordkeeping, IRS produces a publication with general guidance for individual taxpayers. However, recordkeeping requirements for expense deductions are discussed in the instructions to Part I of Schedule E only with regard to deducting mortgage or other interest. Providing general guidance on recordkeeping requirements in the instructions to Part I of Schedule E could encourage taxpayers to keep better records, which could lead to more accurate reporting. The instructions could also include language similar to that in IRS’s publication on recordkeeping, explaining that taxpayers who, upon IRS examination, cannot produce records to substantiate items they report on their tax returns may be subject to additional taxes and penalties. Including this language could increase voluntary compliance by deterring taxpayers from reporting expenses they did not incur.
Outreach to Taxpayers and Other Stakeholders Could Improve Understanding of Reporting Requirements and Common Types of Misreporting

Given that some taxpayers may not read IRS’s guidance, an additional way to inform taxpayers of the reporting requirements is to send them notices covering key requirements and common mistakes taxpayers make with regard to reporting rental real estate activity. In addition to making taxpayers aware of the reporting requirements, such notices could serve as a reminder to taxpayers that IRS is aware that they have rental real estate activity, which in turn could serve as a deterrent to intentional misreporting. These notices could be sent to some taxpayers, such as taxpayers reporting rental real estate for the first time, or all taxpayers who report rental real estate activity. Sending these notices to taxpayers could also help inform paid preparers of the requirements and common types of misreporting given that taxpayers who received such notices would likely share them with their paid preparers, according to representatives of the tax return preparation industry. However, the cost-effectiveness of sending notices like these is not clear. IRS would have to dedicate resources to take calls from taxpayers who receive the notices and have questions, which would be a likely scenario according to IRS communications officials and representatives of the tax return preparation industry. Producing and sending the notices would also involve costs. Also, IRS would not necessarily be able to target taxpayers who potentially misreported their rental real estate activities when sending the notices. Given these limitations, it would be important for IRS to test the effectiveness of sending notices to taxpayers to determine if they can increase compliance in a cost-effective manner.
A way to indirectly inform taxpayers of the reporting requirements and common types of misreporting for rental real estate is for IRS to enhance its focus on rental real estate in its outreach efforts to paid preparers and other external stakeholders. Outreach to paid preparers could be particularly important given that about 80 percent of individual taxpayers who report rental real estate use a paid preparer, but these taxpayers were as likely to have misreported as taxpayers who self-prepared their tax returns. As previously discussed, IRS produced and disseminated a fact sheet on the requirements for reporting rental real estate activity, although the fact sheet did not include information on common types of misreporting. Providing additional information on the common types of misreporting, such as those we found that IRS identified through NRP examinations, in outreach efforts could help paid preparers assist individual taxpayers to comply with the reporting requirements. An IRS examination official told us that outreach, such as contacting national paid preparer groups or providing information at IRS Nationwide Tax Forums and other conferences that paid preparers attend to fulfill their continuing professional education requirements, could be a good approach to improving paid preparer due diligence. An IRS official involved with outreach to external stakeholders told us that although in the past IRS had targeted its outreach efforts primarily to individual paid preparers, IRS is starting to reach out to tax return preparation companies and tax preparation software vendors as well. These outreach efforts could include providing information on common types of rental real estate misreporting. Likewise, IRS compliance and stakeholder liaison officials suggested including property managers within IRS outreach efforts for this area of compliance. 
Conclusions

The disparity in individual taxpayer reporting compliance between net income from sources subject to minimal information reporting, such as rental real estate, and income that is subject to extensive information reporting, such as wages, results in substantial revenue loss and undermines the fairness of our tax system. However, significant obstacles stand in the way of improving tax compliance by owners of rental real estate. There are few practical opportunities for additional third-party information reporting. Some taxpayers may not fully understand the complex rules governing the reporting of rental real estate activity, in part because their heavy reliance on paid preparers means that many taxpayers may not read IRS guidance. Detecting misreporting often requires face-to-face examinations, which are costly to IRS and reach relatively few individual taxpayers. Nevertheless, opportunities exist to improve compliance by individual taxpayers who own rental real estate. First, existing information reporting requirements could be improved. For example, under current law, only taxpayers whose rental real estate activity is considered a trade or business are required to report expense payments on Form 1099-MISC, but the law for filing the form does not clearly spell out how to determine whether taxpayers’ rental real estate activity should be considered a trade or business. To hold taxpayers with rental real estate to the same requirements for filing Form 1099-MISC as taxpayers whose activities are considered a trade or business would provide clarity about who is required to file. Likewise, requiring property addresses on Forms 1098 that report mortgage interest could provide IRS with additional information to identify taxpayers who may not have reported rental real estate activity and could deter taxpayers from failing to report.
Second, requiring taxpayers to report additional information on their tax returns for their rental real estate activities could force taxpayers to pay more attention to IRS guidance or seek advice from paid preparers, act as a deterrent to intentional misreporting, and compel paid preparers to obtain more accurate income and expense information from taxpayers. Third, although it is unclear to what extent individual taxpayers read guidance on reporting requirements, especially taxpayers who use paid preparers, enhancements to IRS’s guidance on rental real estate reporting requirements could reduce unintentional misreporting. Such enhancements might also be useful to paid preparers and could be reflected in tax preparation software. Fourth, improving outreach efforts could also improve compliance. Given the uncertainty about whether individual taxpayers read IRS guidance, researching the effectiveness of outreach to taxpayers could be beneficial. Also, given the heavy reliance on paid preparers by owners of rental real estate, additional outreach to paid preparers, among others, could be an effective way to indirectly reach large numbers of taxpayers. Matter for Congressional Consideration To provide clarity about which taxpayers with rental real estate activity must report expense payments on information returns and to provide greater information reporting, Congress should consider amending the Internal Revenue Code to make all taxpayers with rental real estate activity subject to the same information reporting requirements as other taxpayers operating a trade or business. Recommendations for Executive Action We are making nine recommendations to the Commissioner of Internal Revenue. To help IRS identify taxpayers who may have misreported their rental real estate activity, we recommend that the Commissioner of Internal Revenue require third parties to report mortgaged property addresses on Form 1098 mortgage interest statements. 
To elicit more accurate information from taxpayers on their rental real estate activities, we recommend that the Commissioner of Internal Revenue
- require taxpayers to report on the individual tax return the basis amount attributed to land versus structure when depreciating rental real estate;
- determine if IRS uses property type information that taxpayers currently report on Schedule E in its efforts to enforce rental real estate reporting compliance; if it is determined that IRS uses the information, the Commissioner should require taxpayers to provide specific information on Part I of Schedule E about the type of properties for which they are reporting activity, for example by answering a check-the-box question, and if it is determined that IRS does not use the information, the Commissioner should not require taxpayers to report any property type information; and
- require taxpayers to report the exact “address” of their rental real estate properties on Part I of Schedule E instead of property “location,” as currently worded, and require taxpayers to report property zip codes.
To help taxpayers understand the requirements related to certain aspects of reporting rental real estate activities, we recommend that the Commissioner of Internal Revenue include guidance within the instructions to Part I of Schedule E on
- the requirement for some taxpayers with rental real estate activity to report on Form 1099-MISC certain payments made in the course of renting out real estate;
- resources available to taxpayers for determining how to distinguish between the cost of land versus the cost of structures; and
- recordkeeping requirements and the potential for disallowed expenses and penalties if taxpayers cannot produce documentation for reported expenses upon examination by IRS. 
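The land-versus-structure recommendation matters because only the structure is depreciable; residential rental property is depreciated straight-line over a 27.5-year recovery period. The sketch below, using hypothetical dollar amounts chosen for illustration only, shows how improperly including land in the depreciable basis overstates the annual deduction:

```python
# Straight-line depreciation of residential rental property over the
# 27.5-year recovery period that applies to residential rental real estate.
# All dollar amounts are hypothetical, for illustration only.
purchase_price = 275_000   # total basis of the property
land_value = 55_000        # land is not depreciable
RECOVERY_YEARS = 27.5

# Correct calculation: subtract the value of land before depreciating
depreciable_basis = purchase_price - land_value
annual_depreciation = depreciable_basis / RECOVERY_YEARS
print(annual_depreciation)  # 8000.0

# Improperly including land, as some taxpayers did, overstates the deduction
overstated_depreciation = purchase_price / RECOVERY_YEARS
print(overstated_depreciation - annual_depreciation)  # 2000.0 per year
```

In this hypothetical case, a taxpayer who failed to subtract the land value would overstate the depreciation deduction by $2,000 every year of the recovery period.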
To enhance IRS’s outreach efforts, we recommend that the Commissioner of Internal Revenue evaluate whether sending notices to some or all taxpayers who report rental real estate activity would be a cost-effective way to reduce misreporting of some types of rental real estate activity and expand outreach efforts to external stakeholders, such as paid tax return preparers, tax return preparation software providers, and industry groups related to rental real estate, to include common types of misreporting for rental real estate activity, such as those identified in this report. Agency Comments and Our Evaluation In written comments on a draft of this report, which are reprinted in appendix II, IRS agreed with seven of our nine recommendations. However, IRS agreed only to consider implementing our recommendation to require third parties to report mortgaged property addresses on Form 1098 mortgage interest statements, citing the burden the requirement could place on third parties. Although it is important to take third-party burden into account when considering whether to require information reporting, representatives of the mortgage banking industry told us that it would be feasible to report property address information on Form 1098 because mortgage lenders already maintain this information. Also, IRS disagreed with our recommendation to require taxpayers to report on the individual tax return the basis amount attributed to land versus structure when depreciating rental real estate properties. Specifically, IRS stated that taxpayers report depreciation on Form 4562 and not on Schedule E to the individual tax return, and that the basis of land is not used in calculating depreciation. However, taxpayers are not required to report the basis amount attributable to land on Form 4562. Further, we did not specifically recommend on which form or schedule taxpayers should be required to report the value of their land for properties they depreciate. 
Also, we believe that, in effect, the basis of land is used in calculating depreciation because taxpayers must subtract the value of land from the overall basis of their properties. We found that for tax year 2001, about 166,000 individual taxpayers included the value of land when calculating depreciation for their rental properties. We believe that requiring taxpayers to report the value of land in conjunction with reporting depreciation calculations— whether on Form 4562, Schedule E, or elsewhere—would serve to reduce the number of taxpayers who improperly include the value of land when calculating depreciation for their rental properties. We agree with IRS that more guidance to taxpayers is needed about allocating basis to land. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Chairman and Ranking Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. Copies will be made available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Scope and Methodology To provide information on the extent and primary types of individual taxpayer misreporting of rental real estate activities, we relied on data and examination case files from the Internal Revenue Service’s (IRS) most recent National Research Program (NRP) study of individual taxpayers. Through NRP, IRS selected and reviewed a stratified random sample of 45,925 individual income tax returns from tax year 2001. 
We selected a sample of cases that included taxpayers with rental real estate activity from this NRP sample. The NRP sample is divided across 30 strata by the type of individual tax return filed and income levels. IRS accepted as filed some of the NRP returns, accepted others with minor adjustments, and examined the remainder of returns either through correspondence or face-to-face meetings with taxpayers. If IRS examiners determined that taxpayers misreported any aspect of the selected tax returns, they adjusted the taxpayers’ income accordingly and assessed additional taxes. IRS captured data from tax returns and examination results in the NRP database, including data for rental real estate activities. However, because taxpayers report expenses from both rental real estate and royalty activities on the same lines of Part I of Schedule E, it is not possible to determine with certainty whether adjustments examiners made in the NRP database to these expense lines were for rental real estate or royalty activities. Likewise, the data do not include detailed information on why examiners made adjustments to rental real estate activities. Therefore, to distinguish between misreporting for rental real estate versus royalty expenses and to identify the primary types of misreporting of rental real estate activities, we selected a statistical sample of NRP examination case files to review. We selected a sample of 1,202 cases from the tax returns IRS examined from its NRP sample. We selected tax returns for taxpayers who reported rental real estate or royalty activity on Part I of Schedule E. 
Our sample was made up of four groups: (1) taxpayers for whom IRS made adjustments to rental real estate activity, (2) taxpayers who reported rental real estate activity for whom IRS did not make adjustments to rental real estate activity, (3) taxpayers for whom IRS made adjustments to royalty activity, and (4) taxpayers who reported royalty activity for whom IRS did not make adjustments to royalty activity. We included cases for taxpayers who accurately reported rental real estate activity in order to make comparisons between taxpayers for whom IRS did and did not make adjustments to rental real estate activity. We selected cases with royalty activity to estimate misreporting from royalties, which IRS combines with misreporting from rental real estate to estimate the tax gap for these two activities. The first group of cases consisted of 752 cases of taxpayers for whom IRS made an adjustment to rents received, expenses, or loss reported on Part I of Schedule E. Within this group we included, where possible, the 10 cases in each stratum for which the adjustments examiners made had the largest impact on the total amount of these adjustments for all taxpayers when weighted for the entire population of individual taxpayers (a total of 204 cases). We focused on cases with the largest adjustments, in weighted terms, because including these cases would improve the level of confidence of any estimates of the total amount of adjustments to taxpayers’ rental real estate activities. We selected the remaining 548 cases in this first group of cases at random and in proportion to the number of NRP returns for which IRS made adjustments to taxpayers’ rent, expenses, or loss reported on Part I of Schedule E. Because our sample is a subsample of the NRP sample and is subject to sampling error, we added cases, where applicable, to ensure that each stratum contained a minimum of 5 randomly selected cases. 
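The allocation just described, selection in proportion to stratum size with a floor of 5 randomly selected cases per stratum, can be sketched as follows. The stratum labels, counts, and function name are hypothetical illustrations, not GAO's actual selection software:

```python
import math

def allocate_sample(strata_counts, total_sample, minimum=5):
    """Allocate a subsample across strata in proportion to each stratum's
    size, then enforce a per-stratum minimum where the stratum is large
    enough to supply it. Illustrative only; hypothetical parameters."""
    total = sum(strata_counts.values())
    # Proportional allocation, rounded down
    alloc = {s: math.floor(total_sample * n / total)
             for s, n in strata_counts.items()}
    # Enforce the minimum number of randomly selected cases per stratum,
    # capped at the number of cases actually available in that stratum
    for s, n in strata_counts.items():
        alloc[s] = min(n, max(alloc[s], minimum))
    return alloc

# Hypothetical stratum sizes for illustration
counts = {"A": 400, "B": 120, "C": 8}
print(allocate_sample(counts, 100))  # {'A': 75, 'B': 22, 'C': 5}
```

Note that stratum C receives 5 cases rather than its proportional share of 1, mirroring how small strata were topped up to support stratum-level estimates.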
For the second group of cases we selected 248 cases for taxpayers who reported rents received, expenses, or loss on Part I of Schedule E that IRS did not adjust. We selected these cases at random and in proportion to the NRP sample through an iterative process ensuring, where possible, that a minimum of 5 cases was included in each stratum. The third group of cases included 102 cases where IRS made an adjustment to taxpayers’ royalties received. The aggregate of these 102 cases and 30 cases with adjustments to royalties received selected as part of the first group of cases account for all 132 cases in the NRP database where examiners made adjustments to taxpayers’ royalties received. The fourth group consisted of 100 cases, selected at random and in proportion to the NRP sample, for taxpayers who did not report rents received and reported royalties received that IRS did not adjust. We ensured, where possible, that a minimum of 5 cases was included in each stratum. Of the 1,202 cases we selected for our sample, we reviewed 1,000 cases. We did not review the remaining 202 cases because either IRS did not provide the files in time to include in our review (185 cases) or the files did not contain examination workpapers essential to determine if or why examiners made adjustments to taxpayers’ rental real estate activities (17 cases). We requested the cases at two points, in late May 2007 and late June 2007, and periodically checked on the status of our requests with IRS. We were only able to review cases that arrived by January 11, 2008, in order to meet our agreed-upon issue date for the report. We recorded information from the case files using a data collection instrument (DCI) that we developed. To ensure that our data collection efforts conformed to GAO’s data quality standards, each DCI entry that a GAO analyst completed was reviewed by another GAO analyst. 
The reviewers compared the data recorded within the DCI entry to the data in the corresponding case file to determine whether they agreed on how the data were recorded. When the analysts’ views on how the data were recorded differed, they met to reconcile any differences. The estimates we included in this report were based on the NRP database and the data we collected through our file review and were generated using statistical software. All computer programming for the resulting statistical analyses was checked by a second, independent analyst. Our final sample was large enough either to generalize the results of our review to the entire population of individual taxpayers with rental real estate activity or to produce margins of error small enough to yield meaningful estimates, unless otherwise noted in the report. Because we followed a probability procedure based on random selection, our sample is only one of a large number of samples that we might have selected. Since each sample could have resulted in different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval, plus or minus the margin of error. These intervals would contain the actual population value for 95 percent of the samples we could have selected. Unless otherwise noted, all percentage estimates have a margin of error of less than 5 percentage points; value estimates have a margin of error of less than 8 percent. We assessed whether the examination results and data contained in the NRP database were sufficiently reliable for the purposes of our review. For this assessment, we interviewed IRS officials about the data, collected and reviewed documentation about the data and the system used to capture the data, and compared the information we collected through our case file review to corresponding information in the NRP database to identify inconsistencies. 
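As a simplified illustration of the interval reporting described above, the margin of error for an estimated percentage can be computed with the normal approximation, as sketched below. This sketch deliberately ignores the stratified design of the NRP sample, whose actual standard errors reflect the stratum weights; the sample size and estimate shown are hypothetical:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of a 95 percent confidence interval for an estimated
    proportion, using the normal approximation (z = 1.96 for 95 percent).
    Simplified: the report's actual estimates come from a stratified
    probability sample, so the true standard errors differ."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical example: an estimated 53 percent rate from 1,000 cases
p_hat, n = 0.53, 1000
moe = margin_of_error(p_hat, n)
print(f"95% CI: {p_hat - moe:.3f} to {p_hat + moe:.3f}")
```

Under these hypothetical figures the margin of error is about 3.1 percentage points, consistent with the report's convention of noting margins of error of less than 5 percentage points for percentage estimates.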
Based on our assessment, we determined that the NRP database was sufficiently reliable for the purposes of our review. We also used IRS’s Statistics of Income (SOI) file for individual taxpayers from tax years 2001 through 2005, which relies on a stratified probability sample of individual income tax returns, to develop estimates on characteristics for taxpayers who reported rental real estate activity. Where possible, we compared our analyses against published IRS data to determine that the SOI database was sufficiently reliable for the purposes of our review. To identify challenges IRS faces in ensuring rental real estate reporting compliance and to assess options for increasing compliance, we reviewed IRS forms, publications, and other taxpayer guidance related to reporting rental real estate activity and documents from IRS’s enforcement programs. We also reviewed data from the Automated Underreporter program and published data on examinations to determine the extent of IRS’s enforcement efforts. We also examined data on individual taxpayers from SOI to determine the extent to which (1) individual taxpayers use paid tax return preparers and (2) third parties report information on rental real estate activity on Form 1099-MISC. In addition, we interviewed officials from IRS’s Small Business/Self-Employed; Wage and Investment; and Research, Analysis, and Statistics divisions who have knowledge of rental real estate compliance issues. We also spoke with representatives of the American Institute of Certified Public Accountants, National Association of Enrolled Agents, National Association of Residential Property Managers, and Mortgage Bankers Association to get their perspectives on issues related to rental real estate reporting compliance. We conducted this performance audit from May 2007 through August 2008 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Internal Revenue Service Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the contact named above, Charlie Daniel, Assistant Director; Jeff Arkin; Ellen Grady; Laura Henry; Shirley Jones; Winchee Lin; John Mingus; Karen O’Conor; Ellen Rominger; Jeff Schmerling; Andrew Stephens; and Elwood White made key contributions to this report. Related GAO Products Highlights of the Joint Forum on Tax Compliance: Options for Improvement and Their Budgetary Potential. GAO-08-703SP. Washington, D.C.: June 2008. Tax Administration: The Internal Revenue Service Can Improve Its Management of Paper Case Files. GAO-07-1160. Washington, D.C.: September 28, 2007. Tax Gap: A Strategy for Reducing the Gap Should Include Options for Addressing Sole Proprietor Noncompliance. GAO-07-1014. Washington, D.C.: July 13, 2007. Using Data from the Internal Revenue Service’s National Research Program to Identify Potential Opportunities to Reduce the Tax Gap. GAO-07-423R. Washington, D.C.: March 15, 2007. Tax Compliance: Multiple Approaches Are Needed to Reduce the Tax Gap. GAO-07-488T. Washington, D.C.: February 16, 2007. Tax Compliance: Multiple Approaches Are Needed to Reduce the Tax Gap. GAO-07-391T. Washington, D.C.: January 23, 2007. Tax Compliance: Opportunities Exist to Reduce the Tax Gap Using a Variety of Approaches. GAO-06-1000T. Washington, D.C.: July 26, 2006. Tax Compliance: Challenges to Corporate Tax Enforcement and Options to Improve Securities Basis Reporting. GAO-06-851T. Washington, D.C.: June 13, 2006. 
Capital Gains Tax Gap: Requiring Brokers to Report Securities Cost Basis Would Improve Compliance if Related Challenges Are Addressed. GAO-06-603. Washington, D.C.: June 13, 2006. Tax Gap: Making Significant Progress in Improving Tax Compliance Rests on Enhancing Current IRS Techniques and Adopting New Legislative Actions. GAO-06-453T. Washington, D.C.: February 15, 2006. Tax Gap: Multiple Strategies, Better Compliance Data, and Long-Term Goals Are Needed to Improve Taxpayer Compliance. GAO-06-208T. Washington, D.C.: October 26, 2005. Tax Compliance: Better Compliance Data and Long-term Goals Would Support a More Strategic IRS Approach to Reducing the Tax Gap. GAO-05-753. Washington, D.C.: July 18, 2005. Tax Compliance: Reducing the Tax Gap Can Contribute to Fiscal Sustainability but Will Require a Variety of Strategies. GAO-05-527T. Washington, D.C.: April 14, 2005. Tax Administration: IRS Is Implementing the National Research Program as Planned. GAO-03-614. Washington, D.C.: June 16, 2003. Tax Administration: New Compliance Research Effort Is on Track, but Important Work Remains. GAO-02-769. Washington, D.C.: June 27, 2002. Tax Administration: Status of IRS’ Efforts to Develop Measures of Voluntary Compliance. GAO-01-535. Washington, D.C.: June 18, 2001.
As part of its most recent estimate of the tax gap, for tax year 2001, the Internal Revenue Service (IRS) estimated that individuals underreported taxes related to their rental real estate activities by as much as $13 billion. Given the magnitude of underreporting, even small improvements in taxpayer compliance could result in substantial revenue. GAO was asked to provide information on rental real estate reporting compliance. This report (1) provides information on the extent and primary types of taxpayer misreporting of rental real estate activities and (2) identifies challenges IRS faces in ensuring compliance and assesses options for increasing compliance. For estimates of taxpayer misreporting, GAO analyzed a probability sample of examination cases for tax year 2001 from IRS's most recent National Research Program (NRP) study of individual taxpayer compliance. At least an estimated 53 percent of individual taxpayers with rental real estate misreported their rental real estate activities for tax year 2001, resulting in an estimated $12.4 billion of net misreported income. This amount of misreporting is understated because IRS knows it does not detect all misreporting during its NRP examinations and adjusts the amount of misreporting it detects to estimate the tax gap. Also, the rate of misreporting of rental real estate activity was substantially higher than for some other sources of income, such as wages, a disparity that undermines the fairness of the tax system. Misreporting of rental real estate expenses was the most common type of rental real estate misreporting. Limited third-party information reporting for rental real estate activity is among the challenges IRS faces in ensuring compliance for rental real estate reporting. 
While information reporting, such as financial institutions sending information to IRS about taxpayers' mortgage interest payments, improves compliance, it is not practical to implement and enforce broad, new information reporting requirements for rental real estate activities. However, improving existing information reporting requirements is one of various options that could improve compliance. For example, based on current law, whether rental real estate property owners must file information returns for certain expenses they incur depends on whether the owners' rental activities are considered a trade or business, but the law does not define how to make this determination. Another approach to improving compliance is to require taxpayers to report additional detail about their rental real estate activities on tax returns. For example, requiring taxpayers to report complete property address information, which GAO found that some taxpayers did not report, could help IRS address misreporting. Requiring additional detail on tax returns could also compel paid tax return preparers, used by about 80 percent of individual taxpayers who report rental real estate activity, to obtain more accurate information from taxpayers. Enhanced IRS guidance, such as on required recordkeeping, and additional IRS outreach to paid preparers and others about rental real estate misreporting could also improve compliance.
Background When the WTC buildings collapsed on September 11, 2001, an estimated 250,000 to 400,000 people in the vicinity were immediately exposed to a noxious mixture of dust, debris, smoke, and potentially toxic contaminants, such as pulverized concrete, fibrous glass, particulate matter, and asbestos. Those affected included people residing, working, or attending school in the vicinity of the WTC and emergency responders. In the days, weeks, and months that followed the attack, tens of thousands of responders were involved in some capacity. These responders included personnel from many federal, state, and NYC government agencies and private organizations, as well as volunteers. Health Effects A wide variety of physical and mental health effects have been observed and reported among people who were involved in rescue, recovery, and cleanup operations and among those who lived and worked in the vicinity of the WTC buildings. Physical health effects included injuries and respiratory conditions, such as sinusitis, asthma, and a new syndrome called WTC cough, which consists of persistent coughing accompanied by severe respiratory symptoms. Almost all firefighters who responded to the attack experienced respiratory effects, including WTC cough. One study suggested that exposed firefighters on average experienced a decline in lung function equivalent to that which would be produced by 12 years of aging. Commonly reported mental health effects among responders and other affected individuals included symptoms associated with post-traumatic stress disorder (PTSD), depression, and anxiety. Behavioral health effects such as alcohol and tobacco use have also been reported. Some health effects experienced by responders have persisted or worsened over time, leading many responders to begin seeking treatment years after September 11, 2001. 
Clinicians involved in screening, monitoring, and treating responders have found that many responders’ conditions—both physical and psychological—have not resolved and have developed into chronic disorders that require long-term monitoring. For example, findings from a study conducted by clinicians at the NY/NJ WTC Consortium show that at the time of examination, up to 2.5 years after the start of the rescue and recovery effort, 59 percent of responders enrolled in the program were still experiencing new or worsened respiratory symptoms. Experts studying the mental health of responders found that about 2 years after the WTC attack, responders had higher rates of PTSD and other psychological conditions compared to others in similar jobs who were not WTC responders. Clinicians also anticipate that other health effects, such as immunological disorders and cancers, may emerge over time. Clinicians at the FDNY WTC program found an increased incidence of sarcoid-like pulmonary disease involving inflammation of the lungs. Of 26 cases of this sarcoid-like pulmonary disease, 13 cases were identified during the first year after the WTC attack and 13 cases were found during the next 4 years. Overview of WTC Health Programs There are six key programs that currently receive federal funding to provide voluntary health screening, monitoring, or treatment at no cost to responders. The six WTC health programs, shown in table 1, are (1) the FDNY WTC Medical Monitoring and Treatment Program; (2) the NY/NJ WTC Consortium, which comprises five clinical centers in the NY/NJ area; (3) the WTC Federal Responder Screening Program; (4) the WTC Health Registry; (5) Project COPE; and (6) the POPPA program. The programs vary in aspects such as the HHS administering agency or component responsible for administering the funding; the implementing agency, component, or organization responsible for providing program services; eligibility requirements; and services. 
Each program uses a variety of approaches, such as Web sites, toll-free numbers, and community forums, to conduct outreach to eligible populations. The WTC health programs that are providing screening and monitoring are tracking thousands of individuals who were affected by the WTC disaster. As of June 2007, the FDNY WTC program had screened about 14,500 responders and had conducted follow-up examinations for about 13,500 of these responders, while the NY/NJ WTC Consortium had screened about 20,000 responders and had conducted follow-up examinations for about 8,000 of these responders. Some of these responders include nonfederal responders residing outside the NYC metropolitan area. As of June 2007, the WTC Federal Responder Screening Program had screened 1,305 federal responders and referred 281 responders for employee assistance program services or specialty diagnostic services. In addition, the WTC Health Registry, a monitoring program that does not provide in-person screening or monitoring, but consists of periodic surveys of self-reported health status and related studies, collected baseline health data from over 71,000 people who enrolled in the registry. In the winter of 2006, the Registry began its first adult follow-up survey, and as of June 2007, over 36,000 individuals had completed the follow-up survey. In addition to providing medical examinations, FDNY’s WTC program and the NY/NJ WTC Consortium have collected information for use in scientific research to better understand the health effects of the WTC attack and other disasters. The WTC Health Registry is also collecting information to assess the long-term public health consequences of the disaster. Clinicians who evaluate and treat responders to the WTC disaster told us they expect that research on health effects from the disaster will not only help researchers understand the health consequences, but also provide information on appropriate treatment options for affected individuals. 
Federal Funding and Coordination of WTC Health Programs Beginning in October 2001 and continuing through 2003, FDNY’s WTC program, the NY/NJ WTC Consortium, the WTC Federal Responder Screening Program, and the WTC Health Registry received federal funding to provide services to responders. This funding primarily came from appropriations to the Department of Homeland Security’s Federal Emergency Management Agency (FEMA), as part of the approximately $8.8 billion that the Congress appropriated to FEMA for response and recovery activities after the WTC disaster. FEMA entered into interagency agreements with HHS agencies to distribute the funding to the programs. For example, FEMA entered into an agreement with NIOSH to distribute $90 million appropriated in 2003 that was available for monitoring. FEMA also entered into an agreement with ASPR for ASPR to administer the WTC Federal Responder Screening Program. A $75 million appropriation to CDC in fiscal year 2006 for purposes related to the WTC attack resulted in additional funding for the monitoring activities of the FDNY WTC program, NY/NJ WTC Consortium, and the Registry. The $75 million appropriation to CDC in fiscal year 2006 also provided funds that were awarded to the FDNY WTC program, NY/NJ WTC Consortium, Project COPE, and the POPPA program for treatment services for responders. An emergency supplemental appropriation to CDC in May 2007 included an additional $50 million to carry out the same activities provided for in the $75 million appropriation made in fiscal year 2006. The President’s proposed fiscal year 2008 budget for HHS includes $25 million for treatment of WTC-related illnesses for responders. In February 2006, the Secretary of HHS designated the Director of NIOSH to take the lead in ensuring that the WTC health programs are well coordinated, and in September 2006 the Secretary established a WTC Task Force to advise him on federal policies and funding issues related to responders’ health conditions. 
The chair of the task force is HHS’s Assistant Secretary for Health, and the vice chair is the Director of NIOSH. The task force has two subcommittees, one examining finance issues (cost and financing of WTC-related health programs) and the other examining the scientific evidence on the health effects of the WTC disaster. The task force reported to the Secretary of HHS in early April 2007. WTC Federal Responder Screening Program Has Had Difficulties Ensuring the Availability of Screening Services and Is Not Designed to Provide Monitoring HHS’s WTC Federal Responder Screening Program has not ensured the uninterrupted availability of screening services for federal responders. Since the beginning of the program, the provision of screening examinations has been intermittent (see fig. 1). After the program resumed screening examinations in December 2005 and conducted them for about a year, HHS again placed the program on hold in January 2007. From January to May 2007, FOH, the program’s implementing agency, did not schedule screening examinations for federal responders. This interruption in service occurred because there was a change in the administration of the WTC Federal Responder Screening Program, and certain interagency agreements were not established in a timely way to keep the program fully operational. In late December 2006, ASPR and NIOSH signed an interagency agreement giving NIOSH $2.1 million to administer the WTC Federal Responder Screening Program. Subsequently, NIOSH and FOH needed to sign a new interagency agreement to allow FOH to continue to be reimbursed for providing screening examinations. It took several months for the agreement between NIOSH and FOH to be negotiated and approved. After both agencies signed the agreement, FOH resumed scheduling screening examinations for federal responders in May 2007. At that time, there were 28 federal responders waiting to be scheduled for screening examinations. 
The WTC Federal Responder Screening Program’s provision of specialty diagnostic services has also been intermittent. The health effects experienced by responders often result in a need for diagnostic services by ear, nose, and throat doctors; cardiologists; and pulmonologists. When these diagnostic services are needed after the initial screening examination, FOH refers responders to these specialists and pays for the services. The WTC Federal Responder Screening Program stopped scheduling and paying for these specialty diagnostic services for almost a year, from April 2006 to March 2007. This occurred because in April 2006, FOH contracted with a new provider network to provide various services for federal employees, such as immunizations and vision tests. The contract with the new provider network did not cover specialty diagnostic services by ear, nose, and throat doctors; cardiologists; and pulmonologists. Although the previous provider network had provided these services, the new provider network and the HHS contract officer interpreted the statement of work in the new contract as not including these specialty diagnostic services. FOH was therefore unable to pay for these services for federal responders and stopped scheduling them in April 2006. Almost a year later, in March 2007, FOH modified its contract with the provider network and resumed scheduling and paying for specialty diagnostic services for federal responders. FOH estimated that at that time, 104 responders were waiting for appointments for these services. The WTC Federal Responder Screening Program was designed to provide a onetime screening examination; however, NIOSH officials told us they want to expand the program to offer monitoring examinations—that is, follow-up physical and mental health examinations—to federal responders. 
Clinicians involved in the monitoring of responders have noted the need for long-term monitoring because some possible health effects, such as cancer, may not appear until many years after a person has been exposed to a harmful agent. NIOSH officials have said that to expand the WTC Federal Responder Screening Program to include monitoring, NIOSH would need to secure funding and determine who would provide the monitoring services. A NIOSH official told us that one option for funding would be for NIOSH to use some of the $2.1 million of the existing FEMA-ASPR funding to have the WTC Federal Responder Screening Program include monitoring. For this to happen, the NIOSH official said, FEMA, which originally provided the funding to ASPR to establish the program, would have to agree to change the scope of the program. In February 2007, NIOSH sent a letter to FEMA asking whether the funding for the program could be provided directly to NIOSH and whether the funding could be used to support monitoring in addition to the onetime screening examination the program currently offers, but as of June 2007, NIOSH had not received a response from FEMA. NIOSH officials told us that if FEMA does not agree to this arrangement, NIOSH will consider using other funding to pay for monitoring. According to a NIOSH official, if NIOSH either reaches a new agreement with FEMA or decides to pay for monitoring of federal responders itself, NIOSH would have to either negotiate a new agreement with FOH to provide monitoring, which FOH officials said they would consider doing, or make arrangements with another program, such as the NY/NJ WTC Consortium, to provide monitoring.
NIOSH Has Not Ensured the Availability of Services for Nonfederal Responders Residing outside the NYC Metropolitan Area

NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area, although it recently took steps toward expanding the availability of these services. NIOSH made two initial efforts to provide screening and monitoring services for these responders. The first effort, in which NIOSH arranged for AOEC to provide screening services, began in late 2002 and ended in July 2004. From August 2004 until June 2005, NIOSH did not fund any organization to provide services to nonfederal responders outside the NYC metropolitan area. In June 2005, NIOSH began its second effort by awarding funds to Mount Sinai’s DCC to provide both screening and monitoring services. However, DCC had difficulty establishing a network of providers that could serve nonfederal responders residing throughout the country. In early 2006, NIOSH began exploring how to establish a broader national program that would provide screening and monitoring services, as well as treatment, for nonfederal responders residing outside the NYC metropolitan area. However, these efforts are incomplete. In May 2007, NIOSH and DCC arranged for a national network of providers to screen and monitor nonfederal responders, and a pilot program consisting of 20 examinations was scheduled to begin in summer 2007.

NIOSH’s Initial Efforts to Provide Screening and Monitoring Services for Nonfederal Responders Residing outside the NYC Area Did Not Ensure Availability of These Services

In November 2002, NIOSH began its first effort to provide services for nonfederal responders outside the NYC metropolitan area. The exact number of these responders is unknown.
NIOSH awarded a contract for about $306,000 to the Mount Sinai School of Medicine to provide screening services for nonfederal responders residing outside the NYC metropolitan area and directed it to establish a subcontract with AOEC. AOEC then subcontracted with 32 of its member clinics across the country to provide screening services. For its part, AOEC was responsible for establishing a network of providers nationwide through its member clinics, referring nonfederal responders to the AOEC member clinics for screening examinations, working with Mount Sinai to determine responders’ program enrollment eligibility, ensuring proper billing, and reimbursing its member clinics for services. From February 2003 to July 2004, the 32 AOEC member clinics screened 588 nonfederal responders nationwide. An AOEC official told us AOEC experienced challenges in providing the screening services nationwide through its member clinics. This official said, for example, that many nonfederal responders—especially those residing in rural areas—did not enroll in the program because they did not live near an AOEC member clinic. In addition, the process to reimburse AOEC member clinics for clinical examinations required substantial coordination among AOEC, AOEC member clinics, and Mount Sinai. After a nonfederal responder was examined by an AOEC member clinic, Mount Sinai had to review the responder’s medical records and determine that all aspects of the examination were completed before AOEC could issue a payment to its member clinic. From August 2004 until June 2005, NIOSH did not fund any organization to provide screening or monitoring services outside the NYC metropolitan area for nonfederal responders. Mount Sinai’s subcontract with AOEC to provide screening services ended in July 2004 when NIOSH was establishing cooperative agreements to provide both screening and monitoring services for nonfederal responders nationwide. 
A NIOSH official told us that from July 2004 until June 2005, NIOSH focused on providing screening and monitoring services for nonfederal responders in the NYC metropolitan area because the majority of nonfederal responders reside there. NIOSH had requested applications from organizations to provide both screening and monitoring services for nonfederal responders and awarded funds to the FDNY WTC program and NY/NJ WTC Consortium to provide these services in the NYC metropolitan area. AOEC applied to use its national network of member clinics to provide screening and monitoring for nonfederal responders residing outside the NYC metropolitan area, but NIOSH rejected AOEC’s application. AOEC was the only organization that applied to provide screening and monitoring services to these responders. In June 2005, NIOSH began its second effort to provide services for nonfederal responders residing outside the NYC metropolitan area. Specifically, NIOSH awarded about $776,000 to DCC to coordinate the provision of screening and monitoring services for these responders. DCC spent about $387,000 of these funds on providing screening and monitoring services for these responders. In June 2006, NIOSH awarded an additional $788,000 to DCC to provide screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. According to a NIOSH official, DCC budgeted about $393,000 of the $788,000 for providing these services, and received approval from NIOSH to redirect the remaining amount ($395,000) for other purposes. NIOSH officials told us that they assigned DCC the task of providing screening and monitoring services to nonfederal responders outside the NYC metropolitan area because the task was consistent with DCC’s responsibilities for the NY/NJ WTC Consortium, which include data monitoring and coordination. 
DCC, however, had difficulty establishing a network of providers that could serve nonfederal responders residing throughout the country—ultimately contracting with only 10 clinics in 7 states to provide screening and monitoring services. DCC officials said that as of June 2007, the 10 clinics were monitoring 180 responders. According to a NIOSH official, there have been several challenges involved in establishing a network of providers to screen and monitor nonfederal responders nationwide. These include establishing contracts with clinics that have the occupational health expertise to provide services nationwide, establishing patient data transfer systems that comply with applicable privacy laws, navigating the institutional review board process for a large provider network, and establishing payment systems with clinics participating in a national network of providers.

NIOSH Has Recently Taken Steps to Establish a National Program for Nonfederal Responders to Provide Screening, Monitoring, and Treatment Services, but Its Efforts Are Incomplete

Since 2006, NIOSH has been exploring how to establish a national program that would expand the availability of screening and monitoring services, as well as provide treatment services, to nonfederal responders residing outside the NYC metropolitan area. NIOSH officials have indicated that they would like to expand the availability of screening and monitoring services by establishing a network of providers with locations convenient to all nonfederal responders. NIOSH officials have also indicated that they would like to offer these responders the same set of services offered to nonfederal responders in the NYC metropolitan area—screening, monitoring, and treatment services. NIOSH has considered different approaches for this national program.
For example, in early 2006, NIOSH officials considered funding AOEC and its network of 50 member clinics to administer a national program and instructed DCC to discontinue efforts to establish new contracts with clinics nationwide. However, in February 2007, NIOSH officials decided that AOEC would not administer the national program. On March 15, 2007, NIOSH issued a formal request for information from organizations that have an interest in and the capability of developing a national program for responders residing outside the NYC metropolitan area. In this request, NIOSH described the scope of a national program as offering screening, monitoring, and treatment services to about 3,000 nonfederal responders through a national network of occupational health facilities. NIOSH also specified that the program’s facilities should be located within reasonable driving distance to responders and that participating facilities must provide copies of examination records to DCC. In May 2007, NIOSH took steps toward establishing the national program, but its efforts are incomplete. NIOSH approved a request from DCC to redirect about $125,000 from the June 2006 award to establish a contract with a company to provide screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. Subsequently, DCC contracted with QTC Management, Inc., one of the four organizations that had responded to NIOSH’s request for information. QTC has a network of providers located across all 50 states and the District of Columbia and will use internal medicine and occupational medicine doctors in its network to provide these services. In addition, QTC will identify and subcontract with providers outside of the QTC network to screen and monitor nonfederal responders who do not reside within 25 miles of a QTC provider. 
In June 2007, NIOSH awarded $800,600 to DCC for coordinating the provision of screening and monitoring examinations, and QTC will receive a portion of this award from DCC to provide about 1,000 screening and monitoring examinations through May 2008. According to DCC officials, they are working with QTC to establish examination protocols and administrative systems needed to begin conducting screening and monitoring examinations, and they will begin a pilot program consisting of 20 examinations in summer 2007. DCC’s contract with QTC does not include treatment services, and NIOSH officials are still exploring how to provide and pay for treatment services for nonfederal responders residing outside the NYC metropolitan area.

CDC’s NIOSH Awarded Funding for Treatment Services to Four WTC Health Programs, but Does Not Have a Reliable Estimate of Service Costs

In fall 2006, CDC’s NIOSH awarded $44 million to four programs in the NYC metropolitan area for providing outpatient treatment services to responders. Officials from the FDNY WTC program and NY/NJ WTC Consortium used some of the funds to provide full coverage for prescription medications. NIOSH also set aside $7 million for the FDNY WTC program and NY/NJ WTC Consortium to provide inpatient hospital care. Officials from these programs expect that the funds they received from NIOSH for outpatient services will be spent by the end of fiscal year 2007. NIOSH has worked with two of its grantees to estimate the cost of monitoring and treating responders; however, the most recent effort, in 2007, did not produce reliable results because the estimate included potential costs for certain program changes that may not be implemented, as well as erroneously included costs that reduced the estimate’s accuracy. In addition, in the absence of actual treatment cost data, the estimate was based in part on questionable assumptions.
To improve the reliability of future cost estimates, HHS officials have required some of the WTC health programs to report detailed cost and treatment data.

NIOSH Awarded $44 Million in Outpatient Treatment Funding, Which Is Expected to Be Spent by End of Fiscal Year 2007, and Set Aside $7 Million for Hospital Care

In fall 2006, NIOSH awarded and set aside funds totaling $51 million from its $75 million appropriation for four WTC health programs in the NYC metropolitan area to provide treatment services to responders enrolled in these programs. Of the $51 million, NIOSH awarded about $44 million for outpatient services to the FDNY WTC program, the NY/NJ WTC Consortium, Project COPE, and the POPPA program. NIOSH made the largest awards to the two programs from which almost all responders receive medical services, the FDNY WTC program and NY/NJ WTC Consortium (see table 2). Officials from the FDNY WTC program and NY/NJ WTC Consortium expect funds they received from NIOSH for outpatient treatment services to be expended by the end of fiscal year 2007. In addition to the $44 million it awarded for outpatient services, NIOSH set aside about $7 million for the FDNY WTC program and NY/NJ WTC Consortium to pay for responders’ WTC-related inpatient hospital care as needed. The FDNY WTC program and NY/NJ WTC Consortium used their awards from NIOSH to continue providing treatment services to responders and to expand the scope of available treatment services. Before NIOSH made its awards for treatment services, the treatment services provided by the two programs were supported by funding from private philanthropies and other organizations. According to officials of the NY/NJ WTC Consortium, this funding was sufficient to provide only outpatient care and partial coverage for prescription medications.
The two programs used NIOSH’s awards to continue to provide outpatient services to responders, such as treatment for gastroesophageal reflux disease, upper and lower respiratory disorders, and mental health conditions. They also expanded the scope of their programs by offering responders full coverage for their prescription medications for the first time. A NIOSH official told us that some of the commonly experienced WTC conditions, such as upper airway conditions, gastrointestinal disorders, and mental health disorders, are frequently treated with medications that can be costly and may be prescribed for an extended period of time. According to an FDNY WTC program official, prescription medications are now the largest component of the program’s treatment budget. The FDNY WTC program and NY/NJ WTC Consortium also expanded the scope of their programs by paying for inpatient hospital care for the first time, using funds from the $7 million that NIOSH had set aside for this purpose. According to a NIOSH official, NIOSH pays for hospitalizations that have been approved by the medical directors of the FDNY WTC program and NY/NJ WTC Consortium through awards to the programs from the funds NIOSH set aside for this purpose. As of June 1, 2007, there had been 15 hospitalizations of responders, 13 referred by the NY/NJ WTC Consortium’s Mount Sinai clinic and 2 by the FDNY WTC program. Responders have received inpatient hospital care to treat, for example, asthma, pulmonary fibrosis, and severe cases of depression or PTSD. If not completely used by the end of fiscal year 2007, funds set aside for hospital care could be used for outpatient services. After receiving NIOSH’s funding for treatment services in fall 2006, the NY/NJ WTC Consortium ended its efforts to obtain reimbursement from health insurance held by responders with coverage.
Consortium officials told us that efforts to bill insurance companies involved a heavy administrative burden and were frequently unsuccessful, in part because the insurance carriers typically denied coverage for work-related health conditions on the grounds that such conditions should be covered by state workers’ compensation programs. However, according to officials from the NY/NJ WTC Consortium, responders trying to obtain workers’ compensation coverage routinely experienced administrative hurdles and significant delays, some lasting several years. Moreover, according to these program officials, the majority of responders enrolled in the program had limited or no health insurance coverage. According to a labor official, responders who carried out cleanup services after the WTC attack often did not have health insurance, and responders who were construction workers often lost their health insurance when they became too ill to work the number of days each quarter or year required to maintain eligibility for insurance coverage.

NIOSH and Its Grantees Have Estimated Costs of Providing Monitoring and Treatment Services, but These Efforts Have Not Produced a Reliable Estimate

NIOSH has worked with two of its grantees—the FDNY WTC program and NY/NJ WTC Consortium—to estimate the annual cost of monitoring and treating responders. In December 2006, the agency and its grantees estimated that the annual cost of monitoring and treating responders enrolled in the FDNY WTC program and NY/NJ WTC Consortium, including associated program costs, was about $257 million. In January 2007, NIOSH revised the estimate to also include the cost of monitoring and treating responders enrolled in the WTC Federal Responder Screening Program and nonfederal responders residing outside the NYC metropolitan area who participate in the WTC health programs. The estimate did not include the cost of providing mental health treatment services through Project COPE and the POPPA program.
The January 2007 estimate projected that aggregate annual costs for providing monitoring and treatment services, along with associated program expenses, could be approximately $230 million or $283 million, depending on the number of responders who receive treatment services. To develop an estimate of outpatient treatment costs, which are generally higher than monitoring costs, NIOSH and its grantees projected the incidence of WTC-related health conditions among responders and the number of responders who would likely obtain treatment. Specifically, they projected that in a given year, 25 to 30 percent of participating responders will have aerodigestive (combined pulmonary and gastrointestinal) disorders that require treatment, 25 to 35 percent will have mental health disorders that require treatment, and 1 to 4 percent will have musculoskeletal disorders that require treatment. To estimate treatment costs for these conditions, NIOSH and its grantees multiplied the estimated per patient cost of providing outpatient services by the number of responders projected to need these services in a given year. They did not have actual cost data on these services because the WTC health programs had not been required to report such data when private organizations were funding the programs’ treatment services. In the absence of actual cost data, NIOSH and its grantees relied on workers’ compensation reimbursement rates for specific services as a proxy for outpatient treatment costs. They adjusted the proxy rates to reflect different treatment utilization levels—routine, moderate, or extensive outpatient care—and used their best judgment, based on experience, to distribute responders among the three treatment utilization levels.
Specifically, they used the proxy rates to represent moderate utilization, reduced the proxy rates by one-third to represent routine utilization, and increased the proxy rates by one-third to represent extensive outpatient care. Outpatient treatment costs were further adjusted to account for the differences in treatment protocols and medication costs at the FDNY WTC program and NY/NJ WTC Consortium. After estimating the cost of providing outpatient services, NIOSH and its grantees estimated other treatment-related expenses—inpatient care, medical monitoring, indirect costs, language translation, data analysis, and expenses incurred by NIOSH such as for travel and telephone service. They added these estimated expenses to the estimate for outpatient services to arrive at a total annual cost amount. Several factors reduced the reliability of the January 2007 estimate. It is unclear whether the overall estimate overstated or understated the costs of monitoring and treating responders. First, the estimate included potential costs that reflect certain program changes that may not be implemented. For example, when NIOSH and its grantees projected the cost of medically monitoring responders, the estimate assumed a more frequent monitoring interval, which has been discussed by program officials but has not been adopted. Similarly, they included costs for providing monitoring and treatment services to federal responders, who are not now eligible for such services. Second, NIOSH mistakenly included certain costs in the estimate. According to NIOSH officials, the estimate included a calculation for indirect costs associated with monitoring and treating responders. However, NIOSH officials later learned that the workers’ compensation reimbursement rates that were used as a proxy for outpatient treatment costs already contained an adjustment for indirect costs. As a result, total indirect costs were overstated. 
In addition, the estimate included the cost of monitoring services provided by the FDNY WTC program and NY/NJ WTC Consortium without taking into account that these services were already funded through mid-2009 by other NIOSH funds. Finally, in the absence of actual data on the cost of providing treatment services, the estimate was based in part on two questionable assumptions. First, NIOSH and its grantees assumed that adjusting the proxy rates up or down by one-third would account for the differences in treatment utilization levels, but no data supported the accuracy of such adjustments. As a result, it is unclear whether the projections have resulted in an overestimate or underestimate of treatment costs. Second, the assumption used to estimate the cost of medical monitoring was not consistent with the historical participation rates reported by the NY/NJ WTC Consortium. NIOSH and its grantees based the estimate on the assumption that every responder would keep his or her appointment for periodic medical monitoring. However, NY/NJ WTC Consortium officials told us that responders have kept scheduled appointments at a rate of only 50 to 60 percent.

HHS Officials Have Taken Steps to Develop More Reliable Cost Estimates

To improve the reliability of future efforts to estimate the cost of providing services to responders, NIOSH officials and the Assistant Secretary for Health—in his capacity as chairman of the HHS WTC Task Force—have required the FDNY WTC program and NY/NJ WTC Consortium to report detailed demographic, service utilization, and cost information. The information requested from each program includes the number of responders monitored and treated, diagnoses of responders monitored and treated, medical services provided and the cost of those services, and responders’ occupations and insurance coverage status.
These data are to be reported on a quarterly basis, and the first reports were received from the NY/NJ WTC Consortium in late February 2007 and from the FDNY WTC program in March 2007. These reports included data covering 2 quarters—July through September 2006, when treatment funding was provided by the American Red Cross, and October through December 2006, when treatment funding was provided by NIOSH and the American Red Cross. According to an HHS official who is a member of the HHS WTC Task Force, some of the cost reports submitted in February and March were incomplete and therefore did not provide sufficient information to support a reliable estimate of the annual cost of medical services provided by the WTC health programs. For example, some clinical centers submitted expense reports for only 1 quarter instead of 2. Furthermore, a NIOSH official told us that some of the data that were compiled manually were not accurate. According to the task force member, HHS will need at least 4 quarters of complete and accurate data before it can make reliable estimates. This means that HHS may not have the data needed to develop a reliable cost estimate until October 2008. NIOSH officials told us, however, that as they, the FDNY WTC program, and the NY/NJ WTC Consortium gain experience and as report data are automated, the quality of the data and the reliability of cost estimates will improve.

Conclusions

Screening and monitoring the health of the people who responded to the September 11, 2001, attack on the World Trade Center are critical for identifying health effects already experienced by responders or those that may emerge in the future. In addition, collecting and analyzing information produced by screening and monitoring responders can give health care providers information that could help them better diagnose and treat responders and others who experience similar health effects.
While some groups of responders are eligible for screening and follow-up physical and mental health examinations through the federally funded WTC health programs, other groups of responders are not eligible for comparable services or may not always find these services available. Federal responders are eligible only for the initial screening examination provided through the WTC Federal Responder Screening Program and are not eligible for federally funded follow-up monitoring examinations. In addition, many responders who reside outside of the NYC metropolitan area have not been able to obtain screening and monitoring services because available services are too distant. Moreover, HHS has repeatedly interrupted the programs it established for federal responders and nonfederal responders outside of NYC, resulting in periods when no services were available to them. HHS continues to fund and coordinate the WTC health programs and has key federal responsibility for ensuring the availability of services to responders. HHS and its agencies have recently taken steps to move toward providing screening and monitoring services to federal responders and to nonfederal responders living outside of the NYC area. However, these efforts are not complete, and the stop-and-start history of the department’s efforts to serve these groups does not provide assurance that the latest efforts to extend screening and monitoring services to these responders will be successful and will be sustained over time. Therefore, it is important for HHS to make a concerted effort, without further delay, to ensure that health screening and monitoring services are available to all people who responded to the attack on the World Trade Center, regardless of who their employer is or where they reside. 
Recommendations for Executive Action

To ensure that comparable screening and monitoring services are available to all responders, we are recommending that the Secretary of HHS expeditiously take two actions: (1) ensure that screening and monitoring services are available for federal responders and (2) ensure that screening and monitoring services are available for nonfederal responders residing outside of the NYC metropolitan area.

Agency Comments and Our Evaluation

HHS reviewed a draft of this report and provided comments, which are reprinted in appendix I. HHS also provided technical comments, which we incorporated as appropriate. HHS commented that overall, our report is an accurate and appropriate account of its activities and accomplishments concerning health services for responders to the WTC disaster. However, HHS stated that a reader who read only the summary information about the WTC Federal Responder Screening Program and services for nonfederal responders residing outside the NYC area in the Highlights and Results in Brief would likely come away with an inaccurate understanding of our findings. Where appropriate, we revised the language in the Highlights and Results in Brief to be consistent with the findings in our report. HHS also stated that our description of the services available to nonfederal responders residing outside the NYC metropolitan area did not acknowledge that over 60 percent of these responders have been examined by the DCC network or by AOEC. However, because the total number of nonfederal responders residing outside the NYC metropolitan area is unknown, we believe it is not possible to determine what percentage of these responders has been examined. In its comments, HHS raised concerns about our use of the terms HHS, CDC, and NIOSH with respect to their roles in particular activities. We modified the report where appropriate to clarify respective agency responsibilities.
Finally, HHS acknowledged that the estimate of the costs of monitoring and treating WTC responders was imprecise. HHS also noted, as we have reported, that the clinical centers of the NY/NJ WTC Consortium and the FDNY WTC program have begun submitting quarterly cost and treatment reports and that this information will be used to improve cost estimates. We believe this is an important step toward the development of a reliable estimate. HHS did not comment on our recommendations. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time we will send copies of this report to the Secretary of Health and Human Services, congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Health and Human Services

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, Helene F. Toiv, Assistant Director; George Bogart; Hernan Bozzolo; Frederick Caison; Anne Dievler; and Krister Friday made key contributions to this report.
Responders to the World Trade Center (WTC) attack were exposed to many hazards, and concerns remain about long-term health effects of the disaster and the availability of health care services for those affected. In 2006, GAO reported on problems with the Department of Health and Human Services' (HHS) WTC Federal Responder Screening Program and on the Centers for Disease Control and Prevention's (CDC) distribution of treatment funding. GAO was asked to update its 2006 testimony. GAO assessed the status of (1) services provided by the WTC Federal Responder Screening Program, (2) efforts by CDC's National Institute for Occupational Safety and Health (NIOSH) to provide services for nonfederal responders residing outside the New York City (NYC) area, and (3) NIOSH's awards to grantees for treatment services and efforts to estimate service costs. GAO reviewed program documents and interviewed HHS officials, grantees, and others. HHS's WTC Federal Responder Screening Program has had difficulties ensuring the uninterrupted availability of services for federal responders. From January 2007 to May 2007, the program stopped scheduling screening examinations because there was a change in the administration of the WTC Federal Responder Screening Program, and certain interagency agreements were not established in a timely way to keep the program fully operational. In April 2006 the program also stopped scheduling and paying for specialty diagnostic services because a contract with the program's new provider network did not cover these services. Almost a year later, the contract was modified, and the program resumed scheduling and paying for these services in March 2007. NIOSH is considering expanding the WTC Federal Responder Screening Program to include monitoring--follow-up physical and mental health examinations--and is assessing options for funding and service delivery. 
If federal responders do not receive monitoring, health conditions that arise later may not be diagnosed and treated, and knowledge of the health effects of the WTC disaster may be incomplete. NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC area, although it recently took steps toward expanding the availability of these services. In late 2002, NIOSH arranged for a network of occupational health clinics to provide screening services. This effort ended in July 2004, and until June 2005, NIOSH did not fund screening or monitoring services for nonfederal responders outside the NYC area. In June 2005, NIOSH funded the Mount Sinai School of Medicine Data and Coordination Center (DCC) to provide screening and monitoring services; however, DCC had difficulty establishing a nationwide network of providers and contracted with only 10 clinics in 7 states. In 2006, NIOSH began to explore other options for providing these services, and in May 2007, it took steps toward expanding the provider network. However, these efforts are incomplete. NIOSH has awarded treatment funds to four NYC-area programs, but does not have a reliable cost estimate of serving responders. In fall 2006, NIOSH awarded $44 million for outpatient treatment and set aside $7 million for hospital care. The New York/New Jersey WTC Consortium and the New York City Fire Department WTC program, which received the largest awards, used NIOSH's funding to continue outpatient services, offer full coverage for prescriptions, and cover hospital care. Program officials expect that NIOSH's outpatient treatment awards will be spent by the end of fiscal year 2007. NIOSH lacks a reliable estimate of service costs because the estimate that NIOSH and its grantees developed included potential costs for certain program changes that may not be implemented, and in the absence of actual treatment cost data, they relied on questionable assumptions. 
It is unclear whether the estimate overstates or understates the cost of serving responders. To improve future cost estimates, HHS officials have required the two largest grantees to report detailed cost data.
Background The distribution of and payment for prescription drugs involves interactions among multiple entities. These entities include drug wholesalers, independent pharmacies, PSAOs, and third-party payers and their PBMs. Interactions among these entities facilitate the flow of and payment for drugs from manufacturers to consumers. Drug Wholesalers Drug wholesalers (hereafter referred to as wholesalers) purchase bulk quantities of drugs from pharmaceutical manufacturers and then distribute them to pharmacies, including independent pharmacies. For example, a wholesaler may fill an order from an independent pharmacy for a specified quantity of drugs produced by manufacturers and deliver the order to the pharmacy. In addition to supplying drugs, some wholesalers offer ancillary services to independent pharmacies such as helping them manage their inventory. Three wholesalers—AmerisourceBergen Corporation, McKesson Corporation, and Cardinal Health Inc.—accounted for over 80 percent of all drug distribution revenue in the United States in 2011. Independent Pharmacies and PSAOs Independent pharmacies are a type of retail pharmacy with a store-based location—often in rural and underserved areas—that dispense medications to consumers, including both prescription and over-the-counter drugs. In this report, we define independent pharmacies as one to three pharmacies under common ownership. Approximately 21,000 independent pharmacies constituted almost 34 percent of the retail pharmacies operating in the United States in 2010. Although independent pharmacies offer other products such as greeting cards and cosmetics, prescription drugs account for the majority of independent pharmacy sales. These sales accounted for almost 17 percent of the $266 billion in prescription drug sales in the United States in 2010. In addition to products, independent pharmacies provide patient-care services such as patient education to encourage patients’ appropriate use of medications. 
According to a 2009 survey of pharmacists, independent pharmacies spend the majority of their time dispensing prescription drugs and providing patient-care services. Independent pharmacies primarily purchase drugs from wholesalers (although they may also purchase them directly from manufacturers) and represented slightly over 15 percent of wholesalers’ total sales to retail pharmacies in 2010. Independent pharmacies are an important part of a wholesaler’s customer portfolio because, in addition to purchasing drugs, independent pharmacies may also pay the wholesaler to provide logistical functions and ancillary services such as direct delivery of drugs to individual stores and inventory management. Thus, a wholesaler’s relationship with an independent pharmacy may result in multiple business opportunities for the wholesaler and administrative efficiencies for the pharmacy. After receipt of drugs from a wholesaler or manufacturer, pharmacies then fill and dispense prescriptions to consumers, such as health plan enrollees. These latter prescriptions are dispensed according to contractual terms agreed upon with each enrollee’s health plan, that is, with each third-party payer or its PBM. According to the National Community Pharmacists Association (NCPA), payments based on the contractual terms of third-party payers or their PBMs significantly affect the financial viability of independent pharmacies. Consequently, these pharmacies must carefully choose which contracts to accept or reject. Accordingly, most independent pharmacies rely on PSAOs to negotiate directly, or to make recommendations for negotiating contracts on their behalf with third-party payers or their PBMs. When a PSAO enters into a contract with a third-party payer or its PBM, the pharmacies in its network gain access to the third-party payer or PBM contract—and the individuals it covers—by virtue of belonging to the PSAO’s network. 
Third-Party Payers and PBMs Third-party payers accounted for almost 80 percent of drug expenditures in 2010, which represents a significant shift from 30 years ago when payment from individual consumers accounted for the largest portion of expenditures. Third-party payers include private and public health plans such as those offered by large corporations and the federal government through Medicare and the FEHBP, many of which use PBMs to help them manage their prescription drug benefits. As part of the management of these benefits, PBMs assemble networks of retail pharmacies, including independent pharmacies, where the health plan’s enrollees can fill prescriptions. A pharmacy becomes a member of a third-party payer’s or its PBM’s network by entering into an agreement with the third-party payer or its PBM. It does so either directly or through a PSAO that has negotiated with that third-party payer or its PBM on the pharmacy’s behalf. Contract terms and conditions may include specifics about reimbursement rates (how much the pharmacy will be paid for dispensed drugs), payment terms (e.g., the frequency with which the third-party payer or its PBM will reimburse the pharmacy for dispensed drugs), and audit provisions (e.g., the frequency and parameters of audits conducted by the third-party payer, its PBM, or designee), among other things. The reimbursement rate that third-party payers or their PBMs pay pharmacies significantly affects pharmacy revenues. Retail pharmacies participating in a PBM’s network are reimbursed for prescriptions below the level paid by cash-paying customers (those whose prescriptions are not covered by a third-party payer). In addition, pharmacies must undertake additional administrative tasks related to transactions for customers who are covered by third-party payers that are not required for cash-paying customer transactions. 
For example, for customers covered by third-party payers, pharmacy staff must file claims electronically and may be required to counsel them on their health plan’s benefits. However, most retail pharmacies participate in PBM networks because of the large market share PBMs command, which represents potential pharmacy customers. The five largest PBMs operating in the first quarter of 2012 represented over 330 million individuals. Pharmacies also benefit from the prescription and nonprescription sales generated by customers that PBMs help bring into their stores. (See fig. 1 for a diagram of the network of entities in the distribution of and payment for pharmaceuticals.) At Least 22 PSAOs Contracted with over 20,000 Pharmacies in 2011 or 2012, the Majority of Which Were Independent Pharmacies At least 22 PSAOs, which varied in the number and location of pharmacies to which they provided services, were in operation in 2011 or 2012. In total, depending on different data sources, these 22 PSAOs represented or provided other services to between 20,275 and 28,343 pharmacies in 2011 or 2012. (See table 1.) The number of pharmacies contracted with each PSAO across these sources ranged from 24 to 5,000 pharmacies; however, according to NCPDP data most contracted with fewer than 1,000 pharmacies. The largest 5 PSAOs combined contracted with more than half of all pharmacies that were represented by a PSAO in 2011 or 2012. Because pharmacies may change their PSAO, the number of pharmacies contracting with each PSAO fluctuates as PSAOs enroll and disenroll pharmacies. For example, according to one PSAO, member pharmacies will change PSAOs whenever they think that another PSAO can negotiate better contract terms with third-party payers or their PBMs. Some PSAOs contracted primarily with pharmacies located in a particular region. These PSAOs generally represented fewer pharmacies than PSAOs representing pharmacies across the United States. 
For example, the Northeast Pharmacy Service Corporation represented 250 independent pharmacies while the RxSelect Pharmacy Network represented from 451 to 569 independent pharmacies. According to NCPDP data, PSAOs provide services primarily to independent pharmacies. Of the 21,511 pharmacies associated with PSAOs in the 2011 NCPDP database, 18,103 were identified as independent pharmacies. These independent pharmacies represent nearly 75 percent of the total number of independent pharmacies in the 2011 NCPDP database. This is close to an estimate reported by NCPA and the HHS OIG, both of which conducted surveys in which approximately 80 percent of responding independent pharmacies were represented by PSAOs. In addition to independent pharmacies, some PSAOs also contracted with small chains and franchise pharmacy members. For example, Managed Care Connection provides services to small chain pharmacies ranging in size from 25 to 150 pharmacies under common ownership, and the Medicine Shoppe only offers its PSAO services to its franchise pharmacies. PSAOs Provide Independent Pharmacies with a Range of Services Intended to Achieve Administrative Efficiencies, and Most PSAOs Are Paid a Monthly Fee for These Services PSAOs provide a broad range of services to independent pharmacies including negotiating contractual agreements and providing communication and help-desk services. These and other services are intended to achieve administrative efficiencies for both independent pharmacies and third-party payers or their PBMs. Most PSAOs charge a monthly fee for a bundled set of services and separate fees for additional services. PSAOs Provide a Range of Services to Independent Pharmacies Including Contract Negotiation, Communication, and Help-Desk Services While PSAOs provide a broad range of services to independent pharmacies and vary in how they offer these services, we found that PSAOs consistently offer contract negotiation, communication, and help-desk services. 
Several entities, including industry experts, trade associations, and PSAOs we spoke with, referred to one or all of these services as a PSAO’s “key service(s)”—meaning that a PSAO can be distinguished from other entities in the pharmaceutical industry by its provision of these services. In addition, PSAOs may provide many other services that assist their member pharmacies—the majority of which are independent pharmacies—in interacting with third-party payers or their PBMs, although those PSAOs we spoke with did not provide these other services as consistently as their key services. On behalf of pharmacies, PSAOs may negotiate and enter into contracts with third-party payers or their PBMs. Both the HHS OIG and an industry study reported that small businesses such as independent pharmacies generally lack the legal expertise and time to adequately review and negotiate third-party payer or PBM contracts, which can be lengthy and complex. All of the model agreements between PSAOs and independent pharmacies that we reviewed indicated, and all of the PSAOs we spoke with stated, that the PSAO was explicitly authorized to negotiate and enter into contracts with third-party payers on behalf of member pharmacies. By signing the agreement with the PSAO, a member pharmacy acknowledges and agrees that the PSAO has the right to negotiate contracts with third-party payers or their PBMs on its behalf. PSAOs we spoke with had different processes for negotiating and entering into contracts with third-party payers or their PBMs. These processes included following guidance or parameters established by a governing body such as a board of directors composed partially or entirely of representatives from the PSAO’s member pharmacies. 
In addition, some PSAOs’ decisions about entering into contracts are made by their contracting department or executive staff that base the decision on factors such as analyses of the contract’s proposed reimbursement rate and the efficiencies and value that the PSAO’s member pharmacies would provide to the particular market in which the contracts are offered. Decisions about entering into contracts may also include consultation with a PSAO’s advisory board composed of representatives from the PSAO’s member pharmacies. While PSAOs may review and negotiate a wide range of contract provisions, PSAOs we spoke with reported negotiating a variety of provisions including reimbursement rates, payment terms, audits of pharmacies by third-party payers or their PBMs, price updates and appeals, and administrative requirements. Regarding these contract areas, PBMs and PSAOs we spoke with reported that audits and reimbursement rates were of particular concern to pharmacies. One PSAO reported that its negotiations about a contract’s audit provisions were intended to minimize member pharmacies’ risks and burdens as audit provisions can include withholding reimbursement on the basis of audit findings. In addition, according to some PSAOs that we spoke with, reimbursement rates to pharmacies have decreased over time, and PSAOs and other sources we spoke with reported that PSAOs’ ability to negotiate reimbursement rates has also decreased over time. Over half of the PSAOs we spoke with reported having little success in modifying certain contract terms as a result of negotiations. This may be due to PBMs’ use of standard contract terms and the dominant market share of the largest PBMs. Many PBM contracts contain standard terms and conditions that are largely nonnegotiable. 
According to one PSAO, this may be particularly true for national contracts, in which third-party payers or their PBMs have set contract terms for all pharmacies across the country that opt into the third-party payer’s or its PBM’s network. For example, a national contract exists for some federal government programs, such as TRICARE. In addition, several sources told us that the increasing consolidation of entities in the PBM market has resulted in a few PBMs having large market shares, which has diminished the ability of PSAOs to negotiate with them, particularly over reimbursement rates. In contrast, PBMs we spoke with reported that PSAOs can and do negotiate effectively. PBMs and PSAOs reported that several factors may affect negotiations in favor of PSAOs and their members, including the number and location of pharmacies represented and the services provided by those pharmacies in relation to the size and needs of the third-party payer or its PBM. For example, a third-party payer or its PBM may be more willing to modify its contract terms in order to sign a contract with a PSAO that represents pharmacies in a rural area in order to expand the PBM’s network in that area. In addition, a third-party payer or its PBM may be more willing to negotiate in order to add pharmacies in a PSAO’s network that offer a specialized service such as diabetes care needed by a health plan’s enrollees. One PSAO also reported that small PBMs wishing to increase their network’s size may be more willing to negotiate contract terms. We found that PSAOs vary in their requirements for their member pharmacies. Two PSAO-pharmacy model agreements that we reviewed stated that member pharmacies must participate in all contracts into which the PSAO entered on behalf of members. These PSAOs and six additional PSAOs we spoke with reported that their member pharmacies must participate in all contracts between the PSAO and third-party payers or their PBMs. 
The remaining two PSAOs we spoke with reported that they build a portfolio of contracts from which member pharmacies can choose. These PSAOs negotiate contracts with various third-party payers or their PBMs, and member pharmacies review the terms and conditions of each contract and select specific contracts to enter into. Most of the PSAO-pharmacy model agreements we reviewed contained provisions expressly authorizing member pharmacies to contract with a third-party payer independent of the PSAO. Two additional PSAOs we spoke with confirmed that they do not restrict member pharmacies from entering into contracts independent of the PSAO. All three PBMs we spoke with confirmed that pharmacies may contract with them if their PSAO did not sign a contract with them on the pharmacies’ behalf. PSAOs serve as a communication link between member pharmacies and third-party payers or their PBMs. Such communication may include information regarding contractual and regulatory requirements as well as general news and information of interest to pharmacy owners. All of the PSAOs we spoke with provided communication services to pharmacies such as reviewing PBMs’ provider manuals to make member pharmacies aware of their contents. Communication with pharmacies was provided by means of newsletters and the PSAOs’ Internet sites. In addition to communicating contractual requirements, PSAOs may also communicate applicable federal and state regulatory updates. For example, one PSAO we spoke with told us that it provides its member pharmacies with regulatory updates from the Centers for Medicare & Medicaid Services by publishing this information in its newsletter. Another PSAO we spoke with provided regulatory analyses that included examining and briefing its member pharmacies on durable medical equipment accreditation requirements, and fraud, waste, and abuse training requirements. 
According to the PSAO, this was to ensure that its member pharmacies were taking the right steps to comply with applicable regulations. PSAOs provide general assistance to pharmacies and assistance with issues related to third-party payers and their PBMs such as questions about claims, contracting, reimbursement, and audits. PSAOs may provide such assistance by means of a help-desk (or customer service department) or a dedicated staff person. For example, one PSAO we spoke with reported that it provides general pharmacy support services to help pharmacies with any needs they may have in the course of operating their businesses. This PSAO also had a staff person responsible for providing support services to member pharmacies including answering their questions about claims and each contract’s reimbursement rate or payment methodology. A PSAO may also help a pharmacy identify why a certain claim was rejected. PSAOs provide many other services that assist member pharmacies in interacting with third-party payers or their PBMs. For example, PSAOs may provide services that help the pharmacy with payment from a third- party payer or its PBM, comply with third-party payer requirements, or develop services that make the pharmacy more appealing to third-party payers or their PBMs. (See table 2 for a list and description of these services.) The PSAOs we spoke with varied in their provision of these other services although 9 of the 10 PSAOs we spoke with provided central payment and reconciliation services or access to reconciliation vendors that provided the service. However, other services were not provided as consistently across PSAOs. For example, only 1 PSAO reported that it provided inventory management or front store layout assistance. PSAO services have changed over time to meet member pharmacies’ interests. In some cases, this has meant adding new services, while in other cases PSAOs have expanded existing services. 
Several PSAOs we spoke with reported adding services intended to increase cost efficiencies and member pharmacies’ revenues. For example, three PSAOs we spoke with reported that they began offering central pay services, and two of these PSAOs and an additional PSAO reported that they began offering reconciliation services. PSAOs we spoke with also reported expanding existing services. For example, one PSAO reported adding electronic funds transfers, while two other PSAOs reported that although they were already providing electronic funds transfers, they increased the frequency of transfers to five days per week. This increase was made to improve pharmacies’ cash flow by giving them quicker access to funds owed them by third-party payers or their PBMs. Two PSAOs we spoke with reported adding certification programs, particularly vaccination/immunization certification programs, because of the needs of third-party payers or their PBMs for this service to be provided through their network pharmacies. PSAOs Provide Services Intended to Achieve Administrative Efficiencies for Independent Pharmacies and Third-Party Payers or Their PBMs PSAOs provide services intended to achieve administrative efficiencies for both independent pharmacies and third-party payers or their PBMs. PSAO services enable pharmacy staff, including pharmacists, to focus on patient-care services rather than administrative issues that pharmacists may not have the time to address. PSAO services also reduce the resources that PBMs must direct toward developing and maintaining relationships with multiple independent pharmacies. PSAO services are intended to help independent pharmacies achieve efficiencies particularly in contract negotiation. For example, independent pharmacies and PBMs we spoke with told us that PSAO contract negotiation services eased their contracting burden and allowed them to expand the number of entities with which they contracted. 
As members of a PSAO, pharmacies may no longer have to negotiate contracts with multiple third-party payers or their PBMs operating in any given market. Independent pharmacies also told us that PSAOs provide other services that create both administrative and cost efficiencies for them. For example, one pharmacist told us that the marketing services provided by his PSAO relieved him of advertising costs because the PSAO provided advertising circulars to its PSAO-franchise members. Another independent pharmacy reported that its PSAO provides services such as claims reconciliation less expensively than the pharmacy could perform on its own. Similar to independent pharmacies, PBMs we spoke with reported that PSAO services create administrative efficiencies for them, including efficiencies in contracting, payment, and their call centers. PSAO services create contracting efficiencies because they provide PBMs with a single point through which they can reach multiple independent pharmacies. For example, the PBMs we spoke with each had over 20,000 independent pharmacies in their networks; however, each PBM only negotiated contracts with 15 to 19 PSAOs, representing a majority of the pharmacies in its network. PBMs also reported that PSAO services create payment efficiencies when PSAOs provide central payment services. One PBM reported that instead of mailing checks to hundreds of individual pharmacies, the PBM made one electronic funds transfer to the PSAO, which then distributed the payments to its members. Finally, PBMs benefit from reduced call center volume because PSAOs often provide similar support directly to member pharmacies. For example, a call that may have gone to the PBM about a claim that was not paid may instead go to the pharmacy’s PSAO, which will help the pharmacy understand any issues with the claim. 
PSAOs may also aggregate member pharmacies’ issues and contact the PBM to discuss issues on behalf of multiple pharmacies and relay pertinent information back to those pharmacies. While creating efficiencies by acting on behalf of multiple pharmacies, PSAOs must ensure that their arrangements do not unreasonably restrain trade, thereby raising antitrust concerns. The FTC and the Antitrust Division of the Department of Justice (DOJ) are the federal agencies responsible for determining whether a particular collaborative arrangement may be unlawful and for enforcing applicable prohibitions. According to FTC officials, such a determination is dependent on multiple factors including the geographic region that a PSAO is operating in and the health care program (e.g., Medicare Part D) with which a PSAO is contracting. These factors affect the PSAO’s ability (and the abilities of the pharmacies the PSAO represents) to influence the terms of a contract or the pricing of a good. For example, a group of rural pharmacies may more effectively influence contract negotiations than a single pharmacy operating in an urban area with many competitors. PSAOs we spoke with were aware of potential antitrust issues and reported taking measures to minimize them. For example, two of the PSAOs we spoke with reported developing their PSAO’s organizational structure to ensure compliance with antitrust laws. Most PSAOs Charge a Monthly Fee for Bundled Services and Additional Fees for Other Services Although PSAOs’ charges to member pharmacies for their services may vary depending on how the services are provided, 8 of the 10 PSAOs we spoke with charged a monthly fee for a bundled set of services. For example, 1 PSAO charged $40 to $80 per month for a bundle of services that included contract negotiation, communication with member pharmacies, help-desk services, business advice, and limited audit support. 
In comparison, another PSAO’s monthly fee ranged from $59 to $149 per month depending on the combination of services that the pharmacy requested. One of the remaining PSAOs we spoke with charged an annual fee rather than a monthly fee, while the other PSAO did not charge any fees for its PSAO services. The latter PSAO provided PSAO services as a value-added service to members of its group purchasing organization, for which it charged a monthly fee. Other services that are offered by most, but not all, PSAOs we spoke with are either provided within the bundle or as separate add-on services. PSAOs may also charge fees for individual services that are based on the type or value of that service. Virtually all of the fees for PSAO services are paid for by member pharmacies. All of the PSAOs we spoke with reported that they did not receive any type of fees from other entities such as an administrative fee from a third-party payer or its PBM. Similarly, all of the PBMs we spoke with told us that they did not pay PSAOs for their services. However, 1 of the 3 PBMs we spoke with reported that it paid part of a pharmacy’s dispensing fee to 1 of the 16 PSAOs with which it contracted rather than to the pharmacy. Wholesalers and Independent Pharmacy Cooperatives Owned the Majority of PSAOs; Requirements to Use Non-PSAO Services Varied by Owner The majority of PSAOs in operation in 2011 or 2012 were owned by wholesalers and independent pharmacy cooperatives. PSAO owners varied as to whether they require member pharmacies to also use the non-PSAO services they offer. Wholesalers and Independent Pharmacy Cooperatives Owned the Majority of PSAOs Wholesalers and independent pharmacy cooperatives owned the majority of the PSAOs in operation in 2011 or 2012. 
Specifically, of the 22 PSAOs we identified, 9 PSAOs were owned by wholesalers, 6 were owned by independent pharmacy cooperatives (“member-owned”), 4 were owned by group purchasing organizations, and 3 were stand-alone PSAOs owned by other private entities. (See table 3.) Three of the 5 largest PSAOs were owned by the 3 largest wholesalers in the U.S.: AmerisourceBergen Corporation, Cardinal Health Inc., and McKesson Corporation. Across all sources included in our review, the PSAOs owned by these wholesalers represented 9,575 to 12,080 pharmacies, and PSAOs owned by independent pharmacies represented 4,883 to 8,882 pharmacies. According to one industry report, the services provided by member-owned PSAOs are similar to those offered by wholesaler-owned PSAOs. One PBM we spoke with noted that because of their financial backing, wholesaler-owned PSAOs generally offer central payment services more often than PSAOs owned by other types of entities. This strong financial backing is necessary to offer these services because there is considerable liability and risk in providing a central payment service. PSAO owners may operate PSAOs for a number of reasons, including to benefit another, non-PSAO line of their business. The wholesalers we spoke with provided various reasons for offering PSAO services, such as wanting to assist independent pharmacies in gaining access to third-party payer or PBM contracts, or to help pharmacies operate more efficiently. Additionally, one wholesaler noted that it created its PSAO because third-party payers and independent pharmacies indicated there was a need for PSAO services in the market. These third-party payers wanted a sole source for reaching multiple pharmacies, while independent pharmacies wanted a facilitator to assist them with reviewing third-party payer contracts. 
Other pharmaceutical entities noted that wholesalers may have an interest in developing relationships with independent pharmacies, which are potential customers of the wholesaler’s drug distribution line of business. By obtaining multiple services from a wholesaler, an independent pharmacy may be less likely to switch wholesalers. Additionally, by having their PSAOs assist independent pharmacies with entering multiple third-party payer or PBM contracts, wholesalers may benefit from the increased drug volume needed by independent pharmacies to serve the third-party payer’s or PBM’s enrollees. Other types of PSAO owners also provided a number of reasons for offering PSAO services. For instance, a number of these owners stated that they began offering PSAO services as a market-driven response to the growth of third-party payers and PBMs. In fact, one PSAO owner we spoke with stated that it reluctantly began offering these services at the request of customers of its group purchasing services, who wanted help navigating the issues and complexities of third-party payer and PBM contracting.

PSAO Owners Vary As to Whether They Require Member Pharmacies to Use Their Non-PSAO Services

The owners of PSAOs we spoke with varied as to whether they require PSAO member pharmacies to also use services from a separate non-PSAO line of their business. Of the nine PSAO owners we spoke with that had a separate non-PSAO line of business (e.g., drug distribution or group purchasing), six did not require their PSAO member pharmacies to use services from that non-PSAO line of business. Of the remaining three, one wholesaler-owned PSAO limited its offer of services to pharmacies that were existing customers of its drug distribution line of business, while two member-owned PSAOs reported requiring their PSAO member pharmacies to join their group purchasing organizations.
Officials from the wholesaler-owned PSAO stated that limiting the availability of PSAO services to existing customers ensures that their PSAO already has basic information about their member pharmacies and a salesperson who serves as each member pharmacy’s point of contact. While most PSAO owners do not require their member pharmacies to use services from their primary line of business, member pharmacies may choose to do so. In this case, a pharmacy must contract and pay for PSAO and other services separately. In fact, according to one wholesaler we spoke with, approximately 36 percent of its drug distribution customers were also members of its PSAOs. Pharmacies are generally required to submit an application to join a PSAO network. PSAO applications we reviewed requested information about the pharmacy’s licensing, services provided, and insurance. Additionally, applications asked pharmacies to indicate whether they had been investigated by the HHS OIG, had filed for bankruptcy, or had their pharmacy’s state license limited, suspended, or revoked. PSAOs stated they used the information provided in applications to verify that the pharmacies applying for membership in their network are licensed by their state and in good standing. Pharmacies may choose to obtain PSAO services from their wholesaler, but not all do so. Instead, some pharmacies may choose to join another PSAO. Most PSAOs we spoke with operated their PSAO separately from any non-PSAO line of business. For example, most wholesalers we spoke with stated their PSAO staff and drug distribution staff are distinct and do not interact. One of these wholesalers reported that its PSAO has a corporate structure, management team, sales organization, and financial component distinct from its drug distribution line of business. However, nearly all of the PSAO owners we spoke with operate their PSAO as a subsidiary of their non-PSAO line of business.
For example, two PSAOs were organized as subsidiaries of a member-owned buying group, while another PSAO operated as a branded service under the owner’s non-PSAO line of business. Most PSAO owners reported that PSAO services are not a profitable line of business. Only 1 of the 10 PSAO owners we spoke with stated that its PSAO service was profitable. Other PSAO owners reported little to no profit earned from the PSAO services they provided. For those PSAOs that are not profitable, the cost of operating them may be subsidized by the owner’s non-PSAO lines of business. As previously noted, offering PSAO services may benefit the owner’s non-PSAO line of business even if the PSAO service itself is not profitable. For example, one member-owned PSAO we spoke with also owned a group purchasing organization to which its members must belong in order to obtain PSAO services. The group purchasing organization may benefit from increased membership driven by pharmacies that want to obtain its PSAO services.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Chairman of the Federal Trade Commission, and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact John E. Dicken at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix I.
Appendix I: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contact named above, Rashmi Agarwal and Robert Copeland, Assistant Directors; George Bogart; Zhi Boon; Jennel Lockley; Laurie Pachter; and Brienne Tierney made key contributions to this report.
Independent pharmacies dispensed about 17 percent of all prescription drugs in the United States in 2010. To obtain, distribute, and collect payment for drugs dispensed, pharmacies interact with a network of entities, including drug wholesalers and third-party payers. With limited time and resources, independent pharmacies may need assistance in interacting with these entities, particularly with third-party payers that include large private and public health plans. Most use a PSAO to interact on their behalf. PSAOs develop networks of pharmacies by signing contractual agreements with each pharmacy that authorize the PSAO to interact with third-party payers on the pharmacy’s behalf by, for example, negotiating contracts. While the specific services provided by PSAOs may vary, PSAOs can be identified and distinguished from other entities in the pharmaceutical distribution and payment system by their provision of intermediary or other services to assist pharmacies with third-party payers. GAO was asked to review the role of PSAOs. In this report, GAO describes (1) how many PSAOs are in operation and how many pharmacies contract with PSAOs for services; (2) the services PSAOs offer and how they are paid for these services; and (3) the entities that own PSAOs and the types of relationships that exist between owners and the pharmacies they represent. GAO analyzed data on PSAOs in operation in 2011 and 2012, reviewed literature on PSAOs and model agreements from 8 PSAOs, and interviewed federal agencies and entities in the pharmaceutical industry. At least 22 pharmacy services administrative organizations (PSAO), which varied in the number and location of the pharmacies to which they provided services, were in operation in 2011 or 2012. In total, depending on different data sources, these PSAOs represented or provided other services to between 20,275 and 28,343 pharmacies in 2011 or 2012, most of which were independent pharmacies.
While the number of pharmacies with which each PSAO contracted ranged from 24 to 5,000 pharmacies, most PSAOs represented or provided other services to fewer than 1,000 pharmacies. Additionally, some PSAOs contracted with pharmacies primarily located in a particular region rather than contracting with pharmacies located across the United States. While PSAOs provide a broad range of services to independent pharmacies, and vary in how they offer these services, PSAOs consistently provide contract negotiation, communication, and help-desk services. All of the model agreements between PSAOs and independent pharmacies that GAO reviewed stated that the PSAO will negotiate and enter into contracts with third-party payers on behalf of member pharmacies. PSAOs may also contract with pharmacy benefit managers (PBM), which many third-party payers use to manage their prescription drug benefit. In addition to contracting, PSAOs also communicate information to members regarding contractual and regulatory requirements, and provide general and claims-specific assistance to members by means of a help-desk or a dedicated staff person. They may also provide other services to help member pharmacies interact with third-party payers or their PBMs, such as managing and analyzing payment and drug-dispensing data to identify claims unpaid or incorrectly paid by a third-party payer. PSAO services are intended to achieve administrative efficiencies, including contract and payment efficiencies for both independent pharmacies and third-party payers or their PBMs. Most PSAOs charge a monthly fee for a bundle of services and may charge additional fees for other services provided to their member pharmacies. Virtually all of the fees paid for PSAO services are paid by member pharmacies, with PSAOs receiving no administrative fees from other entities such as third-party payers or their PBMs.
The majority of PSAOs in operation in 2011 or 2012 were owned by drug wholesalers and independent pharmacy cooperatives. Of the 22 PSAOs GAO identified, 9 were owned by wholesalers, 6 were owned by independent pharmacy cooperatives, 4 were owned by group purchasing organizations, and 3 were stand-alone PSAOs owned by other private entities. These owners varied in their requirements for PSAO member pharmacies to also use services from their separate, non-PSAO line of business. Three PSAO owners GAO spoke with required PSAO members to also use their non-PSAO services. For example, one wholesaler-owned PSAO limited its offer of PSAO services to existing customers of its drug distribution line of business. All but one PSAO owner GAO spoke with reported that their PSAO line of business earned little to no profit. However, PSAO owners may operate PSAOs for a number of reasons, including helping pharmacies gain access to third-party payer contracts and providing benefits to the owner’s non-PSAO line of business.
Preliminary Observations on the Proposed Human Capital Regulations

DHS’s and OPM’s proposed regulations would establish a new human resources management system within DHS that covers pay, classification, performance management, labor relations, adverse actions, and employee appeals. These changes are designed to ensure that the system aligns individual performance and pay with the department’s critical mission requirements and protects the civil service rights of its employees. However, it is important to note at the outset that the proposed regulations do not apply to nearly half of all DHS civilian employees, including nearly 50,000 screeners in the Transportation Security Administration (TSA). DHS officials have noted that additional employees can be included through further administrative action, but that legislation would be needed to include other employees such as the screeners and the uniformed division of the Secret Service. We have found that having one performance management system framework facilitates unifying an organizational culture and is a key practice of a successful merger and transformation. Based on the department’s progress in implementing the system and any appropriate modifications made based on its experience, DHS should consider moving all of its employees under the new human capital system.

Pay and Performance Management

Today, Mr. Chairman and Madam Chairwoman, you are releasing a report that we prepared at your request that shows the variety of approaches that OPM’s personnel demonstration projects took to design and implement their pay for performance systems. Their experiences provide insights into how some organizations in the federal government are implementing pay for performance and thus can guide DHS as it develops and implements its own approach.
These demonstration projects illustrate that understanding how to link pay to performance is very much a work in progress in the federal government and that additional work is needed to ensure that performance management systems are tools that help agencies manage on a day-to-day basis and achieve external results. As we testified last spring when the Department of Defense (DOD) proposed its civilian personnel reform, from a conceptual standpoint, we strongly support the need to expand pay for performance in the federal government. Establishing a better link between individual pay and performance is essential if we expect to maximize the performance and ensure the accountability of the federal government for the benefit of the American people. However, how it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. The DHS proposal reflects a growing understanding that the federal government needs to fundamentally rethink its current approach to pay and better link pay to individual and organizational performance. To this end, the DHS proposal takes another valuable step toward results-oriented pay reform and modern performance management. My comments on specific provisions follow.

Linking Organizational Goals to Individual Performance

Under the proposed regulations, the DHS performance management system must, among other things, align individual performance expectations with the mission, strategic goals, or a range of other objectives of the department or of the DHS components. The proposed guidelines do not detail how such an alignment is to be achieved, a vital issue that will need to be addressed as DHS’s efforts move forward. Our work looking at public sector performance management efforts here in the United States as well as abroad has underscored the importance of aligning daily operations and activities with organizational results.
We have found that organizations often struggle with clearly understanding how what they do on a day-to-day basis contributes to overall organizational results. High performing organizations, on the other hand, understand how the products and services they deliver contribute to results by aligning performance expectations of top leadership with organizational goals and then cascading those expectations to lower levels. As an organization undergoing its own merger and transformation, DHS can use its revised performance management system as a vital tool for aligning the organization with desired results and creating a “line of sight” showing how team, unit, and individual performance can contribute to overall organizational results. To help DHS merge its various originating components into a unified department and transform its culture to be more results oriented, customer focused, and collaborative in nature, we reported at your request, Mr. Chairman and Madam Chairwoman, how a performance management system that defines responsibility and assures accountability for change can be key to a successful merger and transformation. While aligning individual performance expectations with DHS’s mission and strategic goals will be key to DHS’s effective performance management, it is important to note that DHS has not yet released its strategic plan, which may hamper creating the formal linkage to the performance management system and make it difficult to ensure that the proposed regulations support and facilitate the accomplishment of the department’s strategic goals and objectives.

Establishing Pay Bands

Under the proposed regulations, DHS would create broad pay bands for much of the department in place of the fifteen-grade General Schedule (GS) system now in place for much of the civil service. Specifically, DHS officials have indicated that they will form ten to fifteen occupational pay clusters of similar job types, such as a management or science and technology cluster.
Most of these occupational clusters would have four pay bands ranging from entry level to supervisor. Within each occupational cluster, promotion to another band (such as from full performance to senior expert) would require an assessment and/or competition. Under the proposed regulations, DHS is not to reduce employees’ basic rate of pay when converting to pay bands. In addition, the proposed regulations would allow DHS to establish a “control point” within a band, beyond which basic pay increases may be granted only for meeting criteria established by DHS, such as an outstanding performance rating. The use of control points can be a valuable tool because managing progression through the bands can help to ensure that employees’ performance coincides with their salaries and can help to prevent all employees from eventually migrating to the top of the band and thus increasing salary costs. The demonstration projects at both China Lake and the Naval Sea Systems Command Warfare Center’s (NAVSEA) Dahlgren Division have checkpoints or “speed bumps” in their pay bands designed to ensure that only the highest performers move into the upper half of the pay band. For example, when employees’ salaries at China Lake reach the midpoint of the pay band, they must receive a performance rating equivalent to exceeding expectations before they can receive additional salary increases. Pay banding and movement to broader occupational clusters can both facilitate DHS’s movement to a pay for performance system and help DHS to better define occupations, which can improve the hiring process. We have reported that the current GS system as defined in the Classification Act of 1949 is a key barrier to comprehensive human capital reform and that the creation of broader occupational job clusters and pay bands would aid other agencies as they seek to modernize their personnel systems.
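The control-point ("speed bump") rule described above, under which an employee whose salary has passed the band midpoint must earn a top rating to receive further increases, can be sketched as a simple check. The band boundaries and rating labels below are hypothetical illustrations; only the midpoint rule itself comes from the China Lake example.

```python
# Hypothetical sketch of a pay-band "control point" (or "speed bump")
# like the one described for China Lake. The band boundaries and
# rating labels are invented for illustration; only the midpoint
# rule comes from the demonstration project description.

def may_receive_increase(salary: float, band_min: float, band_max: float,
                         rating: str) -> bool:
    """Below the band midpoint, any satisfactory rating allows an
    increase; at or past the midpoint, only a rating equivalent to
    exceeding expectations (or better) does."""
    midpoint = (band_min + band_max) / 2
    if salary < midpoint:
        return rating != "unacceptable"
    return rating in ("exceeds expectations", "outstanding")

# An employee below the midpoint with a "meets expectations" rating
# can still advance; the same rating at or past the midpoint cannot.
print(may_receive_increase(50_000, 40_000, 80_000, "meets expectations"))
print(may_receive_increase(65_000, 40_000, 80_000, "meets expectations"))
```

The effect of such a rule is that salary growth in the upper half of the band is reserved for the strongest performers, which is how the projects keep salaries from drifting to the top of the band.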
The standards and processes of the current classification system are a key problem in federal hiring efforts because they are outdated and not applicable to the occupations and work of today. Many employees in agencies that are now a part of DHS responding to OPM’s 2002 Federal Human Capital Survey (FHCS) believe that recruiting is a problem: only 36 percent believe their work unit is able to recruit people with the right skills.

Setting Employee Performance Expectations

The DHS performance management system is intended to promote individual accountability by communicating performance expectations and holding employees responsible for accomplishing them, and by holding supervisors and managers responsible for effectively managing the performance of employees under their supervision. While supervisors are to involve employees as far as practicable in developing their performance expectations, and employees may seek clarification if they do not understand them, the final decision on an employee’s expectations rests within the supervisor’s sole and exclusive discretion. Supervisors must monitor the performance of their employees and provide periodic feedback, including one or more formal interim performance reviews during the appraisal period. The proposed regulations provide a general description of DHS’s performance management system with many important details to be determined. Under the proposed regulations, performance expectations may take the form of goals or objectives that set general or specific performance targets at the individual, team, and/or organizational level; a particular work assignment, including characteristics such as quality, accuracy, or timeliness; competencies an employee is expected to demonstrate on the job; and/or the contributions an employee is expected to make, among other things.
As DHS’s system design efforts move forward, it will need to define in further detail than currently provided how performance expectations will be established, including the degree to which DHS components, managers, and supervisors will have flexibility in setting those expectations. Nevertheless, the range of expectations that DHS will consider in setting individual employee performance expectations is generally consistent with those we see used by leading organizations. In addition, DHS appropriately recognizes that, given the vast diversity of work done in the department, managers and employees need flexibility in crafting specific expectations. However, the experiences of leading organizations suggest that DHS should reconsider its position to merely allow, rather than require, the use of core employee competencies as a central feature of DHS’s performance management efforts. Based on our review of others’ efforts and our own experience at GAO, core competencies can help reinforce employee behaviors and actions that support the department’s mission, goals, and values and can provide a consistent message to employees about how they are expected to achieve results. For example, the Civilian Acquisition Workforce Personnel Demonstration Project (AcqDemo), which covers various organizational units of the Air Force, Army, Navy, Marine Corps, and the Office of the Under Secretary of Defense, applies organizationwide competencies for all employees such as teamwork/cooperation, customer relations, leadership/supervision, and communication. More specifically, and consistent with leading practices for successful mergers and organizational transformation, DHS should use its performance management system as the basis for setting expectations for individual roles in its transformation process.
To be successful, transformation efforts, such as the one underway at DHS, must have leaders, managers, and employees who have the individual competencies to integrate and create synergy among the multiple organizations involved in the transformation effort. Individual performance and contributions can be evaluated on competencies such as change management, cultural sensitivity, teamwork and collaboration, and information sharing. Leaders, managers, and employees who demonstrate these competencies are rewarded for their success in contributing to the achievement of the transformation process. By including such competencies throughout its revised performance management system, DHS would create a shared responsibility for organizational success and help assure accountability for change.

Translating Employee Performance Ratings into Pay Increases and Awards

A stated purpose of DHS’s performance management system is to provide for meaningful distinctions in performance to support adjustments in pay, awards, and promotions. All employees who meet organizational expectations are to receive pay adjustments, generally to be made on an annual basis. In coordination with OPM, the pay adjustment is to be based on considerations of mission requirements, labor market conditions, availability of funds, pay adjustments received by other federal employees, and other factors. The pay adjustment may vary by occupational cluster or band. Employees who meet or exceed expectations are also eligible to receive a performance-based pay increase, either as an increase to base pay or a one-time award, depending on the employee’s performance rating. Employees with unacceptable ratings are not to receive the pay adjustment or a performance-based pay increase. The proposed regulations provide managers with a range of options for dealing with poor performers, such as remedial training, reassignment, and an improvement period, among other things.
In coordination with OPM, DHS may additionally set the boundaries of locality pay areas. Participants in the DHS focus groups expressed concerns regarding the shortcomings of the current locality pay system, including its impact on recruitment and retention. While the DHS proposal does not provide additional detail on how it would consider labor market conditions, its proposed approach is broadly consistent with the experiences of some of the demonstration projects that considered the labor market or the fiscal condition of the organization in determining how much to budget for pay increases. For example, NAVSEA’s Newport Division considers the labor market and uses regional and industry salary information compiled by the American Association of Engineering Societies when determining how much to set aside for pay increases and awards. In addition, the Newport Division is financed in part through a working capital fund and thus must take its fiscal condition into account when budgeting for pay increases and awards. Responding to higher salaries in the labor market, the Newport Division funded pay increases at a higher rate in fiscal year 2001 than in 2000. Conversely, in fiscal year 2002, the performance pay increase and award pools were funded at lower levels than in 2001 because of fiscal constraints. Under the proposed regulations, DHS would establish performance pay pools by occupational cluster and by band within each cluster, and may further divide them by unit and/or location. Performance-based pay would be based on “performance points,” whereby points correspond to rating levels. In an example used by DHS, for a four-level system, the point value pattern may be 4-2-1-0, where 4 points are assigned to the highest rating and 0 points to an unacceptable rating. While each pay pool has the option to use this point value pattern or another, DHS is to determine the value of a performance point.
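The arithmetic of such a point-based pay pool can be illustrated with a short sketch. The 4-2-1-0 point pattern comes from DHS’s example; the method of valuing a point (dividing the pool’s dollars by the total points awarded) is one common pay-pool approach and is assumed here, since the proposed regulations leave the valuation method to DHS.

```python
# Hypothetical illustration of a point-based performance pay pool.
# The 4-2-1-0 point pattern comes from DHS's example; valuing a point
# as (pool dollars / total points awarded) is an assumption, not a
# feature of the proposed regulations, which leave valuation to DHS.

POINTS_BY_RATING = {"outstanding": 4, "exceeds": 2, "meets": 1, "unacceptable": 0}

def distribute_pool(pool_dollars: float, ratings: dict) -> dict:
    """Split a pay pool among employees in proportion to their points."""
    points = {name: POINTS_BY_RATING[r] for name, r in ratings.items()}
    total = sum(points.values())
    point_value = pool_dollars / total if total else 0
    return {name: p * point_value for name, p in points.items()}

payouts = distribute_pool(
    70_000,
    {"a": "outstanding", "b": "exceeds", "c": "meets", "d": "unacceptable"},
)
# Total points = 4 + 2 + 1 + 0 = 7, so each point is worth $10,000:
# employee "a" receives $40,000 and "d" receives nothing.
```

Under this kind of scheme, the value of a point, and therefore each employee's payout, depends on how everyone else in the pool is rated, which is one reason rating consistency and oversight (discussed below) matter.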
The proposed regulations do not provide more detailed information on how ratings will be used for pay and promotions. Under the proposed regulations, DHS may not impose a quota on any rating level or a mandatory distribution of ratings. DHS would create a Performance Review Board (PRB) to review ratings in order to promote consistency and provide general oversight of the performance management system to ensure it is administered in a fair, credible, and transparent manner. DHS may, in turn, appoint as many review boards within the departmental components as it deems necessary to effectively carry out these intended functions and, when practicable, may include employees outside the organizational unit, occupation, and/or location of employees subject to review by the PRB. The proposed regulations do not offer additional details on other matters, such as the selection process for board members or their qualifications. Where circumstances warrant, the PRB may remand individual ratings for additional review and/or modify a rating. While much remains to be determined about how the DHS PRB will operate, we believe that the effective implementation of such a board is important to assuring that predecisional internal safeguards exist to help achieve consistency and equity and to assure nondiscrimination and nonpoliticization of the performance management process. The key will be to create a PRB that is independent of line management and that reviews such matters as the establishment and implementation of the performance appraisal system and, later, performance rating decisions, pay determinations, and promotion actions before they are finalized to ensure they are merit based. Several of the demonstration projects consider an employee’s current salary when making decisions on permanent pay increases and one-time awards, a procedure that is worth additional consideration in the proposed DHS regulations.
By considering salary in such decisions, the projects intend to make a better match between an employee’s compensation and his or her contribution to the organization. Thus, two employees with comparable contributions could receive different pay increases and awards depending on their current salaries. For example, at AcqDemo, supervisors recommend and pay pool managers approve employees’ “contribution scores.” Pay pool managers then plot contribution scores against the employees’ current salaries and a “standard pay line” to determine if employees are “appropriately compensated,” “under-compensated,” or “over-compensated,” given their contributions. As a result of this system, AcqDemo has reported that it has made progress in matching employees’ compensation to their contributions to the organization. From 1999 to 2002, appropriately compensated employees increased from about 63 percent to about 72 percent, under-compensated employees decreased from about 30 percent to about 27 percent, and over-compensated employees decreased from nearly 7 percent to less than 2 percent. A recent evaluation of AcqDemo by Cubic Applications, Inc. found that employees’ perceptions of the link between pay and contribution increased, from 20 percent reporting in 1998 that pay raises depend on their contribution to the organization’s mission to 59 percent in 2003.

Providing Adequate Safeguards to Ensure Fairness and Guard Against Abuse

According to the proposed regulations, the DHS performance management system must comply with the merit system principles and avoid prohibited personnel practices; provide a means for employee involvement in the design and implementation of the system; and, overall, be fair, credible, and transparent.
Last spring, when commenting on the DOD civilian personnel reforms, we testified that Congress should consider establishing statutory standards that an agency must have in place before it can implement a more performance-based pay program, and we developed an initial list of possible safeguards to help ensure that pay for performance systems in the government are fair, effective, and credible. While much remains to be defined, DHS is proposing actions that are generally consistent with these proposed safeguards. For example, as I noted previously, DHS plans to align individual performance management with organizational goals and provide for reasonableness reviews of performance management decisions through its PRB. Moreover, employees and their union representatives played a role in shaping the design of the proposed systems, as we previously reported. DHS should continue to build safeguards into its revised performance management system. For example, we noted that agencies need to assure reasonable transparency and provide appropriate accountability mechanisms in connection with the results of the performance management process. This can include publishing overall results of performance management and individual pay decisions while protecting individual confidentiality, and reporting periodically on internal assessments and employee survey results relating to the performance management system. DHS should commit to publishing the results of the performance management process. Publishing the results in a manner that protects individual confidentiality can provide employees with the information they need to better understand the performance management system. Several of the demonstration projects publish information for employees on internal Web sites about the results of performance appraisal and pay decisions, such as the average performance rating, the average pay increase, and the average award for the organization and for each individual unit.
Adverse Actions and Appeals

The DHS proposal is intended to streamline the employee adverse action process while maintaining an independent third-party review of most adverse actions. It is designed to create a single process for both performance-based and conduct-based actions, and it shortens the adverse action process by removing the requirement for a performance improvement plan and reducing other timeframes. The proposed regulations also adopt a lower standard of proof for adverse actions in DHS, requiring the agency to meet a standard of “substantial evidence” instead of a “preponderance of the evidence.” An independent review is to be retained by allowing employees to appeal to the Merit Systems Protection Board (MSPB). The appeals process at MSPB is, however, to be streamlined by shortening the time for filing and processing appeals. The proposal also encourages the use of Alternative Dispute Resolution (ADR). Retention of a qualified and independent third party to address employee appeals may be especially important in light of OPM’s FHCS results. Specifically, 38 percent of DHS respondents believe that complaints, disputes, or grievances are resolved fairly (lower than the governmentwide response of 44 percent), and 38 percent of DHS respondents perceive that arbitrary action, personal favoritism, and coercion for partisan political purposes are not tolerated (lower than the governmentwide response of 45 percent). Providing an avenue for an independent appeal can enhance employee trust of the entire human capital system. This point was echoed during the DHS focus groups, in which employees and managers believed it was important to maintain a neutral third-party reviewer in the appeals process.
In a separate survey that we administered (GAO survey), members of the field team identified the presence of a neutral third party in the process as the most critical challenge in terms of the discipline and appeals system, while others identified options retaining a third-party reviewer as most likely to address the department’s challenges in discipline and appeals. DHS’s commitment to use ADR is a very positive development. To resolve disputes in a more efficient, timely, and less adversarial manner, federal agencies have been expanding their human capital programs to include ADR approaches. These approaches include mediation, dispute resolution boards, and ombudsmen. Ombudsmen are typically used to provide an informal alternative for addressing conflicts. We reported on common approaches used in ombudsmen offices, including (1) broad responsibility and authority to address almost any workplace issue, (2) their ability to bring systemic issues to management’s attention, and (3) the manner in which they work with other agency offices in providing assistance to employees. The proposed regulations note that the department will use ADR, including an ombudsman, where appropriate. The proposal authorizes the Secretary of DHS to identify specific offenses for which removal is mandatory. Employees alleged to have committed these offenses will have the right to a review by an adjudicating official and a further appeal to a newly created panel. Members of this three-person panel are to be appointed by the Secretary for three-year terms, and qualifications for these members are articulated in the proposed regulations. Members of the panel may be removed by the Secretary “only for inefficiency, neglect of duty, or malfeasance.” Qualifications for the adjudicating officials, who are designated by the panel, are not specified. One potential area of caution is the authority given to the Secretary to identify specific offenses for which removal is mandatory. 
I believe that the process for determining and communicating which types of offenses require mandatory removal should be explicit and transparent and involve a number of key players. Such a process should include an employee notice and comment period before implementation, collaboration with relevant Congressional stakeholders, and collaboration with employee representatives. We also would suggest that DHS exercise caution when identifying specific removable offenses and their associated punishments. When developing these proposed regulations, DHS should learn from the experience of the Internal Revenue Service’s (IRS) implementation of its mandatory removal provisions. We reported that IRS officials believed this provision had a negative impact on employee morale and effectiveness and had a “chilling” effect on IRS frontline enforcement employees, who were afraid to take certain appropriate enforcement actions. Careful drafting of each removable offense is critical to ensure that the provision does not have unintended consequences. Moreover, the independence of the panel that will hear appeals of mandatory removal actions deserves further consideration. Removal of the panel members by the Secretary may potentially compromise the real or perceived independence of the panel’s decisions. As an alternative, the department should consider having members of the panel removed only by a majority decision of the panel. DHS may also wish to consider staggering the terms of the members to ensure a degree of continuity on the board. Labor Management Relations The DHS proposed regulations recognize the right of employees to organize and bargain collectively. 
However, the proposal reduces the scope of bargaining in two ways: it removes the requirement to bargain on matters traditionally referred to as “impact and implementation,” which include, for example, the processes used to deploy personnel, assign work, and use new technology; and it redefines what are traditionally referred to as the “conditions of employment.” A DHS Labor Relations Board is proposed that would be responsible for determining appropriate bargaining units, resolving disagreements on the scope of bargaining and the obligation to bargain, and resolving impasses, and it would be separate and independent from the Federal Labor Relations Authority (FLRA). The Labor Relations Board would have three members selected by the Secretary. No member could be a current DHS employee, and one member would be from FLRA. The FLRA is retained to resolve complaints concerning certain unfair labor practices and to supervise or conduct union elections. Regardless of whether it occurs as part of collective bargaining, involving employees in such important decisions as how they are deployed and how work is assigned is critical to the successful operations of the department. During the course of the design process, DHS has recognized the importance of employee involvement and has been involving multiple organizational components and its three major employee unions in designing the new human capital system. This is consistent with our finding that leading organizations involve unions and incorporate their input into proposals before finalizing decisions. Engaging employee unions in major changes, such as redesigning work processes, changing work rules, or developing new job descriptions, can help achieve consensus on the planned changes, avoid misunderstandings, speed implementation, and more expeditiously resolve problems that occur. 
These organizations engaged employee unions by developing and maintaining an ongoing working relationship with the unions, documenting formal agreements, building trust over time, and participating jointly in making decisions. DHS employees’ comments can prove instructive when determining the balance in labor management relations. In the DHS focus groups, employees suggested having informal mechanisms in place to resolve issues before the need to escalate them to the formal process and holding supervisors accountable for upholding agreements. Supervisors and employees also expressed a need for increased training in roles and responsibilities in the labor process and an interest in training in ADR. Respondents to the GAO survey said the most critical challenge in terms of labor relations will be to maintain a balance between the mission of the agency and bargaining rights. DHS Faces Multiple Implementation Challenges Once DHS issues final regulations for the human capital system, the department will be faced with multiple implementation challenges. While we plan to provide further details to the Congress on some of these challenges in the near future, they include the following. Implementing the system using a phased approach. The DHS proposed regulations note that the labor relations, adverse actions, and appeals provisions will be effective 30 days after issuance of the interim final regulations later this year. DHS plans to implement the job evaluation, pay, and performance management system in phases to allow time for final design, training, and careful implementation. We strongly support a phased approach to implementing major management reforms. A phased implementation approach recognizes that different organizations will have different levels of readiness and different capabilities to implement new authorities. 
Moreover, a phased approach allows for learning so that appropriate adjustments and midcourse corrections can be made before the regulations are fully implemented organizationwide. Providing adequate resources for additional planning, implementation, and evaluation. The administration recognizes the importance of funding this major reform effort and has requested, for fiscal year 2005, over $10 million for a performance pay fund in the first phase of implementation (affecting about 8,000 employees) to recognize those who meet or exceed expectations and about $100 million to fund training and the development of the performance management and compensation system. In particular, DHS is appropriately anticipating that its revised performance management system will have costs related to both development and implementation, a fact confirmed by the experience of the demonstration projects. In fact, OPM reports that the increased costs of implementing alternative personnel systems should be acknowledged and budgeted for up front. DHS is recognizing that there are up-front costs and that its components are starting from different places regarding the maturity and capabilities of their performance management systems. At the same time, DHS is requesting a substantial amount of funding that warrants close scrutiny by Congress. In addition, certain costs are one-time in nature and therefore should not be built into the base of DHS’s budget for future years. Furthermore, presumably most of any performance-based pay will be funded from what otherwise would be spent on automatic across-the-board adjustments and step increases under the existing GS system. The DHS proposal correctly recognizes that a substantial investment in training is a key aspect of implementing a performance management system. 
The demonstration projects’ experiences show that while training costs are generally higher in the year prior to implementation, the need for in-depth and varied training continues as the system is implemented. We have reported that agencies will need to invest resources, including time and money, to ensure that employees have the information, skills, and competencies they need to work effectively in a rapidly changing and complex environment. Evaluating the impact of the system. High-performing organizations continually review and revise their human capital management systems based on data-driven lessons learned and changing needs in the environment. DHS indicates that it is committed to an ongoing comprehensive evaluation of the effectiveness of the human capital system, including the establishment of human capital metrics and the use of employee surveys. Collecting and analyzing data is the fundamental building block for measuring the effectiveness of these approaches in support of the mission and goals of the agency. DHS should consider doing evaluations that are broadly modeled on the evaluation requirements of the OPM demonstration projects. Under the demonstration project authority, agencies must evaluate and periodically report on results, implementation of the demonstration project, cost and benefits, impacts on veterans and other equal employment opportunity groups, adherence to merit system principles, and the extent to which the lessons from the project can be applied governmentwide. A set of balanced measures addressing a range of results, customer, employee, and external partner issues may also prove beneficial. An evaluation such as this would facilitate congressional oversight; allow for any midcourse corrections; assist DHS in benchmarking its progress with other efforts; and provide for documenting best practices and sharing lessons learned with employees, stakeholders, other federal agencies, and the public. Building a DHS-wide workforce plan. 
DHS has recently begun drafting a departmental workforce plan, using the draft strategic plan as a starting point. Workforce plans of different levels of sophistication are used in the five legacy agencies we studied. Despite their efforts, DHS headquarters has not yet been systematic or consistent in gathering relevant data on the successes or shortcomings of legacy human capital approaches or on current and future workforce challenges, a deficiency that will make workforce planning more difficult. The strategic workforce plan can be used, among other things, as a tool for identifying core competencies for staff and for attracting, developing, and rewarding contributions to mission accomplishment. Involving employees and other stakeholders in designing the details of the system. We reported last fall that DHS’s and OPM’s efforts to design a new human capital system were collaborative and facilitated participation of employees from all levels of the department. We recommended that the Secretary of DHS build on the progress that has been made and ensure that the communication strategy used to support the human capital system maximizes opportunities for employee involvement through the completion of the design process, the release of the system options, and implementation, with special emphasis on seeking the feedback and buy-in of frontline employees. Moving forward, employee perspectives can provide insights on areas that deserve particular attention while implementing the new performance management system. 
For example, among DHS employees responding to the OPM FHCS, 37 percent indicated that high-performing employees are recognized or rewarded on a timely basis, which is lower than the governmentwide average of 41 percent; 60 percent believe that their appraisals are fair reflections of their performance, which is lower than the governmentwide average of 65 percent; 23 percent believe that steps are taken to deal with a poor performer who cannot or will not improve, which is lower than the governmentwide average of 27 percent; and 28 percent perceive that selections for promotions in their work units are based on merit, which is lower than the governmentwide average of 37 percent. In the GAO survey, members of the field team said that the most critical challenge in terms of performance management will be to create a system that is fair. Such data underscore the continuing need to involve employees in the design and implementation of the new system to obtain their buy-in to the changes being made. More specifically, employee involvement in the validation of core competencies is critical to ensure that the competencies are both appropriate and accepted. Summary Observations As we testified on the DOD civilian personnel reforms, the bottom line for additional performance-based pay flexibility is that an agency should have to demonstrate that it has a modern, effective, credible, and, as appropriate, validated performance management system in place with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure fairness and prevent politicization and abuse of employees. To this end, DHS’s proposed regulations take another valuable step toward results-oriented pay reform and modern performance management. 
DHS’s performance management system is intended to align individual performance with DHS’s success; hold employees responsible for accomplishing performance expectations; provide for meaningful distinctions in performance through performance- and market-based payouts; and be fair, credible, and transparent. However, the experiences of leading organizations suggest that DHS should require core, and as appropriate, validated competencies in its performance management system. The core competencies can serve to reinforce employee behaviors and actions that support the DHS mission, goals, and values and to set expectations for individuals’ roles in DHS’s transformation, creating a shared responsibility for organizational success and ensuring accountability for change. DHS should also continue to build safeguards into its revised human capital system. DHS’s overall effort to design a strategic human capital management system can be particularly instructive for future human capital management and reorganization efforts within specific units of DHS. Its effort can also prove instructive as other agencies design and implement new authorities for human capital management. Mr. Chairman, Madam Chairwoman, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. Contacts and Acknowledgments For further information, please contact J. Christopher Mihm, Managing Director, Strategic Issues, at (202) 512-6806 or [email protected]. Major contributors to this testimony include Edward H. Stephenson, Jr., Lisa Shames, Ellen V. Rubin, Lou V. B. Smith, Tina Smith, Masha Pasthhov-Pastein, Marti Tracy, Ron La Due Lake, Karin Fangman, Michael Volpe, and Tonnye Conner-White. 
Appendix I: Methodology In presenting our preliminary observations on the Department of Homeland Security’s (DHS) regulations, we reviewed the proposed human capital regulations issued jointly by DHS and the Office of Personnel Management (OPM) on February 20, 2004, in the Federal Register. Additional documents reviewed include relevant laws and regulations, the 52 DHS human capital system options released in October 2003, and testimony presented by leaders of DHS employee unions and the Merit Systems Protection Board (MSPB). Interviews with experts in federal labor relations and the federal adverse actions and appeals system provided additional insights. The official transcripts and report summarizing the proceedings of the Senior Review Advisory Committee meetings in October 2003 were also examined. A draft of that report was reviewed by members of the committee to ensure its reliability. Additionally, we attended the committee’s October 2003 meetings. Relevant GAO reports on human capital management were used as criteria against which the proposals were evaluated. To respond to your particular interest in seeking out and incorporating employee perspectives on the human capital system, we gathered information on employee perceptions from a variety of sources and presented these findings throughout the statement. Insights into employee opinions were gathered from the OPM Federal Human Capital Survey (FHCS), a GAO-administered survey of the field team used to inform the human capital system design effort (GAO survey), and a report summarizing findings from the DHS focus groups held during the summer of 2003. 
OPM Federal Human Capital Survey To assess the strengths and weaknesses of selected provisions of DHS’s proposed human capital system, we reviewed the analysis of the DHS component agencies’ responses to relevant questions on OPM’s FHCS of 2002 for those legacy components that are now within DHS: the Animal and Plant Health Inspection Service (APHIS); the U.S. Coast Guard; the U.S. Customs Service; the Federal Emergency Management Agency; the Immigration and Naturalization Service; the Federal Law Enforcement Training Center; the U.S. Secret Service; the Office of Emergency Preparedness and National Disaster Medical System; and the Federal Protective Service. This governmentwide survey was conducted from May through August 2002. It was administered to employees of 24 major agencies represented on the President’s Management Council, which constitute 93 percent of the executive branch civilian workforce. There were 189 subelement/organizational components of the 24 agencies that participated. The sample was stratified by employee work status: supervisory, nonsupervisory, and executive. Of the more than 200,000 employees contacted, a little over 100,000 responded to the survey, resulting in a 51 percent response rate. OPM reported that the margin of error for the percentages of respondents governmentwide was plus or minus 1 percent at a 95 percent confidence level. Likewise, it reported that the margin of error for the percentages of respondents for individual agencies was somewhat higher but less than plus or minus 5 percent. The OPM survey was conducted during the same time frame that the administration proposed legislation to form DHS; thus, the opinions expressed by the respondents to the survey predate the formation of DHS. For reporting purposes, OPM compiled the DHS responses by combining the various subentities cited above. The responses approximate the views of some, but not all, employees now at DHS. 
For example, the Transportation Security Administration (TSA) screeners were not hired at the time of the survey. Also, APHIS employees were divided between DHS and the Department of Agriculture (USDA), so the APHIS respondents included some employees who remained at USDA. Because OPM did not provide us with a copy of the full survey data set that included all records or the strata weights for any of the records, we could not perform our own analyses of the data or calculate the confidence intervals that would be associated with such analyses. OPM did, however, provide us with access to a Web site that provided reports with weighted data analyses for the FHCS 2002. We addressed the reliability of the survey analyses by (1) reviewing existing information about the survey data collection and analysis processes and (2) interviewing OPM officials who were knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this testimony. We reviewed the analyses of the DHS component agencies presented on the Web site in four areas (pay and performance management, classification, labor relations, and adverse actions and appeals) that compared DHS-wide data to governmentwide data. GAO Field Team Survey We were interested in obtaining the views of the field team participants who served as a key source of information for DHS’s Core Design Team. The field team consisted of DHS managers and staff. Members were selected by departmental management or the three major unions. From October through December 2003, we surveyed the 31 members of the team to obtain their insights into the DHS design process and proposed human capital system options. The survey, administered by e-mail and fax, contained two parts. The first part addressed their views on how effectively the field team was utilized throughout the design process. 
The second part addressed their views about human capital challenges and the proposed policy options in four areas: (1) pay and classification, (2) performance management, (3) labor relations, and (4) discipline and appeals. Prior to distribution, the questionnaire was reviewed by DHS and OPM officials and pretested with a field team member to ensure the clarity of the questions and to determine whether the respondent had the knowledge to answer them. The questionnaire was revised based on their input. We received completed questionnaires from 19 of the 31 field team members. We aggressively followed up with nonrespondents by telephone and e-mail. Because many of the field team members were not based in offices, were on extensive travel, or were otherwise difficult to reach, we extended our survey through December 2003. The views that we obtained are not representative of all the participants. DHS Focus Groups DHS conducted multiple focus groups and Town Hall meetings from the end of May through the beginning of July 2003 in 10 cities across the United States. Six focus group sessions were held in each city to obtain employee input and suggestions for the new human resource system. In most cities, five of the six sessions were devoted to hearing employees’ views, while the remaining sessions heard the views of supervisors and managers. Each focus group was facilitated by a contractor. The contractor used a standard focus group facilitation guide to manage each session. Additionally, the contractor was responsible for recording the issues identified during each focus group session and compiling a summative report on the findings from all the focus groups. We did not attend any focus group sessions and were not able to review any original notes from the sessions to assess the accuracy of the summative report. Participation in the focus groups was neither random nor necessarily representative of DHS employees. 
DHS reports that employee participation generally reflected the population in that location. For example, the level of bargaining unit representation at the focus groups was determined based on OPM data on bargaining unit membership. Bargaining unit employees were selected by union representatives to participate in the focus groups, while nonbargaining unit employees and supervisors were selected by DHS management. Union representatives and DHS managers were asked to select a diverse group of participants based on occupation, work location, gender, ethnicity, and age. This work was done in accordance with generally accepted government auditing standards from March 2003 through February 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The creation of the Department of Homeland Security (DHS) almost one year ago represents an historic opportunity for the federal government to fundamentally transform how the nation will protect itself from terrorism. DHS is continuing to transform and integrate a disparate group of agencies with multiple missions, values, and cultures into a strong and effective cabinet department. This unique opportunity, however, also carries significant risk to the nation if the transformation is not implemented successfully. In fact, GAO designated this implementation and transformation as high risk in January 2003. Congress provided DHS with significant flexibility to design a modern human capital management system. GAO reported in September 2003 that the design effort to develop the system was collaborative and consistent with positive elements of transformation. Last Friday, the Secretary of DHS and the Director of the Office of Personnel Management (OPM) released for public comment draft regulations for DHS's new human capital system. This testimony provides preliminary observations on selected major provisions of the proposed system. The proposed human capital system is designed to be aligned with the department's mission requirements and is intended to protect the civil service rights of DHS employees. Many of the basic principles underlying the DHS regulations are consistent with proven approaches to strategic human capital management, including several approaches pioneered by GAO, and deserve serious consideration. However, some parts of the system raise questions that DHS, OPM, and Congress should consider. Pay and performance management: The proposal takes another valuable step toward results-oriented pay reform and modern performance management. For effective performance management, DHS should use validated core competencies as a key part of evaluating individual contributions to departmental results and transformation efforts. 
Adverse actions and appeals: The proposal would retain an avenue for employees to appeal adverse actions to an independent third party. However, the process to identify mandatory removal offenses must be collaborative and transparent. DHS needs to be cautious about defining specific actions requiring employee removal and should learn from the Internal Revenue Service's implementation of its mandatory removal provisions. Labor relations: The regulations recognize employees' right to organize and bargain collectively, but reduce the areas subject to bargaining. Continuing to involve employees in a meaningful manner is critical to the successful operations of the department. Once DHS issues final regulations for the human capital system, it will be faced with multiple implementation challenges. DHS plans to implement the system using a phased approach; however, nearly half of DHS civilian employees are not covered by these regulations, including more than 50,000 Transportation Security Administration screeners. To help build a unified culture, DHS should consider moving all of its employees under a single performance management system framework. DHS estimates that about $110 million will be needed to implement the new system in its first year. While adequate resources for program implementation are critical to program success, DHS is requesting a substantial amount of funding that warrants close scrutiny by Congress. The proposed regulations call for comprehensive, ongoing evaluations. Continued evaluation and adjustments will help to ensure an effective and credible human capital system. DHS has begun to develop a strategic workforce plan. Such a plan can be used as a tool for identifying core competencies for staff and for attracting, developing, evaluating, and rewarding contributions to mission accomplishment. 
The analysis of DHS's effort to develop a strategic human capital management system can be instructive as other agencies request and implement new strategic human capital management authorities.
Background This section describes the legal framework for obligation accounting, the uranium market and how obligations on uranium may be added at various stages of the nuclear fuel cycle, the mechanics of obligation exchanges, and the national security need for unobligated LEU. Legal Framework for Obligation Accounting The United States has negotiated nuclear cooperation agreements under section 123 of the Atomic Energy Act of 1954, as amended, with nuclear trading partners worldwide. These agreements establish obligations governing how nuclear material and equipment subject to the agreements are to be used. The United States had 22 nuclear cooperation agreements in force as of June 2016. Once a nuclear cooperation agreement has been negotiated, U.S. government officials may negotiate an administrative arrangement that provides procedures for implementing the agreement, including details about accounting for foreign obligated material. Administrative arrangements with foreign partners may require DOE to produce annual reports on inventories of obligated material. For example, under the administrative arrangements for Australia, Canada, and EURATOM, the reports are to be conducted at the country level, not by facility. According to DOE documents, this is generally the case with its foreign partners. The annual obligation inventory reports summarize the import and export of foreign obligated material into and out of the United States in a given year. NMMSS produces summary information for these reports, which contain aggregated data for foreign obligation balances at all U.S. facilities by calendar year. After NMMSS generates data for the annual obligation inventory reports, NNSA provides these reports to U.S. foreign partners. These foreign partners use the reports to periodically reconcile their accounting records with those of the United States. 
Other regulations, orders, guidance, and an international agreement also govern the accounting and reporting of nuclear material. For example, a DOE order and NRC regulations establish requirements for nuclear material control and accounting and for the reporting of nuclear materials to NMMSS. The DOE order provides direction on the procedures that DOE contractors are to use in submitting data on certain quantities of 17 DOE-owned reportable nuclear materials. NRC regulations require NRC licensees to submit data to NMMSS on certain quantities of some of these nuclear materials. In addition, the United States has an agreement with the International Atomic Energy Agency (IAEA) on safeguards for nuclear material, which requires the United States to maintain a system of accounting and control over certain nuclear material. Through the application of safeguards, IAEA seeks to verify that nuclear material subject to safeguards is not diverted to nuclear weapons or other proscribed purposes. Uranium Market and How Obligations on Uranium May Be Added at Various Stages of the Nuclear Fuel Cycle Uranium is a commodity that is necessary for commercial nuclear power. The market is global, and the vast majority of the uranium used to fuel U.S. commercial nuclear reactors is mined abroad. According to a 2016 U.S. Energy Information Administration report, in 2015, only 6 percent of the 57 million pounds of uranium delivered to fuel U.S. nuclear reactors was of U.S. origin. Of the remaining 94 percent, 47 percent originated in Australia or Canada; 37 percent originated in Kazakhstan, Russia, or Uzbekistan; and the remaining 10 percent originated in Bulgaria, the Czech Republic, Malawi, Namibia, Niger, or South Africa. After being mined, uranium undergoes a number of additional processing steps to become nuclear fuel for commercial reactors. These steps make up the nuclear fuel cycle. Obligations on nuclear material may be added at various stages in the cycle (see fig. 1). 
For example, if uranium is mined and milled in Australia and is then shipped to Canada, it may carry an Australian obligation. If the uranium goes through conversion at a plant using Canadian technology and is then shipped to Europe, it may carry a Canadian obligation. If the uranium is enriched at a plant using European technology and is then shipped to Japan, it may carry an obligation from EURATOM. Finally, if the uranium undergoes fuel fabrication at a plant using Japanese technology before export to a final user in the United States, it may carry a Japanese obligation. By the end of the process, the uranium may carry obligations to Australia, Canada, EURATOM, and Japan. Figure 1 illustrates how obligations may be added to uranium at various stages of the nuclear fuel cycle. Such obligations are tracked in NMMSS while the material remains in the United States. Mechanics of Obligation Exchanges To conduct an obligation exchange and record the transaction in NMMSS, both the facility “shipping” the obligation and the facility “receiving” the obligation must submit a DOE/NRC Form 741, “Nuclear Material Transaction Report,” to NMMSS. The shipper’s transaction report and the receiver’s transaction report should contain the same data. Following an obligation exchange, each facility will have the same total amount of a given material type, such as LEU, as it had before the exchange, but it will have different proportions of obligated and unobligated material (see fig. 2). National Security Need for Unobligated LEU Tritium is a key radioactive isotope used to enhance the power of nuclear weapons in the U.S. stockpile. Tritium is produced in nuclear reactors, and NNSA supports a program that produces tritium from LEU to help meet stockpile demands. NNSA’s tritium program requires that only unobligated LEU be used in reactors to produce tritium. 
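The mechanics of an obligation exchange described above (each facility's total inventory of a material type is unchanged; only the obligated and unobligated proportions move between them) can be sketched as follows. The facility records, field names, and quantities are purely illustrative assumptions.

```python
# Hypothetical bookkeeping sketch of an obligation exchange, per the
# mechanics described in the text. Names and quantities are illustrative.

def exchange_obligation(shipper, receiver, country, kg):
    """Swap `kg` of `country`-obligated material for unobligated material.

    Each facility's total inventory of the material type is unchanged;
    only the obligated/unobligated proportions change.
    """
    if shipper["obligated"].get(country, 0.0) < kg:
        raise ValueError("shipper lacks sufficient obligated inventory")
    if receiver["unobligated"] < kg:
        raise ValueError("receiver lacks sufficient unobligated inventory")
    shipper["obligated"][country] -= kg
    shipper["unobligated"] += kg
    receiver["obligated"][country] = receiver["obligated"].get(country, 0.0) + kg
    receiver["unobligated"] -= kg

plant_a = {"obligated": {"CA": 500.0}, "unobligated": 100.0}
plant_b = {"obligated": {}, "unobligated": 800.0}
exchange_obligation(plant_a, plant_b, "CA", 300.0)
# Totals are preserved: plant A still holds 600 kg, plant B still holds 800 kg,
# but 300 kg of the Canadian obligation has moved from A to B.
assert plant_a == {"obligated": {"CA": 200.0}, "unobligated": 400.0}
assert plant_b == {"obligated": {"CA": 300.0}, "unobligated": 500.0}
```

In the actual system, both parties would record the swap by submitting matching DOE/NRC Form 741 transaction reports to NMMSS.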
The reactors that are under agreement to produce tritium for DOE belong to TVA, a government corporation, and, according to DOE, LEU fuel loaded into TVA reactors for tritium production must consist entirely of LEU that is unobligated. It is DOE’s responsibility to ensure that TVA is supplied with unobligated fuel to support tritium production. DOE maintains an inventory of unobligated LEU. However, the United States lost its sole supplier of unobligated LEU when USEC ceased uranium enrichment operations in 2013. In 2014, NNSA projected that unobligated LEU fuel for tritium production would last through 2027. NNSA has since identified several actions that could extend that projected date to between 2038 and 2041. These actions include “downblending” highly enriched uranium (HEU) from dismantled weapons to produce unobligated LEU, as well as conducting obligation exchanges to preserve unobligated LEU. According to DOE, to meet defense mission requirements in the future, the United States will eventually need to reestablish the capability to produce unobligated LEU. In the meantime, NNSA and TVA have been working together to identify actions to preserve DOE’s remaining quantities of unobligated LEU. Over 800 Obligation Exchanges Have Taken Place Since October 2003, and Most Were Conducted for Commercial Reasons From October 1, 2003, through November 30, 2015, 817 obligation exchanges took place in the United States, and most were conducted to meet commercial customer demand. The majority (98 percent) of these exchanges involved NRC-licensed commercial facilities; the rest were conducted by DOE contractors. NRC licensees told us they generally conducted exchanges to meet customer demand for material with obligations from certain countries and to avoid the need to physically transport uranium. DOE contractors told us they conducted a number of obligation exchanges primarily to accommodate the closure of the vault where the material was being stored.
Exchanges Typically Involved Commercial Nuclear Facilities, and Numbers of Exchanges Have Decreased in Recent Years According to NMMSS data, there were 817 exchanges of foreign obligated nuclear material in the United States from October 1, 2003, through November 30, 2015. The majority of these obligation exchanges were between commercial nuclear facilities and involved particular material types and certain types of obligations. Specifics on the exchanges follow. Of the 817 obligation exchanges, 802 (98 percent) were conducted by NRC-licensed commercial nuclear facilities; 14 were conducted by DOE contractors; and 1 was conducted by an NRC licensee that conducts work both for commercial purposes and for DOE. The majority (99 percent) of the obligations exchanged involved LEU or natural uranium, and the remaining 1 percent consisted of two obligation exchanges involving plutonium and two involving depleted uranium. The majority (99 percent) of obligation exchanges involved facilities exchanging obligated for unobligated material. The remaining 1 percent involved facilities exchanging obligated material for obligated material. The majority (92 percent) of the obligations exchanged involved obligations from either Australia or Canada. Five percent were obligations from Argentina, Brazil, Chile, China, EURATOM, or Japan, and 3 percent involved material with layered obligations—that is, obligations to multiple countries. The number of obligation exchanges peaked in 2007 with 107 exchanges and has generally declined since then, with steady declines since 2011. Figure 3 shows the number of obligation exchanges conducted in the United States from October 1, 2003, through November 30, 2015. According to officials from Centrus Energy Corp. (formerly USEC), the company that conducted the most obligation exchanges from 2003 to 2015, the peak in 2007 resulted from increased demand that year for LEU.
The officials noted that the demand for LEU fluctuates and can be cyclical, as most nuclear power plants operate on an 18-month cycle and must be shut down at regular intervals to replace spent fuel rods with new ones. The officials attributed the decline in obligation exchanges to decreased demand for LEU and to the May 2013 closure of the Paducah Gaseous Diffusion Plant, which meant that less unobligated LEU was available. Most Exchanges Were Conducted to Meet Commercial Customer Demand for Uranium with Specific Obligations and to Avoid Physically Transporting Material According to NRC licensees, exchanges were generally conducted to meet commercial customer demand for material with obligations from certain countries and to avoid the need to physically transport uranium. In contrast, DOE contractors told us they conducted a number of obligation exchanges primarily to accommodate the closure of the vault where the material was being stored. Obligation Exchanges Conducted by NRC Licensees According to NRC licensees, obligation exchanges were conducted primarily to meet their customers’ demand for uranium with specific obligations. These customers were utility companies that may have had contracts that specified the delivery schedule and obligation requirements for LEU years in advance of delivery of material. NRC licensees told us that sometimes, as a result of these contracts, facilities needed to deliver uranium with specific obligations before they had physically obtained uranium with the required obligations, so they conducted obligation exchanges with other facilities to obtain and provide the material on the required schedule. Customers wanted either unobligated LEU or LEU with obligations from certain countries to help simplify their nuclear material accounting. 
For example, representatives from one facility stated that their customers prefer to obtain unobligated LEU to avoid having to maintain separate inventories of obligated and unobligated material, as each obligation type needs to be tracked separately in NMMSS. By having unobligated material, or limiting the number of different obligations, facilities were able to minimize the number of tracking steps. According to some NRC licensees, conducting obligation exchanges also allowed them to obtain the obligations their customers wanted without physically transporting uranium from another NRC-licensed facility. They stated that obligation exchanges allowed them to avoid the high transportation costs and safety risks associated with physically shipping nuclear material. Overall, NRC licensees told us that obligation exchanges are a customary and normal business practice in the nuclear fuel industry. Representatives from one NRC licensee told us that the facility “could not operate” without conducting obligation exchanges and that it is an essential industry-wide practice. In addition, we found that certain NRC licensees conducted obligation exchanges for national security purposes—that is, to obtain unobligated LEU for tritium production. Specifically, of the 802 obligation exchanges conducted by NRC licensees, 3 were conducted by licensees that supplied nuclear fuel to TVA, which produces tritium for NNSA. In 2014 and 2015, on behalf of TVA, three NRC-licensed facilities each conducted an obligation exchange to preserve unobligated LEU for tritium production at TVA’s Watts Bar 1 commercial nuclear power reactor. According to DOE officials, such obligation exchanges help extend the date when the United States will run out of unobligated LEU for tritium production. According to DOE officials, these obligation exchanges are necessary because the United States no longer has a domestic source of unobligated LEU for tritium production. 
TVA and DOE officials stated that additional exchanges for national security are anticipated in the future—perhaps one or two each year. Obligation Exchanges Conducted by DOE Contractors DOE contractors and on-site agency officials told us they conducted 14 obligation exchanges primarily to accommodate the closure of the vault where the material was being stored. Specifically, 13 of the 14 obligation exchanges were conducted to transfer obligations to accommodate DOE closure of a vault in a building at the Y-12 National Security Complex, which was scheduled for modernization. As part of the effort to empty the vault, contractors transferred the obligations on this material to other facilities, allowing them to relocate the now unobligated material from the vault. The other exchange was conducted to transfer obligated plutonium to a facility under IAEA safeguards at the Savannah River Site, according to DOE contractors. DOE and NRC Have Procedures for Accurate Tracking and Reporting of Obligation Exchanges, but Conditions Exist That May Affect Agencies’ Abilities to Use NMMSS to Demonstrate Compliance With Nuclear Cooperation Agreements DOE and NRC have three procedures designed to ensure accurate tracking and reporting of transaction data in NMMSS, including data on obligation exchanges. However, conditions exist that may affect agencies’ abilities to use NMMSS to demonstrate compliance with nuclear cooperation agreements in the future and monitor unobligated inventories effectively. DOE and NRC Have Procedures for Accurate Tracking and Reporting of Obligation Exchanges DOE and NRC have three procedures designed to ensure the accurate tracking and reporting of transaction data in NMMSS. These procedures apply to all transaction data in NMMSS, including data on obligation exchanges. The procedures are validation, reconciliation, and export records comparison. Validation.
NMMSS completes automated validation checks for the completeness and accuracy of data on nuclear material transactions. There are two kinds of validation checks: (1) edit checks, which verify that the data are complete and properly formatted, and (2) compatibility checks, which verify that the shipper and receiver submit identical information about the obligation exchange. There are 550 separate types of edit checks and 74 types of compatibility checks. Validation checks control for data entry mistakes and omissions. Any errors identified through these checks can be corrected by NMMSS officials in consultation with authorized personnel at DOE sites and NRC-licensed facilities. Reconciliation. Reconciliation is the process of comparing NMMSS nuclear material inventory data with other records. Reconciliation occurs at two levels: at the facility level and at the country level. At the facility level, NMMSS officials compare facilities’ reported physical inventory records—including their obligation balances—with the data in NMMSS and either confirm that these records are in agreement or alert facilities to make adjustments, if necessary. A DOE order and NRC regulations require U.S. facilities to reconcile their nuclear material inventories—including their foreign obligation balances—at least once annually. DOE contractors must report their inventories to NMMSS by September 30 of each year and reconcile any discrepancies with NMMSS. NRC licensees must report their inventories to NMMSS at least annually, depending on the type of material held, and reconcile any discrepancies with NMMSS. At the country level, NMMSS officials annually compare the inventory data with the records of foreign partners, including obligation balances. According to DOE officials, this inventory reconciliation process is conducted in accordance with the terms of certain administrative arrangements. 
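The two kinds of validation checks described above can be illustrated with a small sketch. The field names and check logic here are assumptions for illustration only; they do not represent the actual DOE/NRC Form 741 fields or any of NMMSS’s 550 edit checks and 74 compatibility checks.

```python
# Illustrative sketch of the two validation check types described in the
# text: edit checks (completeness/format) and compatibility checks
# (shipper and receiver must report identical data). Field names assumed.

REQUIRED_FIELDS = ("material_type", "element_weight_kg", "obligation_country")

def edit_check(report):
    """Return a list of edit-check errors: missing or malformed fields."""
    errors = [f for f in REQUIRED_FIELDS if f not in report]
    if "element_weight_kg" in report and report["element_weight_kg"] <= 0:
        errors.append("element_weight_kg")
    return errors

def compatibility_check(shipper_report, receiver_report):
    """Return fields on which the shipper's and receiver's reports disagree."""
    return [f for f in REQUIRED_FIELDS
            if shipper_report.get(f) != receiver_report.get(f)]

shipper = {"material_type": "LEU", "element_weight_kg": 250.0,
           "obligation_country": "AU"}
receiver = {"material_type": "LEU", "element_weight_kg": 250.0,
            "obligation_country": "CA"}
assert edit_check(shipper) == []                                  # well formed
assert compatibility_check(shipper, receiver) == ["obligation_country"]
```

In the real system, a flagged mismatch would be corrected by NMMSS officials in consultation with authorized personnel at the facilities involved.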
Like reconciliation at the facility level, the officials told us that this process confirms whether NMMSS records are in agreement with other records and alerts officials to follow up, if necessary. Export records comparison. A function in NMMSS compares foreign partners’ export records with NMMSS data. Foreign partners provide advance notification, as well as a confirmation of shipment, to the U.S. government when shipping nuclear material to the United States. NMMSS then compares these communications, as well as foreign partners’ export records obtained through IAEA, with data in NMMSS. Until January 2012, the communications were stored in documents, but since then, they have been recorded electronically in NMMSS. We tested elements of these procedures and analyzed certain NMMSS data and found the data to be generally accurate with little evidence of inaccuracies or other problems with NMMSS data on obligation exchanges. For instance, we tested several elements of the records of compatibility and edit checks for all 817 obligation exchanges to verify that the shipper and receiver submitted identical information to NMMSS for each obligation exchange. We found that nearly all information submitted by the shipper and receiver—such as material type, element weight, and obligation country code—was identical for each of the obligation exchanges conducted from October 1, 2003, through November 30, 2015. However, we found two issues in NMMSS obligation exchange data. We discussed these issues with DOE and NRC officials, who indicated that they planned to take steps to address one of the identified issues and explained that the other issue had already been addressed. First, we found that according to NMMSS data, 2 of the 817 obligation exchanges conducted by NRC licensees involved uranium with assay levels of slightly more than 5 percent, which is contrary to NRC guidance that obligation exchanges are restricted to uranium enriched to 5 percent or less.
According to DOE and NRC officials, these exceedances were isolated anomalies and may have been due to rounding errors in NMMSS. Nonetheless, as a result of our audit work, NMMSS officials stated that they intend to more closely monitor the potential exceedance of the 5 percent threshold by implementing a new edit check in NMMSS to identify obligation exchanges at licensed facilities on material with assay levels greater than 5 percent and flag any such transaction as an error. According to NRC officials, the edit check will identify an error, which will prompt actions to resolve the transaction in NMMSS. Specifically, the licensee will have to consult with NMMSS program staff to determine whether the transaction requires additional U.S. approval before a manual authorization is given to override the error. Second, we found five obligation exchanges where the shipping facility and receiving facility reported different activity dates. According to one of NMMSS’s compatibility checks, the date must be the same for the shipper and receiver of an obligation exchange. However, NMMSS did not detect the mismatched data in these five obligation exchanges, which occurred from 2004 to 2009. According to DOE officials, the five obligation exchanges with mismatched dates were not caught in NMMSS because they predated an edit check that was implemented in NMMSS in March 2013. DOE officials said that, since the edit check was implemented 3 years ago, there have been no obligation exchanges with mismatched activity dates. DOE officials told us that this issue has been addressed. Conditions Exist That May Affect Agencies’ Abilities to Use NMMSS to Demonstrate Compliance With Nuclear Cooperation Agreements and Monitor Unobligated Inventories Effectively Two conditions exist that may affect DOE’s and NRC’s abilities to use NMMSS to demonstrate compliance with nuclear cooperation agreements and effectively monitor unobligated inventories.
First, the agencies have not documented the conditions under which facilities may carry negative obligation balances. Second, the United States has a declining domestic inventory of unobligated LEU for national security purposes that is projected to last until sometime between 2038 and 2041, but NMMSS does not have a monitoring capability that could alert DOE when the inventory of unobligated LEU is particularly low. Agencies Have Not Documented the Conditions under Which Facilities May Carry Negative Obligation Balances Nuclear cooperation agreements and the administrative arrangements that we reviewed do not address whether or to what extent facilities may carry negative obligation balances. While DOE officials told us that the practice is not prohibited, DOE and NRC have not documented the conditions under which it is allowed. Negative obligation balances occur when a facility conducts an obligation exchange without having enough of a given nuclear material in inventory—similar to writing a check for an amount that is not currently in one’s checking account, with the assumption that enough funds will be deposited in time to cover the cashing of the check. Some NRC licensees stated that they carried negative balances for extended periods—including for several months or more than a year. DOE officials confirmed that certain facilities have carried negative obligation balances for brief periods, and others have carried them for extended periods of time. Specifically, according to DOE officials, 17 facilities in the United States have carried negative obligation balances between reconciliation dates.
DOE officials attributed 6 of 17 instances to facility “business practices.” Specifically, they stated that certain facilities, such as uranium enrichment and fuel fabrication plants, need to deliver uranium with specific obligations to customers before they have physically obtained such uranium. As a result, the facilities may carry negative obligation balances for months at a time, with the assumption that sufficient obligations will be received on incoming shipments of nuclear material and that receipt of this material will eventually cover the negative obligation balance. DOE officials attributed the negative obligation balances at 11 of the 17 facilities to the brief time lag in reporting information to NMMSS. Specifically, the DOE/NRC Form 741 that officially records the obligation exchange must be sent by the “shipping” facility to NMMSS within 1 business day, and the form that officially records the receipt of the material must be sent by the “receiving” facility to NMMSS within 10 days. Thus, it is possible for a facility to temporarily carry a negative obligation balance within this 10-day window. However, we found that 1 of the 11 facilities had a negative obligation balance that lasted significantly longer than 10 days: this negative obligation balance of LEU lasted 18 months and spanned multiple reconciliation periods. Specifically, an NRC-licensed facility carried a negative balance of foreign obligated LEU from February 2006 to August 2007. Representatives of the facility and DOE officials confirmed this negative obligation balance and that it lasted for about 18 months—spanning three reconciliation periods. According to officials from the facility, to remedy its negative balance, in August 2007 the facility conducted an obligation exchange with another facility to obtain foreign obligated LEU. DOE officials told us that since 2010, there have been no other instances of negative obligation balances persisting at any facility past reconciliation.
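Detecting a negative obligation balance of the kind described above is conceptually a matter of replaying a facility's obligation transactions and checking the running balance, much like spotting an overdrawn checking account. The ledger format and dates below are illustrative assumptions, not NMMSS records.

```python
# Hedged sketch of detecting negative obligation balances from a facility's
# transaction history, per the check-writing analogy in the text. The
# ledger format, dates, and quantities are illustrative only.

from datetime import date

def balance_timeline(transactions):
    """Yield (date, running_balance) after each obligation transaction."""
    balance = 0.0
    for d, delta in sorted(transactions):
        balance += delta
        yield d, balance

def negative_periods(transactions):
    """Return dates on which the running obligation balance went negative."""
    return [d for d, bal in balance_timeline(transactions) if bal < 0]

ledger = [
    (date(2006, 1, 15),  100.0),  # obligated LEU received
    (date(2006, 2, 1),  -250.0),  # exchange shipped before material arrived
    (date(2007, 8, 1),   200.0),  # remedial exchange covers the shortfall
]
assert negative_periods(ledger) == [date(2006, 2, 1)]
```

A check of this sort run at each reconciliation date would surface any balance still negative at year end, rather than relying on the 10-day reporting window alone.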
Nevertheless, since one negative obligation balance persisted past multiple reconciliation periods, a risk remains that facilities may carry negative obligation balances for extended periods of time in the future. Under federal standards for internal control, agency management should design control activities to achieve objectives and respond to risk, including by clearly documenting internal control, and the documentation may appear in management directives, administrative policies, or operating manuals. While NNSA’s procedures for foreign obligation reporting state that both the shipping and receiving facilities should verify that there is sufficient inventory of the given material type to support an obligation exchange, they do not document the conditions under which carrying a negative obligation balance is allowed. Moreover, some NRC licensees told us that it is unclear whether or to what extent facilities may carry negative obligation balances. Representatives from one facility stated that it would be helpful for NMMSS officials to help clarify when the practice of carrying negative obligation balances is allowed and for what duration. DOE officials told us they strongly encourage facilities to avoid carrying negative obligation balances but acknowledge that they have not documented in guidance when the practice is allowed. Specifically, DOE and NRC officials said that they discourage this practice through informational presentations that they provide to NRC licensees and DOE contractors at trainings and conferences. However, DOE officials told us that the practice of carrying negative obligation balances is not prohibited and that they have not documented the conditions under which carrying a negative obligation balance is allowed. Instead, DOE officials stated that their main internal control for negative obligation balances is to review facilities’ inventory once each year as part of the annual reconciliation process. 
Through this process, NMMSS analysts expect facilities to correct any negative obligation balances by the time of reconciliation. However, as we noted, this process did not address the negative obligation balance carried by one facility through several reconciliation cycles. According to NNSA documents, additional internal control procedures have been implemented in NMMSS in the past 6 years—such as developing new edit checks to detect instances of negative obligation balances at the time of reconciliation—which NNSA officials believe will prevent facilities in the future from carrying negative obligation balances past reconciliation. However, we were unable to test these internal controls to verify whether there have been any additional instances of facilities carrying negative obligation balances past reconciliation. As the inventory of LEU continues to decline, there is a growing risk that facilities carrying negative obligation balances between reconciliation periods may not be able to correct their negative obligation balances at year end. Without DOE and NRC clarifying in guidance the conditions under which facilities can carry negative balances, facilities may be at risk in the future of conducting exchanges that they ultimately cannot fulfill, putting the U.S. government at risk of not complying with its nuclear cooperation agreements. NMMSS Does Not Have a Capability to Alert DOE When the Inventory of Unobligated LEU Is Particularly Low NMMSS does not have a capability to alert DOE when the inventory of unobligated LEU is low, placing facilities with negative obligation balances at risk of not being able to reconcile them. According to NMMSS data, there have been significant decreases in the inventory of unobligated LEU, particularly from 2014 to 2015, which DOE officials attributed in part to the closure of the Paducah Gaseous Diffusion Plant in 2013, when the United States lost its sole supplier of unobligated LEU.
DOE officials project that the inventory of unobligated LEU will continue to decrease without a domestic source. DOE documents estimate that the inventory will run out at some point from 2038 to 2041. Federal standards for internal control state that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. Ongoing monitoring is to be built into the entity’s operations, performed continually, and responsive to change and may include automated tools, which can increase objectivity and efficiency by electronically compiling evaluations of controls and transactions. DOE officials acknowledged that they do not have a specific capability in NMMSS that could serve as an early-warning system to alert them when the inventory of unobligated LEU becomes low enough to put facilities at risk of running negative obligation balances that they may not be able to reconcile. They also told us that no such capability is currently needed because the inventory of unobligated LEU is currently sufficient to limit such risk. Moreover, DOE officials added that they would step in to cover a negative obligation balance that could not otherwise be covered out of a facility’s own inventory, by transferring unobligated LEU from the U.S. national security inventory. Nevertheless, while the U.S. national security inventory of unobligated LEU could be used to correct any future negative obligation balances, if the decline in this U.S. inventory of unobligated LEU continues, and if it is possible to carry a negative obligation balance beyond reconciliation, there is a risk that, at some point in the future, a facility may conduct an exchange for nuclear material that it cannot fulfill—essentially bouncing a check. 
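An early-warning capability of the kind discussed here could be as simple as an automated threshold alert on the unobligated LEU inventory. The sketch below is a minimal illustration; the threshold value, function name, and inventory figures are assumptions, not DOE policy or NMMSS functionality.

```python
# Minimal sketch of an early-warning monitor that alerts officials when the
# unobligated LEU inventory drops below a configurable threshold. Threshold
# and inventory figures are purely illustrative assumptions.

def low_inventory_alert(unobligated_leu_kg, threshold_kg):
    """Return an alert message when inventory falls below the threshold."""
    if unobligated_leu_kg < threshold_kg:
        return (f"ALERT: unobligated LEU inventory ({unobligated_leu_kg:,.0f} kg) "
                f"is below the {threshold_kg:,.0f} kg early-warning threshold")
    return None

assert low_inventory_alert(120_000, 100_000) is None
assert low_inventory_alert(80_000, 100_000) is not None
```

Built into NMMSS, such a check could run at each inventory update, consistent with federal internal control standards' call for ongoing, automated monitoring.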
DOE officials acknowledged that the decline in the national security inventory of unobligated LEU may change the way senior DOE officials need to use NMMSS in the future and that additional monitoring may be needed to ensure compliance with nuclear cooperation agreements. Without DOE and NRC developing an early-warning monitoring capability in NMMSS to alert senior DOE officials when the inventory of unobligated LEU becomes low, DOE cannot know when supplies of unobligated LEU are no longer available to correct negative obligation balances, thereby putting the U.S. government at risk of noncompliance with its nuclear cooperation agreements. Conclusions Under terms of its nuclear cooperation agreements, the United States must account for foreign obligated nuclear material, and NMMSS has been designated as the system to track and report on that material as it enters, leaves, and moves within the country. DOE and NRC have developed procedures designed to ensure that the transaction data in NMMSS, including data on obligation exchanges, are accurate. With the exception of a few issues, which NNSA has fixed or plans to fix by developing new internal controls, we found that these procedures are working as expected and that NMMSS data on obligation exchanges appear to be reliable. Certain obligation exchange practices, combined with trends in the inventory of unobligated LEU, may change the information DOE and NRC officials need and the actions required to ensure compliance with nuclear cooperation agreements in the future. While DOE’s procedures for foreign obligation reporting state that facilities should verify that there is sufficient inventory of the given material type to support an obligation exchange, neither the procedures nor other guidance document the conditions under which carrying a negative obligation balance is allowed. 
In addition, the United States has a declining domestic inventory of unobligated LEU, but NMMSS does not have an early-warning monitoring capability to alert DOE when this inventory is particularly low. Without such a capability to alert senior DOE officials, facilities carrying negative balances could put the U.S. government in a position of potential noncompliance with its nuclear cooperation agreements. Recommendations for Executive Action We are making two recommendations to the Under Secretary for Nuclear Security, as the Administrator of the National Nuclear Security Administration, and the Nuclear Regulatory Commission to help ensure compliance with the United States’ nuclear cooperation agreements: 1. Clarify in guidance the conditions under which facilities may carry negative obligation balances. 2. Develop an early-warning monitoring capability in NMMSS to alert senior DOE officials when the inventory of unobligated LEU is particularly low. Agency Comments and Our Evaluation We provided drafts of this report to DOE, NRC, and TVA for review and comment. DOE and NRC provided written comments, which are summarized below and reproduced in appendixes I and II, respectively. TVA did not comment on our findings and recommendations. In addition, all three agencies provided technical comments, which we incorporated as appropriate. In their written comments, DOE and NRC did not explicitly state whether they concur with the recommendations. DOE and NRC described actions they are implementing or planning to implement to address our recommendations. Concerning our first recommendation that DOE and NRC clarify in guidance the conditions under which facilities may carry negative obligation balances, NRC stated that it is currently reviewing its guidance on nuclear material reporting and will consider our recommendation when updating this guidance.
We will continue to monitor NRC’s implementation of this recommendation. DOE said that it is deferring to NRC’s response on this recommendation. DOE also stated that NNSA has no authority to regulate negative balances that may occur as a result of commercial business practices. DOE’s response suggests that it believes our recommendation was limited to NRC licensees engaged in commercial transactions. However, we recommended that both NRC and NNSA develop guidance to clarify when facilities with materials under their jurisdiction may carry negative obligation balances because the practice of carrying negative obligation balances at either an NRC-licensed facility or a DOE facility could put the United States in a position of noncompliance with international agreements in the future if balances of unobligated LEU become particularly low. Developing guidance could mitigate this risk. Concerning our second recommendation that DOE and NRC develop an early-warning monitoring capability in NMMSS to alert senior DOE officials when the inventory of unobligated LEU is particularly low, DOE noted that it has been directed by Congress to biennially update its Tritium and Enriched Uranium Management Plan through 2060. DOE stated it will address our recommendation through these updates, which will assess the inventory of unobligated enriched uranium for national security applications. We agree that updating the plan will help DOE monitor its inventory of unobligated LEU, and we acknowledge that for now, biennial updates to the plan may be sufficient to monitor the declining inventory of unobligated LEU. However, the inventory of unobligated LEU has rapidly declined in recent years—about 10 percent in 1 year alone—as noted in this report.
As the inventory of unobligated LEU continues to decline, more frequent monitoring may be necessary to ensure that facilities do not conduct obligation exchanges for nuclear material that they cannot fulfill, which could put the United States at risk of noncompliance with its nuclear cooperation agreements. We continue to believe that our recommendation for DOE to develop an early-warning monitoring system will help mitigate the chance of DOE officials being unable to address a sudden decline in the inventory of unobligated LEU. In its written response to our second recommendation, NRC stated that DOE has notified it that DOE will complete biennial updates to the plan and that NRC will support DOE as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report’s date. At that time, we will send copies to the appropriate congressional committees, the NNSA Administrator, the Chairman of the Nuclear Regulatory Commission, TVA’s Board of Directors, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Comments from the Department of Energy

Appendix II: Comments from the Nuclear Regulatory Commission

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Nathan Anderson (Assistant Director), Eric Bachhuber, and Tyler Kent made key contributions to this report. Also contributing to this report were Alison B.
Bawden, Antoinette Capaccio, Julia Coulter, Kaitlin Farquharson, Ellen Fried, Cindy Gilbert, Mitch Karpman, Amanda K. Kolling, Jeff Philips, Dan C. Royer, and Vasiliki Theodoropoulos.
The United States must generally account for nuclear material it has obtained under nuclear cooperation agreements with foreign partners. The agreements generally impose certain conditions, including that the material be used for peaceful purposes. Material subject to such conditions is called “obligated.” The United States relies on NMMSS to track obligated material and to help demonstrate U.S. compliance with agreements. Material not subject to agreement conditions is called “unobligated.” Some forms of uranium, such as LEU, are used to maintain the nuclear weapons in the U.S. stockpile, but the U.S. inventory of unobligated LEU is declining. GAO was asked to review the practice of obligation exchanges and the reliability of certain NMMSS data. This report examines (1) the number of obligation exchanges in the United States since 2003, and the reasons for them, and (2) how DOE and NRC ensure such exchanges are accurately tracked and reported through NMMSS. GAO analyzed NMMSS data and agency documents and interviewed agency officials, DOE contractors, and NRC licensees, among other steps. In the United States, from October 1, 2003, through November 30, 2015, there were 817 exchanges of nuclear material that carried obligations to foreign partners under nuclear cooperation agreements. These exchanges allowed the obligated nuclear material to be transferred between U.S. facilities without physically moving it. For example, if a facility had a certain amount of obligated nuclear material and another facility had at least the same amount and type of unobligated material (which is not subject to the same conditions as obligated material), the facilities could exchange the obligations on their material so that each facility had a portion of both types of material without physically moving it. Numbers of exchanges. 
Of the 817 exchanges, 802 were conducted by Nuclear Regulatory Commission (NRC)-licensed facilities—private companies and other entities involved in commercially producing nuclear energy. Of the remaining exchanges, 14 were conducted by contractors that run Department of Energy (DOE) laboratories and weapons-production sites, and 1 by an NRC licensee that does both commercial and DOE work. Reasons for exchanges. NRC licensees said they conducted exchanges primarily to meet their utility customer demand, as well as to avoid the high costs and safety risks associated with physically transporting nuclear material. DOE contractors said they conducted exchanges primarily to avoid physically moving nuclear material stored at a specific site. DOE and NRC have procedures to ensure accurate tracking and reporting of data on obligation exchanges through the Nuclear Materials Management and Safeguards System (NMMSS). GAO tested elements of these procedures and generally found them to be reliable. However, GAO identified two issues that may affect the agencies' ability to effectively monitor nuclear material inventories. First, some facilities have carried negative obligation balances for extended periods. A negative obligation balance occurs when a facility conducts an exchange without having enough of a given material in its physical inventory to cover the exchange. In certain circumstances, negative balances may place the United States at risk of noncompliance with nuclear agreements. Negative balances have occurred because DOE and NRC have not addressed this issue in documented guidance on when facilities may carry such balances, which is inconsistent with federal internal control standards. Second, while unobligated low-enriched uranium (LEU) could be used to correct any future negative obligation balances, the U.S. inventory of it is declining and NMMSS does not have an early-warning monitoring capability to alert DOE when the inventory is particularly low.
Federal internal control standards state that agencies should establish activities to monitor internal control systems and evaluate the results, but DOE officials said that the LEU inventory is currently sufficient and no early-warning capability is needed. Without developing such a capability in NMMSS, DOE officials cannot know when the inventory of unobligated LEU becomes so low that supplies may not be available to correct negative obligation balances, thereby putting the United States at risk of not complying with its nuclear agreements.
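The mechanics described above can be sketched in a few lines of Python: an exchange swaps obligation tags between two facilities without physically moving material, and a negative balance arises when a facility exchanges more of a given material than it physically holds. The sketch is purely illustrative; the facility names, quantities, and data layout are assumptions, not the actual NMMSS schema.

```python
# Illustrative sketch of obligation-exchange bookkeeping. Facility names,
# amounts, and the dict layout are assumptions, not the real NMMSS schema.
# Each facility's inventory is tracked per obligation tag, in kilograms.

def exchange(facilities, a, b, tag_a, tag_b, amount):
    """Swap `amount` kg of obligation `tag_a` at facility `a` for
    obligation `tag_b` at facility `b`, without moving any material."""
    facilities[a][tag_a] -= amount
    facilities[a][tag_b] = facilities[a].get(tag_b, 0) + amount
    facilities[b][tag_b] -= amount
    facilities[b][tag_a] = facilities[b].get(tag_a, 0) + amount

def negative_balances(facilities):
    """Flag any facility whose balance for a given tag has gone negative."""
    return [(f, tag, bal) for f, tags in facilities.items()
            for tag, bal in tags.items() if bal < 0]

facilities = {
    "Facility A": {"obligated": 100, "unobligated": 0},
    "Facility B": {"obligated": 0, "unobligated": 80},
}
# Facility A trades 90 kg of obligated material for unobligated material at
# Facility B, more than B physically holds, creating a negative balance.
exchange(facilities, "Facility A", "Facility B", "obligated", "unobligated", 90)
print(negative_balances(facilities))  # Facility B is 10 kg short of unobligated
```

An early-warning capability of the kind the report recommends would amount to running a check like `negative_balances` (plus a low-inventory threshold test) continuously over the real inventory data, rather than discovering shortfalls after the fact.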
Background

Biomedical equipment, such as magnetic resonance imaging (MRI) systems, X-ray machines, cardiac monitoring systems, cardiac defibrillators, and various other tools for laboratory analysis, is critical to health and medical treatment and research in federal and private sector health care facilities. This equipment may use a computer for calibration or day-to-day operation. The computer could be either a personal computer that connects to the equipment remotely or a microprocessor chip embedded within the equipment. In either case, the controlling software may be susceptible to the Year 2000 problem if any type of date or time calculation is performed. This could range from the more benign—such as incorrect formatting of a printout—to the incorrect operation of the equipment with the potential to adversely affect patient care or safety. The degree of risk depends on the role of the biomedical equipment in the patient’s care. VHA manages health care delivery to veterans within 22 regional areas known as Veterans Integrated Service Networks (VISN). These VISNs encompass 172 VHA medical centers, 376 outpatient clinics, 133 nursing homes, and 30 domiciliaries—a total of 711 facilities. VHA’s biomedical equipment inventory—with its acquisition cost valued at almost $3 billion—can be found at these facilities. As the largest centrally directed civilian health care system in the United States, VHA is a key stakeholder in determining the Year 2000 compliance of biomedical equipment. VHA’s CIO has overall responsibility for planning and managing the Year 2000 compliance program. The CIO created a VHA Year 2000 Project Office, which directs and oversees the Year 2000 assessment and renovation activities in the VISNs. Another key player in determining the Year 2000 compliance of biomedical equipment is FDA. Under provisions of the Federal Food, Drug, and Cosmetic Act, as amended, FDA protects public health through oversight and regulation of medical devices.
FDA regulates medical devices that use computers or software pursuant to applicable FDA medical device regulations. In September 1997, we testified that both VHA and FDA had just begun efforts to assess biomedical equipment for Year 2000 compliance. VHA had sent letters to approximately 1,600 biomedical equipment manufacturers that supply VHA, requesting compliance information for their products. We also testified that FDA had sent a letter to about 13,000 medical device manufacturers in July 1997, reminding them of their responsibility to ensure that their products will not be affected by the century change.

Objective, Scope, and Methodology

The objective of this review was to assess the status of VHA’s and FDA’s Year 2000 biomedical equipment programs. In performing this review, we applied criteria from our Year 2000 Assessment Guide and Year 2000 Business Continuity and Contingency Planning Guide. In assessing the status of VHA’s Year 2000 biomedical equipment program, we reviewed and analyzed VHA documents, including the March 25, 1998, VISN Assessment Feedback Reports; the January 30, 1998, Assessment Phase Report; the July 1997 Year-2000 Product Risk Program; the April 30, 1997, and October 31, 1997, versions of the Year-2000 Compliance Plan; and the May 15, 1998, and August 15, 1998, quarterly reports to OMB. We did not independently verify data contained in these documents. We met with Year 2000 project teams in three VISNs—VISN 4, VISN 5, and VISN 12—and in VHA medical facilities in Pittsburgh; Philadelphia; Wilmington, Delaware; Washington, D.C.; Baltimore; Martinsburg, West Virginia; and Chicago. We also discussed VA biomedical equipment assessment and renovation plans and efforts with members of the Year 2000 Project Office at VHA headquarters in Washington, D.C. To assess the status of FDA’s Year 2000 biomedical equipment program, we reviewed FDA documents on this issue, including those on its Internet World Wide Web site.
We met with HHS’ Director of Policy and Evaluation in Washington, D.C., and the Director of FDA’s Division of Electronics and Computer Science at the Center for Devices and Radiological Health, located in Rockville, Maryland. We also met with biomedical engineers, who were attending the 1998 annual meeting of the Association for the Advancement of Medical Instrumentation. At this meeting, both VHA and FDA officials presented their respective Year 2000 biomedical equipment programs. We performed our work from July 1997 through June 1998, in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Secretary of Veterans Affairs and the Secretary of Health and Human Services. These comments are reprinted in appendixes I and II.

VHA Has Made Progress in Implementing Its Year 2000 Strategy

Since our September 1997 testimony, VHA has made progress in implementing its Year 2000 strategy for biomedical equipment. This strategy, which depends on compliance information from the manufacturers, consists of five steps. These are (1) increase awareness and continually educate VHA CIOs, VISNs, and health care facilities on biomedical issues, (2) establish an expert working group to provide guidance, (3) develop a database of biomedical equipment manufacturers that supply equipment to VHA, (4) survey these manufacturers to identify the compliance status of biomedical equipment and solutions for noncompliance, and (5) communicate survey results to the field for use in determining the compliance status of biomedical equipment at the medical facilities. Each month, these facilities are expected to report to the VHA Year 2000 Project Office their strategies for dealing with noncompliant and conditional-compliant equipment in their inventories and the cost to accomplish this. To increase awareness, VHA has established an intranet web site containing compliance information from the manufacturers.
This web site is also used to educate VHA CIOs, VISNs, and health care facilities on biomedical issues. VHA has also established an expert working group to assist the Year 2000 Project Office in identifying, assessing, and evaluating biomedical equipment at risk from the Year 2000 problem. VHA developed a database of biomedical equipment manufacturers by using an existing database, which tracks service manuals of both medical devices and scientific and research instruments purchased by its medical facilities. The expert working group reviewed the database to ensure that key manufacturers in specialty areas were included. To survey biomedical equipment manufacturers, the VHA Year 2000 Project Office sent a series of letters to them requesting information on the Year 2000 compliance status of their products. The first letter was sent to approximately 1,600 manufacturers on September 9, 1997. Two follow-up letters were sent to those that did not respond on October 6, 1997, and November 12, 1997. Upon receipt of responses to these letters, VHA categorized the compliance status provided by the manufacturers for the equipment, as illustrated in table 1. Of the nearly 1,600 manufacturers in VHA’s initial mailing, VHA determined that about 100 were no longer in business. Accordingly, VHA revised its list of manufacturers to 1,504 as of June 1, 1998, and reported that it received compliance information from 1,070, or 71 percent, of these manufacturers. Just under half of the 1,504 manufacturers reported that all of their devices are Year 2000 compliant. As shown in table 2, the manufacturers have provided VHA with compliance information on a wide range of biomedical equipment. VHA’s data, as of June 1, 1998, indicated that for those manufacturers that reported, at least 80 percent of the equipment types are compliant. According to VHA’s Year 2000 Project Manager, the expert working group reviews the information provided by the manufacturers for reasonableness. 
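The response-rate figures above can be checked with a quick calculation (a sketch using the counts reported here; the rounding convention is an assumption):

```python
# Verify VHA's reported manufacturer response rate, using figures from
# the report: a revised list of 1,504 manufacturers as of June 1, 1998,
# of which 1,070 had provided compliance information.
revised_list = 1504
responded = 1070

response_rate = responded / revised_list
print(f"{response_rate:.0%}")  # prints "71%", matching the reported figure
```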
The Year 2000 Project Office has provided this information to its medical facilities through VHA’s intranet web site, and the facilities are to use the information to assess the compliance status of their equipment. According to VHA officials, most of the manufacturers that reported one or more of their biomedical equipment products as noncompliant cited incorrect display of date and/or time as problems. For example, a noncompliant electrocardiograph machine, used to monitor heart signals, would print charts with two-digit dates, showing the year 2000 as “00.” According to the Diagnostic Services Chief of VHA’s Technology Division, these cases do not generally lead to the equipment failing to operate and do not present a risk to patient safety because health care providers, such as physicians and nurses, are able to work around this problem. For example, a physician or technician would note the correct year on the printout from the electrocardiograph machine when the equipment imprints “1900” on the printout. However, VHA recognizes that incorrect date-time representation or use could pose a risk when the date is used in a calculation or when records generated by the equipment are sorted automatically to present a patient’s condition, over a period of time, to a physician for diagnosis and treatment. Specifically, when records are sorted by date of recording, the accuracy of such dates can be critical to a physician’s monitoring of patient progress in, for example, the case of blood sugar readings. If readings were taken on December 25, 27, and 30, 1999, and again on January 1, 2000, for example, the ordering might appear with the last entry first, if it were abbreviated as “00” and read as January 1, 1900. If the physician or other clinician did not pay close attention, a faulty diagnosis or treatment decision could be made based on a misreading of the data.
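The misordering described above is easy to demonstrate. The following sketch uses illustrative readings in an MM/DD/YY format (an assumption, not actual VHA data) to show how two-digit years place the January 1, 2000, reading first, and how interpreting the year as four digits restores chronological order:

```python
# Demonstration of the two-digit-year sorting problem described in the text.
# The readings and date format are illustrative, not actual VHA data.
readings = ["12/25/99", "12/27/99", "12/30/99", "01/01/00"]

def two_digit_key(d):
    month, day, year = d.split("/")
    # "00" sorts before "99", so January 2000 is treated as if it were 1900.
    return (year, month, day)

def four_digit_key(d):
    month, day, year = d.split("/")
    # A common "windowing" remediation: years below 50 are read as 20xx.
    full_year = int(year) + (2000 if int(year) < 50 else 1900)
    return (full_year, int(month), int(day))

print(sorted(readings, key=two_digit_key))   # Jan 1, 2000 incorrectly sorts first
print(sorted(readings, key=four_digit_key))  # chronological order restored
```

Windowing was a common Year 2000 remediation, but it only defers the problem to a later cutoff year; storing and displaying four-digit dates is the durable fix.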
VHA also recognizes that an equipment function could present a risk to the patient if it depends on a calculation involving a date and that calculation is performed incorrectly as a result of a date problem. One example reported by a manufacturer is a product used for planning the delivery of radiation treatment using a radioactive isotope as the source. An error in the calculation of the radiation source’s strength could result in inappropriate treatment—either too low or too high a dosage—and could have an adverse effect on the patient on or after January 1, 2000. This noncompliant equipment is currently in the inventory of several VHA medical facilities. In commenting on a draft of this report, VA noted that VHA has identified three facilities that use this specific equipment, and the noncompliant equipment will be taken out of service. Given the above case scenarios, it is crucial that biomedical equipment manufacturers provide VHA with information on the compliance status of their equipment. This information is necessary for VHA medical facilities to formulate safe and effective solutions to address Year 2000 problems. Between November 1997 and January 1998, VHA’s medical facilities completed inventories of their biomedical equipment and reported the results to the Year 2000 Project Office. Using data on the facility’s biomedical equipment inventory from VHA’s equipment database, each facility was to conduct a physical inventory of its biomedical equipment and check this inventory against compliance information submitted by the manufacturers, which the Year 2000 Project Office had posted on the VHA intranet web site. According to VHA’s January 30, 1998, Year 2000 Assessment Phase Report, the medical facilities noted that based on the information from the manufacturers, some of the noncompliant biomedical equipment at VHA sites included defibrillator monitors, noninvasive blood pressure machines, vital signs monitors, and cardiology monitors.
VHA officials have stressed that noncompliant equipment of one type reported by certain manufacturers does not indicate that all equipment of the same type in use at its medical facilities is noncompliant. VHA officials told us that there are other manufacturers of this equipment type that have reported that their equipment is compliant. The VHA Year 2000 Project Office has directed VHA medical facilities to regularly check the web site for updates on the compliance status of biomedical equipment reported by manufacturers. This is important for the medical facilities because, in some cases, the manufacturers have subsequently changed the compliance status of their equipment after their initial reports to VHA. The changes have ranged from some equipment previously reported as conditional-compliant that is now being reported as compliant to equipment previously reported as compliant that is now considered noncompliant. According to VHA’s Year 2000 Project Manager, the project office monitors the medical facilities’ Year 2000 activities through periodic reports and site visits. VHA officials have informed us that they will be relying on the biomedical equipment manufacturers to validate, test, and certify that replacement equipment is Year 2000 compliant. This is because some manufacturers have informed them that VHA should not attempt to conduct in-depth testing by manipulating the software embedded inside the equipment. According to the Diagnostic Services Chief of VHA’s Technology Division, such testing may void the manufacturer’s certification to FDA that the equipment is safe for use on patients, thereby exposing VHA to legal liability in the event that a patient’s health is harmed by equipment that malfunctions following VHA testing. VHA’s Year 2000 Project Manager told us that the medical facilities will perform limited functional testing of replacement equipment and of manufacturer modifications to conditional-compliant equipment. 
He stated that the medical facilities will test equipment performance in accordance with locally established acceptance testing procedures for new equipment.

Uncertainty Over Year 2000 Compliance Status Increases Risk

Despite VHA’s progress in implementing its Year 2000 strategy, as of July 29, 1998, it still did not know the full extent of the Year 2000 problem on its biomedical equipment because it had not received compliance and cost information from 27 percent of the manufacturers on its list of suppliers, as well as from nearly 100 additional manufacturers that are no longer in business. This situation impedes VHA’s medical facilities in promptly developing strategies to deal with equipment with potential patient safety problems. In addition, the current cost estimate of $40 million reported to OMB to replace or repair noncompliant equipment is incomplete. Also, given the uncertainties surrounding the compliance status of many VHA biomedical equipment items, it is critical that medical facilities develop contingency plans to ensure patient care in the event of Year 2000-related failures. However, the medical facilities have not completed such plans.

Some Manufacturers Have Not Provided Compliance Information on Their Equipment

VHA does not currently know how much of its biomedical equipment is Year 2000 compliant because, as shown in table 3, it has not yet received compliance information from 398 manufacturers. This information is critical to VHA because, like other health care providers, it relies on the manufacturers to validate, test, and certify that their equipment is compliant. Letters sent to more than half of the nonresponsive manufacturers—227 out of 398—were returned to VHA by the U.S. Postal Service marked with no forwarding addresses.
In addition, as noted in table 3, 47 manufacturers that did respond are in the pending category because they reported that they had not completed their assessments and, therefore, did not yet know whether their products were compliant. Among the manufacturers that had not yet responded or completed their assessments as of July 29, 1998, is one that supplies high-dollar value equipment, such as radiology systems and electronic imaging systems equipment, to VHA. According to the Year 2000 Project Manager, VHA will continue its efforts to obtain compliance information from nonresponding manufacturers. Consistent with this strategy, on June 24, 1998, VHA sent another letter to nonresponsive manufacturers requesting that they provide VHA with Year 2000 compliance information on their products. The Project Manager said VHA will continue to work through October 1998 to obtain compliance information from the manufacturers. Further, he said that at that time, VHA’s medical facilities must be ready to put contingency plans into effect for noncompliant and conditional-compliant equipment and for equipment whose status is unknown.

Year 2000 Cost Estimate for Biomedical Equipment Is Incomplete

VHA’s Year 2000 cost estimate for replacing and/or retiring noncompliant biomedical equipment is incomplete. In its August 15, 1998, quarterly report to OMB, VA estimated the Year 2000 cost to replace or repair this equipment at $40 million. It also reported that VA expects the costs to replace or repair noncompliant biomedical equipment to increase as manufacturers continue to disclose their compliance status. The VHA Year 2000 Project Manager told us that VHA expects to manage these costs within the department’s budget.
However, the $40 million estimate is not based on updated cost information from the medical facilities, and VHA does not know the replacement and repair cost for biomedical equipment for the manufacturers that have not reported compliance and cost information, as well as the nearly 100 manufacturers that are no longer in business. VHA’s Year 2000 Project Manager informed us that three quarters of the $40 million estimate was calculated based on cost information provided by the VISNs and medical facilities. Specifically, the VISNs and facilities reported to the Year 2000 Project Office the number of noncompliant and/or conditional-compliant equipment items in their inventories and the replacement or repair cost for this equipment using information provided to VHA by the manufacturers and posted on its intranet web site in January 1998. The remaining $10 million was calculated based on the VHA Year 2000 Project Office’s estimate of the number of such equipment items at VHA medical facilities and any cost information provided by manufacturers during the period February through April 1998. VHA’s Year 2000 Project Manager has acknowledged the shortcomings of the current cost estimate. Accordingly, the VISNs were to begin using a new reporting process, effective July 31, 1998. The new process will use a recently developed software package to track the status of noncompliant and conditional-compliant equipment at the medical facilities and the associated costs to replace, repair, or retire it. In commenting on a draft of this report, VA stated that this software was released on July 10, 1998, and the Under Secretary for Health signed an information letter, providing direction and instruction on the software to VHA medical facilities on July 20, 1998. 
VHA Has Not Yet Completed Business Continuity and Contingency Plans for Biomedical Equipment

To assist agencies in their business continuity and contingency planning efforts, we have prepared a guide that discusses the scope of the Year 2000 challenge and offers a step-by-step approach for reviewing an agency’s risks and threats as well as how to develop backup strategies to minimize these risks. This business continuity and contingency planning process safeguards the agency’s ability to produce a minimally acceptable level of outputs and services in the event of failures of internal or external mission-critical information systems and services. A business-level contingency plan would address how each VHA medical facility would handle various types of Year 2000 problems caused by business partner problems, such as nonresponsive manufacturers and the nearly 100 manufacturers that VHA determined were no longer in business. Despite the uncertainties surrounding the compliance status of much of VHA’s biomedical equipment and the potential health risks to patients of certain equipment, VHA medical facilities have not yet completed business continuity and contingency plans on actions they must take to address potential Year 2000-related failures. The Year 2000 Project Manager informed us that these plans need to be ready for implementation by October 31, 1998. He did not know the status of these plans because the project office had not reviewed them. The Project Manager told us that he expects to review these plans when Year 2000 Project Office representatives visit the VISNs and medical facilities later in 1998. Our review of the March 25, 1998, VISN Assessment Feedback Reports for the three VISNs we visited showed that these VISNs had reported that they did not have business continuity and contingency plans to deal with 76 of the 89 noncompliant biomedical equipment items identified in their inventories.
The CIOs at two of these VISNs informed us that they are currently in the process of developing these plans. The third CIO said the VISN’s medical facilities have prepared business continuity and contingency plans. However, our review of four of the five plans for this VISN disclosed that these plans did not specifically address Year 2000-related failures of biomedical equipment. Instead, they focused on preventive maintenance inspections and general system and equipment failures. In light of the uncertainties surrounding the compliance status of VHA’s biomedical equipment and their potential effect on patient health and safety, it is crucial that medical facilities be prepared in the event of Year 2000 failures. An official in VHA’s Year 2000 Project Office told us that the office is in the process of developing a guidebook to assist the VISNs and medical facilities in addressing Year 2000 business continuity and contingency planning for biomedical equipment and other related issues. The Year 2000 Project Manager said the guidebook will discuss VHA’s strategy for obtaining information from nonresponsive manufacturers and address issues such as replacing, repairing, and/or retiring noncompliant biomedical equipment and equipment produced by the nearly 100 manufacturers no longer in business; using the new reporting software for biomedical equipment; procuring compliant biomedical equipment; and having adequate facility staff available on the weekend of January 1, 2000. In commenting on a draft of this report, VA noted that a draft of the guidebook was completed on August 6, 1998, and it expects to issue a final guidebook by September 1998.

FDA Is Also Relying on Biomedical Equipment Manufacturers for Compliance Information

FDA, the agency with oversight and regulatory responsibility for domestic and imported medical devices, is also trying to determine the Year 2000 compliance status of these devices, as well as some scientific and research instruments.
Its goal is to provide a comprehensive, centralized source of information on the Year 2000 compliance status of biomedical equipment used in the United States and make this information publicly available on an Internet World Wide Web site. On January 21, 1998, HHS, on FDA’s behalf, issued a letter to approximately 16,000 domestic and foreign biomedical equipment manufacturers, requesting information on the Year 2000 compliance of their complete product line. The letter stated that all information received would be made available to the public through FDA’s web site. Manufacturers were asked to identify any noncompliant products by type and model number and provide a brief description of the date-related problems and the solutions for mitigating the problems. If all the manufacturer’s products were considered compliant, the manufacturer was asked to provide a statement certifying such compliance. In this case, the manufacturer did not have to provide information on the compliant device’s make and model. Manufacturers were instructed to forward their responses in writing or electronically to FDA’s Center for Devices and Radiological Health. FDA acknowledges that the response rate to date to the January 1998 letter is disappointing. As of July 30, 1998, FDA had received 1,975 responses from biomedical equipment manufacturers and posted them on its web site. The Director of FDA’s Division of Electronics and Computer Science cited several reasons for the low response rate, including manufacturers not yet completing their assessments and the manufacturers’ responses to FDA’s request being voluntary. He also indicated that the vast majority of manufacturers that received letters from FDA do not make products with any sort of electronic components, and he believed that many of these manufacturers chose not to respond because the request did not pertain to them.
On June 29, 1998, FDA sent a second request to 1,935 medical device manufacturers that had not previously responded to its inquiry and that FDA believes have products that might employ computers or embedded systems. According to the Director, as of July 30, 1998, 628 manufacturers reported that their products employ a date/time function. Of these, about 100 indicated that one or more of their products were not compliant. FDA’s web site includes the following disclaimer about the database: “Inclusion of information in this database indicates that the manufacturer has certified that the data is complete and accurate. The Food and Drug Administration, however, cannot and does not make any independent assurances or guarantees as to the accuracy or completeness of this data.” The Director informed us that except for diagnostic X-ray equipment, FDA does not test new medical devices entering the market. In addition, he said that FDA has performed about 8 to 10 tests per year involving forensic investigations of problem devices. In commenting on a draft of this report, HHS stated that FDA tests this equipment during the premarket review process to ensure that it is in compliance with a mandatory federal performance standard for X-ray equipment. It also indicated that the testing of this equipment does not include compliance with Year 2000 requirements. According to the Director, FDA reviews the test results submitted by manufacturers requesting premarket approval of their medical devices to see if the manufacturers have demonstrated that products are safe and effective for intended use. When asked if FDA will request test reports from manufacturers that have renovated medical devices that are not Year 2000 compliant, the Director informed us that FDA will not. He said that correcting a date problem does not change the design of the device, and it is the manufacturers’ responsibility to ensure proper device design. We disagree with the Director that the date change will not change the design of the device.
Correcting the date problem will change the software design of the device and may alter the internal logic of the software. The Director also cited staff limitations as another reason for not requesting and reviewing test results from the manufacturers.

Some Users Question Usefulness of Current FDA Biomedical Equipment Web Site

While FDA is making an effort to assemble information on biomedical equipment compliance and to make this information available to the public, some biomedical engineers attending a June 1998 meeting of the Association for the Advancement of Medical Instrumentation expressed concern that the information on the FDA web site is not detailed enough to be useful. Specifically, as mentioned earlier, FDA's list of compliant equipment contains no information on the equipment's make and model. In contrast, VHA's list of compliant equipment generally contains such information. Also, a review of the FDA database for noncompliant equipment disclosed that some manufacturers have reported that they will not have solutions for their equipment until late 1999. Putting off solutions until this late date is risky. However, making this information publicly available does provide hospitals and other users of biomedical equipment with the opportunity to plan alternative solutions. Further, the Year 2000 compliance information publicly available through FDA does not include responses from many of the manufacturers that have responded to VHA. For example, we selected, on a random basis, a sample of 53 manufacturers in VHA's database that reported their products to be Year 2000 compliant and found that 48 of them were not listed in the FDA database. We likewise selected a sample of 13 manufacturers in VHA's database that reported that their products are not Year 2000 compliant and found that 12 of them were not listed in the FDA database. These manufacturers' products include cardiology equipment, defibrillator monitors, and ultrasound equipment.
The Director of FDA's Division of Electronics and Computer Science acknowledged that the manufacturers were more responsive to VHA's requests and that the VHA database, therefore, contains a higher percentage of responses. He said that he believed the primary reason for this was VHA's position as a large-volume customer that could take future action toward a manufacturer if the information was not forthcoming. He also noted that FDA requested information on the manufacturers' complete product lines, while VA requested information from the manufacturers on its list of suppliers.

New Reporting Requirements Identify Medical Devices Posing Health Risk

FDA implemented a new rule on May 18, 1998, requiring medical device manufacturers and importers to report promptly to FDA actions to correct or remove devices that pose a health risk or that are in violation of the Federal Food, Drug, and Cosmetic Act. This rule protects public health by ensuring that FDA has current and complete information regarding actions taken on medical devices. These reports are expected to improve FDA's ability to evaluate device-related problems, as well as enable it to take prompt action regarding devices that pose a health risk. Under the new rule, the affected manufacturer is required to submit a report of the action taken to correct the problem or remove the device from service. According to the Director of the Center for Devices and Radiological Health, under the new rule, FDA has a better chance of learning what corrective actions, including those to address the Year 2000 computer problem, are taken by the manufacturers on medical devices that could pose health risks. The Director said that no manufacturers have yet submitted any reports under this new reporting requirement.
VHA Plans to Make Compliance Information Available to the Public

In contrast to FDA, VHA had not been making information obtained from biomedical equipment manufacturers on the Year 2000 compliance status of their products available to the public through an Internet World Wide Web site. VHA has not yet done so because (1) when VHA requested this information from the manufacturers, VHA did not tell them that it intended to release the information outside the federal government and (2) VHA said that it had concerns regarding whether it would be proper for it to release some of the information provided by the manufacturers because the information may be proprietary. The VHA Year 2000 Project Manager told us that VHA believed it would need the manufacturers' permission before it could share this information. He said that VHA is concerned about the proprietary nature of the products, potential legal issues, and manufacturers' price structures for Year 2000 compliant products. VHA had shared some of the Year 2000 compliance status information provided by manufacturers with federal agencies, such as the Department of Defense and the National Institutes of Health (NIH), with the caveat that it was for federal use only. NIH then shared this information with FDA. VHA, on the advice of the VA Acting General Counsel, has recently informed the manufacturers of its plans to make this information available to the public through an Internet World Wide Web site. Specifically, on June 17, 1998, VHA mailed letters to manufacturers that had responded to VHA's previous requests for compliance information. It informed the manufacturers that it intended to place the information they provided to VHA on a publicly available World Wide Web site unless the manufacturers informed it otherwise. VHA included similar language in a June 24, 1998, letter to manufacturers that had not yet provided compliance data.
The VHA Year 2000 Project Manager said the response from the manufacturers as of June 30, 1998, had been positive. He added that two manufacturers objected to disclosing this information to the public, citing proprietary reasons. These responses have been referred to VA's legal department. VA has not yet decided how and when a clearinghouse of compliance information provided to VHA by manufacturers will be made available to the public. According to VHA's Year 2000 Project Manager, the FDA web site is one of the options being considered for the clearinghouse. The Director of FDA's Division of Electronics and Computer Science informed us that FDA and VHA have discussed using FDA's web site as such a clearinghouse. VA's Under Secretary for Health recognizes the importance of gathering compliance data and sharing them publicly. Specifically, in a July 9, 1998, press conference sponsored by the National Patient Safety Partnership, he called on biomedical equipment manufacturers to identify and address potential patient safety problems resulting from the Year 2000 problem. On behalf of the partnership, he called for (1) all health care practitioners and medical treatment facilities to survey their equipment and seek information from their relevant biomedical equipment manufacturers about their products' Year 2000 compatibility, (2) all health care consumers who use biomedical equipment at home to check with their health care providers about the products' Year 2000 compatibility, (3) the medical equipment manufacturers to take immediate action to determine the compliance status of their equipment, and (4) the establishment of a single, national clearinghouse from which compliance information from manufacturers can be readily accessed by the public. The Under Secretary reiterated these four items in a July 23, 1998, hearing before the Senate Special Committee on the Year 2000 Technology Problem.
Conclusions

Prompt correction of the Year 2000 problem for biomedical equipment is critical to VHA's role as a health care provider. Although VHA has made progress in assessing its biomedical equipment, it does not yet know the full extent of the Year 2000 problem with this equipment and the associated costs to address it because it has not received compliance information from many of the manufacturers. This information is important because VHA relies on the manufacturers to validate, test, and certify that their equipment, including replacement equipment, is compliant. Despite these uncertainties, VHA medical facilities have not yet completed business continuity and contingency plans on the actions they must take to address Year 2000-related failures. The Year 2000 Project Office also has not yet completed a Year 2000 contingency guidebook for biomedical equipment to assist the VISNs and medical facilities in their business continuity and contingency planning and other activities. Until these issues are resolved, VHA lacks adequate assurance that its delivery of medical care through the use of biomedical equipment will not be adversely affected by the Year 2000 problem. FDA's goal is to provide a comprehensive, centralized source of information on the Year 2000 compliance status of biomedical equipment used in the United States and to make this information publicly available on an Internet World Wide Web site. FDA, like VHA, relies on the manufacturers to validate, test, and certify that the equipment is Year 2000 compliant. However, FDA has no assurance that the manufacturers have adequately addressed the Year 2000 problem for noncompliant equipment because it does not require manufacturers to submit test results certifying compliance. Also, FDA does not have as much information in its database on the compliance status of biomedical equipment as VHA does.
Finally, VHA, which currently does not make compliance information obtained from the manufacturers available to the public, now plans to do so through an Internet World Wide Web site. The sharing of this information could greatly assist all health care providers and other users of biomedical equipment in identifying noncompliant and conditionally compliant equipment in their inventories and taking prompt action to make it compliant. Sharing also could provide users with a mechanism to overcome the deficiencies in the FDA database, such as the lack of detailed information on the make and model of compliant equipment and the disappointing response rate from manufacturers to FDA's request for compliance information.

Recommendations to the Secretary of Veterans Affairs

We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take prompt action to:

- Ensure that the VISNs and medical facilities use the new reporting system to provide the VHA Year 2000 Project Office with up-to-date and more complete information on the cost to replace and/or repair noncompliant and conditionally compliant biomedical equipment.

- Complete and issue as soon as possible to the VISNs and medical facilities a Year 2000 guidebook on how to address contingency planning and other related issues for biomedical equipment for incorporation in their individual Year 2000 plans.

- Require that each VISN director ensure that medical facilities within the VISN complete development of a Year 2000 business continuity and contingency plan for the biomedical equipment in their inventories.
This plan should address the steps the facility will take on (1) biomedical equipment produced by manufacturers from which VHA has not received compliance information and by the nearly 100 manufacturers no longer in business, (2) noncompliant equipment that has date-time problems but can still be safely used on and after January 1, 2000, and (3) equipment that manufacturers have certified as compliant but that may cease to function or malfunction on and after January 1, 2000.

Recommendations to the Secretary of Veterans Affairs and the Secretary of Health and Human Services

We recommend that the Secretary of Veterans Affairs and the Secretary of Health and Human Services work jointly to develop immediately a single data clearinghouse that provides compliance information to all users of biomedical equipment. Development of this clearinghouse should involve representatives from the health care industry, such as the Department of Defense's Health Affairs, the American Hospital Association, the American Medical Association, and the Health Industry Manufacturers Association. At a minimum, the clearinghouse should contain (1) information on the compliance status of all biomedical equipment by make and model, (2) the identity of manufacturers that are no longer in business, including the types of equipment, makes, and models produced by these manufacturers, (3) the identity of manufacturers that have and have not provided VHA and/or FDA with test results certifying that their equipment is Year 2000 compliant, and (4) the identity of manufacturers that have not provided compliance information to VHA and/or FDA.
We also recommend that the Secretary of Veterans Affairs and the Secretary of Health and Human Services, in conjunction with VA's Under Secretary for Health and the Commissioner of the Food and Drug Administration:

- determine what actions, if any, should be taken regarding biomedical equipment manufacturers that have not provided VHA and/or FDA with compliance information;

- determine what actions, if any, are needed to address biomedical equipment produced by manufacturers no longer in business;

- take prudent steps to review the test results for critical care/life support biomedical equipment, especially equipment once determined to be noncompliant but now deemed compliant and equipment for which there are concerns about the determination of compliance, and make the results of these reviews publicly available through the single data clearinghouse; and

- determine what legislative, regulatory, or other changes are necessary to obtain assurances that the manufacturers' equipment is compliant, including performing independent verification and validation of the manufacturers' certifications.

Agency Comments and Our Evaluation

In commenting on a draft of this report, VA generally concurred with our recommendations to the Secretary of Veterans Affairs and with the first of two joint recommendations to the Secretary of Veterans Affairs and the Secretary of Health and Human Services to develop a single data clearinghouse. VA stated that VHA is working closely with other federal agencies, such as the Department of Defense and FDA, to address common problems with biomedical, clinical, and laboratory equipment and facilities. VA also noted that it has joined with the American Hospital Association, the American Nurses Association, and the Joint Commission on the Accreditation of Healthcare Organizations in calling for a joint effort to create a national clearinghouse for Year 2000 information.
VA stated that the percentage of manufacturers not responding to VHA’s inquiries is now 14 percent, meaning an 86 percent response rate. However, VHA counted letters returned to VHA by the U.S. Postal Service marked with no forwarding address as responses. Because these manufacturers did not provide VHA with information on the compliance status of their products, the response rate from manufacturers, based on updated information provided to us by VA as of July 29, 1998, is 73 percent, only slightly above the 71 percent rate cited in our draft report. VA also described actions taken and planned to implement our recommendations, as well as a number of suggested changes to this report. These comments have been incorporated into the report as appropriate and are reprinted in appendix I. Regarding our second joint recommendation to the Secretary of Veterans Affairs and the Secretary of Health and Human Services, VA stated that it has no legislative or regulatory authority to implement this recommendation and defers to HHS. VA, however, stated that it will provide consultation or other appropriate assistance to HHS in implementing this recommendation. HHS, in commenting on a draft of this report, also concurred with the joint recommendation to the Secretary of Veterans Affairs and the Secretary of Health and Human Services to develop a single data clearinghouse. It stated that HHS and VA are merging their efforts to provide complete information to the health care community and the general public regarding the Year 2000 compliance of biomedical equipment. It also stated that FDA will post on the web site the names of manufacturers that have not provided compliance certification. However, HHS did not believe that it is necessary or cost-effective to list all compliant products. It believed that information at the individual model level is only needed for noncompliant products. We disagree with HHS. 
The make and model information will provide users with detailed data on the reported compliance status of their products, especially for those 195 manufacturers that VA has determined to have merged or been bought out by other manufacturers as of July 29, 1998. In addition, HHS concurred with two of the three components of the second joint recommendation. Specifically, it concurred with the component of the recommendation to determine the actions that should be taken regarding manufacturers who fail to respond to requests for compliance information. HHS also stated that under current regulations, FDA does not have the authority to require all device manufacturers to submit reports on whether their devices are Year 2000 compliant. HHS also concurred that the identity of defunct manufacturers, along with the known types, makes, and models of devices they manufactured should be included in the clearinghouse database. It further stated that it would explore possible approaches to acquiring additional information regarding defunct manufacturers’ products. HHS did not concur with the component of the recommendation to review test results supporting the medical device equipment manufacturers’ certifications that their equipment is compliant. It believed that the submission of appropriate certifications of compliance is sufficient to ensure that the certifying manufacturers are in compliance. We disagree that this is sufficient. Through independent reviews of the manufacturers’ test results, users of the medical devices are provided with a level of confidence that the devices are Year 2000 compliant. HHS also stated that it did not have the resources to undertake such a review, and there is insufficient time to complete a review of this nature. 
In this regard, if HHS lacks sufficient resources to review the manufacturers' test results, it may want to solicit the resources of federal health care providers and professional associations, such as VA and the National Patient Safety Partnership. Additionally, to make effective use of limited resources, FDA and the health care community should, at a minimum, focus their review efforts on critical care/life support biomedical equipment that was determined to be noncompliant but is now deemed compliant and equipment for which there are concerns about the determination of compliance. Regarding our recommendation on the legislative or regulatory changes necessary to obtain assurances that manufacturers' biomedical equipment is compliant, HHS believed that solutions to the Year 2000 problem can be reached through approaches such as the clearinghouse. HHS also clarified FDA's testing of diagnostic X-ray equipment. We have revised the report to reflect this. Finally, HHS described actions it has taken and planned to implement our recommendations, and these are reprinted in appendix II. HHS also provided a number of technical suggestions for this report, and these comments have been incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Ranking Minority Member of the Subcommittee on Oversight and Investigations, House Committee on Veterans' Affairs, and the Chairmen and Ranking Minority Members of the Subcommittee on Benefits, House Committee on Veterans' Affairs, and the Subcommittee on Health, House Committee on Veterans' Affairs.
We will also provide copies to the Chairmen and Ranking Minority Members of the Senate and House Committees on Veterans' Affairs; the Senate Committee on Appropriations; the Senate and House Subcommittees on VA, HUD and Independent Agencies, Senate and House Committees on Appropriations; the Subcommittee on Labor, Health and Human Services, Education and Related Agencies, Senate Committee on Appropriations; the Senate Committee on Labor and Human Resources; the Permanent Subcommittee on Investigations, Senate Committee on Governmental Affairs; the Subcommittee on Public Health and Safety, Senate Committee on Labor and Human Resources; the House Committee on Appropriations; the Subcommittee on Labor, Health and Human Services, and Education, House Committee on Appropriations; the House Committee on Government Reform and Oversight; the Subcommittee on Human Resources, House Committee on Government Reform and Oversight; the Subcommittee on Oversight and Investigations, House Committee on Commerce; the Secretary of Veterans Affairs; the Acting Commissioner of the Food and Drug Administration; the Director of the Office of Management and Budget; and the Chair of the President's Council on Year 2000 Conversion. Copies will also be made available to others upon request. Please contact me at (202) 512-6253 or by e-mail at [email protected] if you have any questions concerning this report. Major contributors to this report are listed in appendix III.

Comments From the Department of Veterans Affairs

The following are GAO's comments on the Department of Veterans Affairs' letter dated August 25, 1998.

GAO Comments

1. Discussed in "Agency Comments and Our Evaluation" section of report.
2. Report updated to reflect that only 1 of 19 manufacturers remains unresponsive.
3. Report changed to reflect agency comments.
Comments From the Department of Health and Human Services

The following are GAO's comments on the Department of Health and Human Services' letter dated September 2, 1998.

GAO Comments

1. Report modified to include "1,935 manufacturers."
2. Discussed in "Agency Comments and Our Evaluation" section of report.
3. Report revised to reflect agency comments.
4. Report revised to clarify the terms "biomedical equipment" and "medical devices." The term biomedical equipment includes both medical devices subject to FDA regulation and scientific and research instruments that are not subject to FDA regulation.

Major Contributors to This Report

Accounting and Information Management Division, Washington, D.C.: Helen Lew, Assistant Director; Nabajyoti Barkakati, Technical Assistant Director; Tonia L. Johnson, Senior Information Systems Analyst; J. Michael Resser, Business Process Analyst-in-Charge

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC; by calling (202) 512-6000; by fax at (202) 512-6061; or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the status of the Veterans Health Administration's (VHA) and the Food and Drug Administration's (FDA) Year 2000 biomedical equipment programs. GAO noted that: (1) VHA has made progress in implementing its year 2000 strategy for biomedical equipment, which relies on compliance information from the manufacturers; (2) as of July 29, 1998, VHA had received information on biomedical equipment compliance from 73 percent of the 1,490 manufacturers on its list of suppliers; 701, or 47 percent, of these manufacturers reported that their products are year 2000 compliant; (3) in spite of this, VHA does not yet know the full extent of the year 2000 problem on its biomedical equipment and the associated costs to address this problem; (4) among the manufacturers that had yet to respond or complete their assessments is one that supplies high-dollar value equipment, such as radiology systems and electronic imaging systems equipment, to VHA; (5) according to VHA's Year 2000 Project Manager, most of the manufacturers reporting that they had noncompliant equipment cited incorrect display of date or time as problems; (6) date or time display problems should not present a risk to patient safety because health care providers can work around them; (7) however, some manufacturers cited problems that could pose a risk to patient safety; (8) to the extent that noncompliant biomedical equipment has to be replaced or repaired, the cost estimate reported by the Department of Veterans Affairs (VA) to the Office of Management and Budget is incomplete; (9) to assist health care facilities in the public and private sectors, FDA issued a letter in January 1998 to biomedical equipment manufacturers, requesting information on products affected by this computer problem; (10) in contrast to VHA, as of July 30, 1998, FDA had only received responses from 1,975, or about 12 percent, of the approximately 16,000 biomedical equipment 
manufacturers to which its letter was sent; (11) FDA has made information from biomedical equipment manufacturers available through a world wide web site; (12) VHA, however, has not yet done so because: (a) when VHA requested the information from the manufacturers, VHA did not tell them that it intended to release the information outside the federal government; and (b) VHA said it had concerns regarding whether it would be proper for it to release some of the information provided by the manufacturers because the information may be proprietary; and (13) VHA, on the advice of VA's Acting General Counsel, informed manufacturers in June 1998 that it plans to release information that the manufacturers said was not confidential commercial information.
Prior Actions Have Improved Port Security, but Issues Remain

Port security overall has improved because of the development of organizations and programs such as AMSCs, Area Maritime Security Plans (AMSPs), maritime security exercises, and the International Port Security Program, but challenges to the successful implementation of these efforts remain. Additionally, agencies may face challenges addressing the additional requirements directed by the SAFE Port Act, such as a provision that DHS establish interagency operational centers at all high-priority ports. AMSCs and the Coast Guard's sector command centers have improved information sharing, but the types of information and the ways it is shared vary. AMSPs, limited to security incidents, could benefit from unified planning that includes an all-hazards approach. Maritime security exercises would benefit from timely and complete after-action reports, increased collaboration across federal agencies, and broader port-level coordination. The Coast Guard's International Port Security Program is currently evaluating the antiterrorism measures maintained at foreign seaports.

Area Maritime Security Committees Share Information and Coast Guard Plans to Expand Interagency Operational Centers

Two main types of forums have developed for agencies to coordinate and share information about port security: area committees and Coast Guard sector command centers. AMSCs serve as a forum for port stakeholders, facilitating the dissemination of information through regularly scheduled meetings, the issuance of electronic bulletins, and the sharing of key documents. MTSA provided the Coast Guard with the authority to create AMSCs, composed of federal, state, local, and industry members, that help to develop the AMSP for the port. As of August 2007, the Coast Guard had organized 46 AMSCs.
Each has flexibility to assemble and operate in a way that reflects the needs of its port area, resulting in variations in the number of participants, the types of state and local organizations involved, and the way in which information is shared. Examples of information shared include assessments of vulnerabilities at specific port locations, information about potential threats or suspicious activities, and Coast Guard strategies intended for use in protecting key infrastructure. As part of an ongoing effort to improve its awareness of the maritime domain, the Coast Guard developed 35 sector command centers, four of which operate in partnership with the U.S. Navy. We have previously reported that both of these types of forums have helped foster cooperation and information sharing. We further reported that AMSCs provided a structure to improve the timeliness, completeness, and usefulness of information sharing between federal and nonfederal stakeholders. These committees improved upon previous information-sharing efforts because they established a formal structure and new procedures for sharing information. In contrast to AMSCs, the Coast Guard's sector command centers can provide continuous information about maritime activities and involve various agencies directly in operational decisions using this information. We have reported that these centers have improved information sharing and that the types of information and the way information is shared vary at these centers depending on their purpose and mission, leadership and organization, membership, technology, and resources. The SAFE Port Act called for the establishment of interagency operational centers, directing the Secretary of DHS to establish such centers at all high-priority ports no later than 3 years after the act's enactment. The act required that the centers include a wide range of agencies and stakeholders and carry out specified maritime security functions.
In addition to authorizing the appropriation of funds and requiring DHS to provide the Congress a proposed budget and cost-sharing analysis for establishing the centers, the act directed the new interagency operational centers to utilize the same compositional and operational characteristics as existing sector command centers. According to the Coast Guard, none of the 35 centers meets the requirements set forth in the SAFE Port Act. Nevertheless, the four centers the Coast Guard operates in partnership with the Navy are a significant step toward meeting these requirements, according to a senior Coast Guard official. The Coast Guard is currently piloting various aspects of future interagency operational centers at existing centers and is also working with multiple interagency partners to further develop this project. DHS has submitted the required budget and cost-sharing analysis proposal, which outlines a 5-year plan for upgrading its centers into future interagency operations centers to continue to foster information sharing and coordination in the maritime domain. The Coast Guard estimates that the total acquisition cost of upgrading the 24 sectors that encompass the nation's high-priority ports into interagency operations centers will be approximately $260 million, including investments in information systems, sensor networks, and facility upgrades and expansions. According to the Coast Guard, future interagency operations centers will allow the Coast Guard and its partners to combine port surveillance with tactical and intelligence information and to share these data with port partners working side by side in expanded facilities. In our April 2007 testimony, we reported on various challenges the Coast Guard faces in its information sharing efforts. These challenges include obtaining security clearances for port security stakeholders and creating effective working relationships with clearly defined roles and responsibilities.
In our past work, we found that the lack of federal security clearances among area committee members had been routinely cited as a barrier to information sharing. In turn, this inability to share classified information may limit the ability to deter, prevent, and respond to a potential terrorist attack. The Coast Guard, which has lead responsibility for coordinating maritime information, has made improvements to its program for granting clearances to area committee members, and additional clearances have been granted to members with a need to know. In addition, the SAFE Port Act includes a specific provision requiring DHS to sponsor and expedite security clearances for participants in interagency operational centers. However, the extent to which these efforts will ultimately improve information sharing is not yet known. As the Coast Guard expands its relationships with multiple interagency partners, collaborating and sharing information effectively under new structures and procedures will be important. While some of the existing centers achieved results with existing interagency relationships, other high-priority ports might face challenges in establishing new working relationships among port stakeholders and implementing their own interagency operational centers. Finally, addressing potential overlapping responsibilities—such as leadership roles for the Coast Guard and its interagency partners—will be important to ensure that actions across the various agencies are clear and coordinated.

Operations to Provide Overall Port Security Face Resource Constraints

As part of its operations, the Coast Guard also conducts additional activities to provide overall port security. The Coast Guard’s operations order, Operation Neptune Shield, first released in 2003, specifies the level of security activities to be conducted. The order sets specific activities for each port; however, the amount of each activity is established based on the port’s specific security concerns.
Some examples of security activities include conducting waterborne security patrols, boarding high-interest vessels, escorting vessels into ports, and enforcing fixed security zones. When a port security level increases, the amount of activity the Coast Guard must conduct also increases. The Coast Guard uses monthly field unit reports to indicate how many of its security activities it is able to perform. Our review of these field unit reports indicates that many ports are having difficulty meeting their port security responsibilities, with resource constraints being a major factor. In an effort to meet more of its security requirements, the Coast Guard uses a strategy that includes partnering with other government agencies, adjusting its activity requirements, and acquiring resources. Despite these efforts, many ports are still having difficulty meeting their port security requirements. The Coast Guard is currently studying what resources are needed to meet certain aspects of its port security program, but to enhance the effectiveness of its port security operations, a more comprehensive study may be needed to determine all of the additional resources and changes in strategy required to meet minimum security requirements.

Area Maritime Security Plans Are in Place but Need to Address Recovery and Natural Disasters

Implementing regulations for MTSA specified that AMSPs include, among other things, operational and physical security measures in place at the port under different security levels, details of the security incident command and response structure, procedures for responding to security threats including provisions for maintaining operations in the port, and procedures to facilitate the recovery of the marine transportation system after a security incident. A Coast Guard Navigation and Vessel Inspection Circular (NVIC) provided a common template for AMSPs and specified the responsibilities of port stakeholders under them.
As of September 2007, 46 AMSPs are in place at ports around the country. The Coast Guard approved the plans by June 1, 2004, and MTSA requires that they be updated at least every 5 years. The SAFE Port Act added a requirement to AMSPs, which specified that they include recovery issues by identifying salvage equipment able to restore operational trade capacity. This requirement was established to ensure that the waterways are cleared and the flow of commerce through United States ports is reestablished as efficiently and quickly as possible after a security incident. While the Coast Guard sets out the general priorities for recovery operations in its guidelines for the development of AMSPs, we have found that this guidance offers limited instruction and assistance for developing procedures to address recovery situations. The Maritime Infrastructure Recovery Plan (MIRP) recognizes the limited nature of the Coast Guard’s guidance and notes the need to further develop recovery aspects of the AMSPs. The MIRP provides specific recommendations for developing the recovery sections of the AMSPs. The AMSPs that we reviewed often lacked recovery specifics and none had been updated to reflect the recommendations made in the MIRP. The Coast Guard is currently updating the guidance for the AMSPs and aims to complete the updates by the end of calendar year 2007 so that the guidance will be ready for the mandatory 5-year re-approval of the AMSPs in 2009. Coast Guard officials commented that any changes to the recovery section would need to be consistent with the national protocols developed for the SAFE Port Act. Additionally, related to recovery planning, the Coast Guard and CBP have developed specific interagency actions focused on response and recovery. This should provide the Coast Guard and CBP with immediate security options for the recovery of ports and commerce. Further, AMSPs generally do not address natural disasters (i.e., they do not have an all-hazards approach). 
In a March 2007 report examining how ports are dealing with planning for natural disasters such as hurricanes and earthquakes, we noted that AMSPs cover security issues but not other issues that could have a major impact on a port’s ability to support maritime commerce. As currently written, AMSPs are concerned with deterring and, to a lesser extent, responding to security incidents. We found, however, that unified consideration of all risks—natural and man-made—faced by a port may be beneficial. Because of the similarities between the consequences of terrorist attacks and natural or accidental disasters, much of the planning for protection, response, and recovery capabilities is similar across all emergency events. Combining terrorism and other threats can thus enhance the efficiency of port planning efforts. This approach also allows port stakeholders to estimate the relative value of different mitigation alternatives. The exclusion of certain risks from consideration, or the separate consideration of a particular type of risk, raises the possibility that risks will not be accurately assessed or compared, and that too many or too few resources will be allocated toward mitigation of a particular risk. As ports continue to revise and improve their planning efforts, available evidence indicates that by taking a systemwide approach and thinking strategically about using resources to mitigate and recover from all forms of disaster, ports will be able to achieve the most effective results. AMSPs provide a useful foundation for establishing an all-hazards approach. While the SAFE Port Act does not call for expanding AMSPs in this manner, it does contain a requirement that natural disasters and other emergencies be included in the scenarios to be tested in the Port Security Exercise Program. On the basis of our prior work, we found there are challenges in using AMSCs and AMSPs as the basis for broader all-hazards planning.
These challenges include determining the extent to which security plans can serve all-hazards purposes. We recommended that DHS encourage port stakeholders to use the AMSCs and MTSA-required AMSPs to discuss all-hazards planning. DHS concurred with this recommendation.

Maritime Security Exercises Require a Broader Scope and Participation

The Coast Guard Captain of the Port and the AMSC are required by MTSA regulations to conduct or participate in exercises to test the effectiveness of AMSPs annually, with no more than 18 months between exercises. These exercises—which have been conducted for the past several years—are designed to continuously improve preparedness by validating information and procedures in the area plan, identifying weaknesses and strengths, and practicing command and control within an incident command/unified command framework. In August 2005, the Coast Guard and TSA initiated the Port Security Training Exercise Program (PortSTEP)—an exercise program designed to involve the entire port community, including public governmental agencies and private industry, and intended to improve connectivity of various surface transportation modes and enhance AMSPs. Between August 2005 and October 2007, the Coast Guard expected to conduct PortSTEP exercises for 40 area committees and other port stakeholders. Additionally, the Coast Guard initiated its own Area Maritime Security Training and Exercise Program (AMStep) in October 2005. This program was also designed to involve the entire port community in the implementation of the AMSPs. Between the two programs, PortSTEP and AMStep, all AMSCs have received a port security exercise each year since inception.
The SAFE Port Act included several new requirements related to security exercises, such as establishing a Port Security Exercise Program to test and evaluate the capabilities of governments and port stakeholders to prevent, prepare for, mitigate against, respond to, and recover from acts of terrorism, natural disasters, and other emergencies at facilities that MTSA regulates. The act also required the establishment of a port security exercise improvement plan process that would identify, disseminate, and monitor the implementation of lessons learned and best practices from port security exercises. Though we have not specifically examined compliance with these new requirements, our work in examining past exercises suggests that implementing a successful exercise program faces several challenges. These challenges include setting the scope of the program to determine how exercise requirements in the SAFE Port Act differ from the area committee exercises that are currently performed. This is especially true for incorporating recovery scenarios into exercises. In this past work, we also found that Coast Guard terrorism exercises frequently focused on prevention and awareness but often did not include recovery activities. According to the Coast Guard, with the recent emphasis on planning for recovery operations, it has held several exercises over the past year that have included, in part or solely, recovery activities. It will be important that future exercises also focus on recovery operations so that public and private stakeholders can address gaps that might hinder commerce after a port incident. Other long-standing challenges include completing after-action reports in a timely and thorough manner and ensuring that all relevant agencies participate. According to the Coast Guard, as the primary sponsor of these programs, it faces a continuing challenge in getting comprehensive participation in these exercises.
The Coast Guard Is Evaluating the Security of Foreign Ports, but Faces Resource Challenges

The security of domestic ports also depends upon security at foreign ports where cargoes bound for the United States originate. To help secure the overseas supply chain, MTSA required the Coast Guard to develop a program to assess security measures in foreign ports and, among other things, recommend steps necessary to improve security measures in those ports. The Coast Guard established this program, called the International Port Security Program, in April 2004. Under this program, the Coast Guard and host nations review the implementation of security measures in the host nations’ ports against established security standards, such as the International Maritime Organization’s International Ship and Port Facility Security (ISPS) Code. Coast Guard teams have been established to conduct country visits, discuss security measures implemented, and collect and share best practices to help ensure a comprehensive and consistent approach to maritime security in ports worldwide. The conditions of these visits, such as timing and locations, are negotiated between the Coast Guard and the host nation. Coast Guard officials also make annual visits to the countries to obtain additional observations on the implementation of security measures and to ensure that deficiencies found during the country visits are addressed. Both the SAFE Port Act and other congressional direction have called for the Coast Guard to increase the pace of its visits to foreign countries. Although MTSA did not set a time frame for completion of these visits, the Coast Guard initially set a goal of visiting the approximately 140 countries that conduct maritime trade with the United States by December 2008. In September 2006, the conference report accompanying the fiscal year 2007 DHS Appropriations Act directed the Coast Guard to “double the amount” of country visits it was conducting.
Subsequently, in October 2006, the SAFE Port Act required the Coast Guard to reassess security measures at the foreign ports every 3 years. Coast Guard officials said they will comply with the more stringent requirements and will reassess countries on a 2-year cycle. With the expedited pace, the Coast Guard now expects to assess all countries by March 2008, after which reassessments will begin. We are currently conducting a review of the Coast Guard’s International Port Security Program that evaluates the Coast Guard’s implementation of international enforcement programs. The report, expected to be issued in early 2008, will cover issues related to the program, such as the extent to which the program is using a risk-based approach in carrying out its work, the challenges the program faces as it moves forward, and the extent to which the observations collected during the country visits are used by other programs such as the Coast Guard’s port state control inspections and high-interest vessel boarding programs. As of September 2007, the Coast Guard reported that it had visited 109 countries under this program and plans to visit 29 more by March 2008. For the countries for which the Coast Guard has issued a final report, the Coast Guard reported that most had “substantially implemented the security code,” while a few countries were found to have not yet implemented the ISPS Code and will be subject to a reassessment or other sanctions. The Coast Guard also found several facilities needing improvements in areas such as access controls, communication devices, fencing, and lighting. While our review is still preliminary, Coast Guard officials told us that to plan and prepare for the next cycle of reassessments beginning next year, they are considering modifying their current visit methodology to incorporate a risk-based approach to prioritize the order and intensity of the next round of country visits.
To do this, they have consulted with a contractor to develop an updated country risk prioritization model. Under the previous model, the priority assigned to a country for a visit was weighted heavily towards the volume of U.S. trade with that country. The new model being considered is to incorporate other factors, such as corruption and terrorist activity levels within the countries. Program officials told us that the details of this revised approach have yet to be finalized. Coast Guard officials told us that as they complete the first round of visits and move into the next phase of revisits, challenges still exist in implementing the program. One challenge identified was that the faster rate at which foreign ports will now be reassessed will require hiring and training new staff—a challenge the officials expect will be made more difficult because experienced personnel who have been with the program since its inception are being transferred to other positions as part of the Coast Guard’s rotational policy. These officials will need to be replaced with newly assigned personnel. Reluctance by some countries to allow the Coast Guard to visit their ports due to concerns over sovereignty was another challenge cited by program officials in completing the first round of visits. According to these officials, before permitting Coast Guard officials to visit their ports, some countries insisted on visiting and assessing a sample of U.S. ports. The Coast Guard was able to accommodate their request through the program’s reciprocal visit feature in which the Coast Guard hosts foreign delegations to visit U.S. ports and observe ISPS Code implementation in the United States. This subsequently helped gain the cooperation of the countries in hosting a Coast Guard visit to their own ports. However, as they begin to revisit countries as part of the program’s next phase, program officials stated that sovereignty concerns may still be an issue. 
Some countries may be reluctant to host a comprehensive country visit on a recurring basis because they believe the frequency—once every 2 to 3 years—is too high. Sovereignty also affects the conditions of the visits, such as timing and locations, because such visits are negotiated between the Coast Guard and the host nation. Thus the Coast Guard team making the visit could be precluded from seeing locations that are not in compliance. Another challenge program officials cite is having limited ability to help countries build on or enhance their capacity to implement the ISPS Code requirements. For example, the SAFE Port Act required that GAO report on various aspects of port security in the Caribbean Basin. We earlier reported that although the Coast Guard found that most of the countries had substantially implemented the ISPS Code, some facilities needed to make improvements or take additional measures. In addition, our discussions with facility operators and government officials in the region indicated that assistance—such as additional training—would help enhance their port security. Program officials stated that while their visits provide opportunities for them to identify potential areas to improve or help sustain the security measures put in place, other than sharing best practices or providing presentations on security practices, the program does not currently have the resources to directly assist countries with more in-depth training or technical assistance. To overcome this, program officials have worked with other agencies (e.g., the Departments of Defense and State) and international organizations (e.g., the Organization of American States) to secure funding for training and assistance to countries where port security conferences have been held (e.g., the Dominican Republic and the Bahamas). 
Program officials indicated that as part of reexamining the approach for the program’s next phase, they will also consider possibilities for improving the program’s ability to provide training and capacity building to countries when a need is identified.

Port Facility Security Efforts Continue, but Additional Evaluation Is Needed

To improve security at individual facilities at ports, many long-standing programs are under way. However, new challenges to their successful implementation have emerged. The Coast Guard is required to conduct assessments of security plans and facility compliance inspections, but faces staffing and training challenges in meeting the SAFE Port Act’s additional requirements, such as having sufficient trained personnel and guidance to conduct facility inspections. TSA’s TWIC program has addressed some of its initial challenges, but will continue to face others as the program rollout continues. Many steps have been taken to ensure that transportation workers are properly screened, but redundancies among various background checks have decreased efficiency and highlighted the need for increased coordination.

The Coast Guard’s Compliance Monitoring of Maritime Facilities Identifies Deficiencies, but Program Effectiveness Overall Has Not Been Evaluated

MTSA and its implementing regulations required owners and operators of certain maritime facilities (e.g., power stations, chemical manufacturing facilities, and refineries that are located on waterways and receive foreign vessels) to conduct assessments of their security vulnerabilities, develop security plans to mitigate these vulnerabilities, and implement measures called for in the security plans by July 1, 2004. Under the Coast Guard regulations, these plans are to include items such as measures for access control, responses to security threats, and drills and exercises to train staff and test the plan.
The plans are “performance-based,” meaning that the Coast Guard has specified the outcomes it is seeking to achieve and has given facilities responsibility for identifying and delivering the measures needed to achieve these outcomes. Under MTSA, Coast Guard guidance calls for the Coast Guard to conduct one on-site facility inspection annually to verify continued compliance with the plan. The SAFE Port Act, enacted in 2006, required the Coast Guard to conduct at least two inspections—one of which was to be unannounced—of each facility annually. We currently have ongoing work that reviews the Coast Guard’s oversight strategy under MTSA and SAFE Port Act requirements. The report, expected later this year, will cover, among other things, the extent to which the Coast Guard has met its inspection requirements and found facilities to be in compliance with their security plans, the sufficiency of trained inspectors and guidance to conduct facility inspections, and aspects of the Coast Guard’s overall management of its MTSA facility oversight program, particularly documenting compliance activities. Our work is preliminary. However, according to our analysis of Coast Guard records and statements from officials, the Coast Guard appears to have conducted facility compliance exams annually at most—but not all—facilities. Redirection of staff to a higher-priority mission, such as Hurricane Katrina emergency operations, may have accounted for some facilities not having received an annual exam. The Coast Guard also conducted a number of unannounced inspections—about 4,500 in 2006, concentrated at around 1,200 facilities—prior to the SAFE Port Act’s passage. According to officials we spoke with, the Coast Guard selected facilities for unannounced inspection based on perceived risk and inspection convenience (e.g., if inspectors were already at the facility for another purpose).
The Coast Guard has identified facility plan compliance deficiencies in about one-third of facilities inspected each year, and the deficiencies identified are concentrated in a small number of categories (e.g., failure to follow the approved plan for ensuring facility access control, record keeping, or meeting facility security officer requirements). We are still in the process of reviewing the data the Coast Guard uses to document compliance activities and will have additional information in our forthcoming report. Sectors we visited generally reported having adequate guidance and staff for conducting consistent compliance exams but, until recently, had little guidance on conducting unannounced inspections, which are often incorporated into work while performing other mission tasks. In the absence of such guidance, the process for conducting unannounced inspections varied considerably in the sectors we visited. For example, inspectors in one sector found the use of a telescope effective in remotely observing facility control measures (such as security guard activities), but these inspectors primarily conduct unannounced inspections as part of vehicle patrols. Inspectors in another sector conduct unannounced inspections at night, going up to the security gate and querying personnel about their security knowledge (e.g., knowledge of high-security-level procedures). As we completed our fieldwork, the Coast Guard issued a Commandant message with guidance on conducting unannounced inspections. This message may provide more consistency, but how the guidance will be applied and its impact on resource needs remain uncertain. Coast Guard officials said they plan to revise their primary circular on facility oversight by February 2008. They are also planning to revise MTSA regulations to conform to SAFE Port Act requirements in 2009 (in time for the reapproval of facility security plans) but are behind schedule.
We recommended in June 2004 that the Coast Guard evaluate its compliance inspection efforts taken during the initial 6-month period after July 1, 2004, and use the results to strengthen its long-term strategy for ensuring compliance. The Coast Guard agreed with this recommendation. Nevertheless, based on our ongoing work, it appears that the Coast Guard has not conducted a comprehensive evaluation of its oversight program to identify strengths or target areas for improvement after 3 years of program implementation. Our prior work across a wide range of public and private-sector organizations shows that high-performing organizations continuously assess their performance with information about results based on their activities. For decision makers to assess program strategies, guidance, and resources, they need accurate and complete data reflecting program activities. We are currently reviewing the accuracy and completeness of Coast Guard compliance data and will report on this issue later this year.

TSA Has Made Progress in Implementing the TWIC Program, but Key Deadline Has Been Missed as TSA Evaluates Test Program

To control access to secure areas of port facilities and vessels, the Secretary of DHS was required by MTSA to, among other things, issue a transportation worker identification card that uses biometrics, such as fingerprints. When MTSA was enacted, TSA had already initiated a program to create an identification credential that could be used by workers in all modes of transportation.
This program, called the TWIC program, is designed to collect personal and biometric information to validate workers’ identities, conduct background checks on transportation workers to ensure they do not pose a threat to security, issue tamper-resistant biometric credentials that cannot be counterfeited, verify these credentials using biometric access control systems before a worker is granted unescorted access to a secure area, and revoke credentials if disqualifying information is discovered or if a card is lost, damaged, or stolen. TSA, in partnership with the Coast Guard, is focusing initial implementation on maritime facilities. We have previously reported on the status of this program and the challenges that it faces. Most recently, we reported that TSA has made progress in implementing the TWIC program and addressing problems we previously identified regarding contract planning and oversight and coordination with stakeholders. For example, TSA reported that it added staff with program and contract management expertise to help oversee the contract and developed plans for conducting public outreach and education efforts. The SAFE Port Act required TSA to implement TWIC at the 10 highest-risk ports by July 1, 2007; conduct a pilot program to test TWIC access control technologies in the maritime environment; issue regulations requiring TWIC card readers based on the findings of the pilot; and periodically report to Congress on the status of the program. However, TSA did not meet the July 1 deadline, citing the need to conduct additional testing of the systems and technologies that will be used to enroll the estimated 770,000 workers who will be required to obtain a TWIC card. According to TSA officials, the agency plans to complete this testing and begin enrolling workers at the Port of Wilmington on October 16, 2007, and begin enrolling workers at additional ports in November 2007.
TSA is also in the process of conducting a pilot program to test TWIC access control technologies in the maritime environment that will include a variety of maritime facilities and vessels in multiple geographic locations. According to TSA, the results of the pilot program will help the agency issue future regulations that will require the installation of access control systems necessary to read the TWIC cards. It is important that TSA establish clear and reasonable time frames for implementing TWIC as the agency begins enrolling workers and issuing TWIC cards in October. TSA could face additional challenges as TWIC implementation progresses; these include monitoring the effectiveness of contract planning and oversight. TSA has developed a quality assurance surveillance plan with performance metrics that the enrollment contractor must meet to receive payment. The agency has also taken steps to strengthen government oversight of the TWIC contract by adding staff with program and contract management expertise. However, the effectiveness of these steps will not be clear until implementation of the TWIC program begins. Ensuring a successful enrollment process for the program presents another challenge. According to TSA, the agency has made communication and coordination top priorities by taking actions such as establishing a TWIC stakeholder communication committee and requiring the enrollment contractor to establish a plan for coordinating and communicating with all stakeholders who will be involved in the program. Finally, TSA will have to address access control technologies to ensure that the program is implemented effectively. It will be important that TSA’s TWIC access control technology pilot ensure that these technologies work effectively in the maritime environment before facilities and vessels are required to implement them.
DHS Working to Coordinate Multiple Background Check Programs for Transportation Workers

Since the 9/11 attacks, the federal government has taken steps to ensure that transportation workers, many of whom transport hazardous materials or have access to secure areas in locations such as port facilities, are properly screened to ensure they do not pose a security risk. Concerns have been raised, however, that transportation workers may face a variety of background checks, each with different standards. In July 2004, the 9/11 Commission reported that having too many different biometric standards, travel facilitation systems, credentialing systems, and screening requirements hampers the development of information crucial for stopping terrorists from entering the country, is expensive, and is inefficient. The commission recommended that a coordinating body raise standards, facilitate information-sharing, and survey systems for potential problems. In August 2004, Homeland Security Presidential Directive-11 announced a new U.S. policy to “implement a coordinated and comprehensive approach to terrorist-related screening—in immigration, law enforcement, intelligence, counterintelligence, and protection of the border, transportation systems, and critical infrastructure—that supports homeland security, at home and abroad.” DHS components have begun a number of their own background check initiatives. For example, in January 2007, TSA determined that the background checks required for three other DHS programs satisfied the background check requirement for the TWIC program. That is, an applicant who has already undergone a background check in association with any of these three programs does not have to undergo an additional background check and pays a reduced fee to obtain a TWIC card.
Similarly, the Coast Guard plans to consolidate four credentials and require that all pertinent information previously submitted by an applicant at a Coast Guard Regional Examination Center will be forwarded by the center to TSA through the TWIC enrollment process. In April 2007, we completed a study of DHS background check programs as part of a SAFE Port Act requirement to do so. We found that the six programs we reviewed were conducted independently of one another, collected similar information, and used similar background check processes. Further, each program operated separate enrollment facilities to collect background information and did not share it with the other programs. We also found that DHS did not track the number of workers who, needing multiple credentials, were subjected to multiple background check programs. Because DHS is responsible for a large number of background check programs, we recommended that DHS ensure that its coordination plan includes implementation steps, time frames, and budget requirements; discusses potential costs/benefits of program standardization; and explores options for coordinating and aligning background checks within DHS and other federal agencies. DHS concurred with our recommendations and continues to take steps— both at the department level and within its various agencies—to consolidate, coordinate, and harmonize such background check programs. At the department level, DHS created SCO in July 2006 to coordinate DHS background check programs. SCO is in the early stages of developing its plans for this coordination. In December 2006, SCO issued a report identifying common problems, challenges, and needed improvements in the credentialing programs and processes across the department. 
In April 2007, the office awarded a contract to provide the methodology and support for developing an implementation plan that includes common design and comparability standards and related milestones to coordinate DHS screening and credentialing programs. Under this contract, the contractor is to produce three deliverables to align DHS screening and credentialing activities, set a method and time frame for applying a common set of design and comparability standards, and eliminate redundancy through harmonization. These three deliverables are as follows: Credentialing framework: A framework, completed in July 2007, that describes a credentialing life cycle of registration and enrollment, eligibility vetting and risk assessment, issuance, expiration and revocation, and redress. This framework was to incorporate risk-based levels or criteria and an assessment of the legal, privacy, policy, operational, and technical challenges. Technical review: An assessment, scheduled for completion in October 2007, to be conducted by the contractor in conjunction with the DHS Office of the Chief Information Officer. It is to include a review of the issues present in the current technical environment and the proposed future technical environment needed to address those issues, and to provide recommendations for targeted investment reuse and key target technologies. Transition plan: A plan, scheduled for completion in November 2007, that is to outline the projects needed to actualize the framework, including identification of major activities, milestones, and associated timelines and costs. Stakeholders in this effort include multiple components of DHS and the Departments of State and Justice. In addition, the DHS Office of the Chief Information Officer (CIO) and the director of SCO issued a memo in May 2007 to promote standardization across screening and credentialing programs. 
In this memo, DHS indicated that (1) programs requiring the collection and use of fingerprints to vet individuals will use the Automated Biometric Identification System (IDENT); (2) these programs are to reuse existing or currently planned and funded infrastructure for the intake of identity information to the greatest extent possible; (3) its CIO is to establish a procurement plan to ensure that the department can handle a large volume of automated vetting from programs currently in the planning phase; and (4) to support the sharing of databases and potential consolidation of duplicative applications, the Enterprise Data Management Office is currently developing an inventory of biographic data assets that DHS maintains to support identity management and screening processes. While continuing to consolidate, coordinate, and harmonize background check programs, DHS will likely face additional challenges, such as ensuring that its plans are sufficiently complete without being overly restrictive and addressing the lack of information regarding the potential costs and benefits associated with the number of redundant background checks. SCO will be challenged to coordinate DHS’s background check programs in such a way that any common set of standards developed to eliminate redundant checks meets the varied needs of all the programs without being so strict that it unduly limits the applicant pool or so intrusive that potential applicants are unwilling to take part. Without knowing the potential costs and benefits associated with the number of redundant background checks that harmonization would eliminate, DHS lacks the performance information that would allow its program managers to compare their program results with goals. Thus, DHS cannot be certain where to target program resources to improve performance. 
As we recommended, DHS could benefit from a plan that includes, at a minimum, a discussion of the potential costs and benefits associated with the number of redundant background checks that would be eliminated through harmonization. Container Security Programs Continue to Expand and Mature, but New Challenges Emerge Through the development of strategic plans, human capital strategies, and performance measures, several container security programs have been established and matured. However, these programs continue to face technical and management challenges in implementation. As part of its layered security strategy, CBP developed the Automated Targeting System as a decision support tool to assess the risks of individual cargo containers. ATS is a complex mathematical model that uses weighted rules that assign a risk score to each arriving shipment based on shipping information (e.g., manifests, bills of lading, and entry data). Although the program has faced quality assurance challenges from its inception, CBP has made significant progress in addressing these challenges. CBP’s in-bond program does not collect detailed information at the U.S. port of arrival that could aid in identifying cargo posing a security risk and promote the effective use of inspection resources. In the past, CSI has lacked sufficient staff to meet program requirements. C-TPAT has faced challenges with validation quality and management in the past, in part due to its rapid growth. The Department of Energy’s (DOE) Megaports Initiative faces ongoing operational and technical challenges in the installation and maintenance of radiation detection equipment at ports. In addition, implementing the Secure Freight Initiative and the 9/11 Commission Act of 2007 presents additional challenges for the scanning of cargo containers inbound to the United States. Management of the Automated Targeting System Has Improved CBP is responsible for preventing terrorists and WMD from entering the United States. 
As part of this responsibility, CBP addresses the potential threat posed by the movement of oceangoing cargo containers. To perform this mission, CBP officers at seaports utilize officer knowledge and CBP automated systems to assist in determining which containers entering the country will undergo inspections, and then perform the necessary level of inspection of each container based upon risk. To assist in determining which containers are to be subjected to inspection, CBP uses a layered security strategy that attempts to focus resources on potentially risky cargo shipped in containers while allowing other oceangoing containers to proceed without disrupting commerce. ATS is one key element of this strategy. CBP uses ATS as a decision support tool to review documentation, including electronic manifest information submitted by the ocean carriers on all arriving shipments, and entry data submitted by brokers to develop risk scores that help identify containers for additional inspection. CBP requires the carriers to submit manifest information 24 hours before a United States-bound sea container is loaded onto a vessel in a foreign port. CBP officers use these scores to help them make decisions on the extent of documentary review or additional inspection as required. We have conducted several reviews of ATS and made recommendations for its improvement. Consistent with these recommendations, CBP has implemented a number of important internal controls for the administration and implementation of ATS. For example, CBP (1) has established performance metrics for ATS, (2) is manually comparing the results of randomly conducted inspections with the results of inspections resulting from ATS analysis of the shipment data, and (3) has developed and implemented a testing and simulation environment to conduct computer-generated tests of ATS. Since our last report on ATS, the SAFE Port Act required that the CBP Commissioner take additional actions to improve ATS. 
These requirements included steps such as (1) having an independent panel review the effectiveness and capabilities of ATS; (2) considering future iterations of ATS that would incorporate smart features; (3) ensuring that ATS has the capability to electronically compare manifest and other available data to detect any significant anomalies and facilitate their resolution; (4) ensuring that ATS has the capability to electronically identify, compile, and compare select data elements following a maritime transportation security incident; and (5) developing a schedule to address recommendations made by GAO and the Inspectors General of the Department of the Treasury and DHS. CBP’s Management of the In-Bond Cargo System Impedes Efforts to Manage Security Risks CBP’s in-bond system—which allows goods to transit the United States without officially entering U.S. commerce—must balance the competing goals of providing port security, facilitating trade, and collecting trade revenues. However, we have previously reported that CBP’s management of the system has impeded efforts to manage security risks. Specifically, CBP does not collect detailed information on in-bond cargo at the U.S. port of arrival that could aid in identifying cargo posing a security risk and promote effective use of inspection resources. The in-bond system is designed to facilitate the flow of trade throughout the United States and is estimated to be widely used. The U.S. customs system allows cargo to move from the U.S. arrival port, without appraisal or payment of duties, to another U.S. port for official entry into U.S. commerce or for exportation. In-bond regulations currently permit bonded carriers 15 to 60 days, depending on the mode of shipment, to reach their final destination and allow them to change a shipment’s final destination without notifying CBP. The in-bond system allows the trade community to avoid congestion and delays at U.S. 
seaports whose infrastructure has not kept pace with the dramatic growth in trade volume. The in-bond system facilitates trade by giving importers and shipping agents the flexibility to move cargo more efficiently. Using the number of in-bond transactions reported by CBP for the 6-month period of October 2004 to March 2005, we found over 6.5 million in-bond transactions were initiated nationwide. Some CBP port officials have estimated that in-bond shipments represent from 30 percent to 60 percent of goods received at their ports. As discussed earlier in this testimony, CBP uses manifest information it receives on all cargo arriving at U.S. ports (including in-bond cargo) as input for ATS scoring to aid in identifying security risks and setting inspection priorities. For regular cargo, the ATS score is updated with more detailed information as the cargo makes official entry at the arrival port. For in-bond cargo, the ATS scores generally are not updated until these goods move from the port of arrival to the destination port for official entry into United States commerce, or not updated at all for cargo that is intended to be exported. As a result, in-bond goods might transit the United States without having the most accurate ATS risk score. Entry information frequently changes the ATS score for in-bond goods. For example, CBP provided data for four major ports comparing the ATS score assigned to in-bond cargo at the port of arrival based on the manifest to the ATS score given after goods made official entry at the destination port. These data show that for the four ports, the ATS score based on the manifest information stayed the same an average of 30 percent of the time after being updated with entry information, increased an average of 23 percent of the time, and decreased an average of 47 percent of the time. 
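The four-port comparison above can be reproduced in miniature with a short tally of manifest-based versus entry-updated scores. The score pairs below are invented for illustration and do not reproduce CBP's data; the point is the bookkeeping, not the numbers.

```python
# Hypothetical (manifest_score, entry_score) pairs for in-bond shipments;
# each pair records the ATS score before and after entry data arrived.
score_pairs = [(30, 30), (30, 55), (60, 25), (40, 40), (50, 20), (20, 45)]

# Tally whether the entry-updated score stayed the same, increased, or decreased.
changes = {"same": 0, "increased": 0, "decreased": 0}
for manifest_score, entry_score in score_pairs:
    if entry_score > manifest_score:
        changes["increased"] += 1
    elif entry_score < manifest_score:
        changes["decreased"] += 1
    else:
        changes["same"] += 1

total = len(score_pairs)
for outcome, count in changes.items():
    print(f"{outcome}: {count / total:.0%}")
```

With real per-port data, the same tally run port by port and averaged would yield figures like the 30/23/47 percent split reported above.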
A higher ATS score can result in higher priority being given to cargo for inspection than otherwise would be given based solely on the manifest information. A lower ATS score can result in cargo being given a lower priority for inspection and potentially shift inspection resources to cargo deemed a higher security risk. Without having the most accurate ATS score, in-bond goods transiting the United States pose a potential security threat because higher-risk cargo may not be identified for inspection at the port of arrival. In addition, scarce inspection resources may be misdirected to in-bond goods that a security score based on better information might have shown did not warrant inspection. We previously recommended that the Commissioner of CBP take action in three areas to improve the management of the in-bond program, which included collecting and using improved information on in-bond shipments to update the ATS score for in-bond movements at the arrival port and enable better-informed decisions affecting security, trade, and revenue collection. DHS agreed with most of our recommendations. According to CBP, it is in the process of developing an in-bond weight set to be used to further identify cargo posing a security risk. The weight set is being developed based on expert knowledge, analysis of previous in-bond seizures, and creation of rules based on in-bond concepts. The SAFE Port Act of 2006 contains provisions related to securing the international cargo supply chain, including provisions related to the movement of in-bond cargo. 
Specifically, it requires that CBP submit a report to several congressional committees on the in-bond system that includes an assessment of whether ports of arrival should require additional information for in-bond cargo, a plan for tracking in-bond cargo in CBP’s Automated Commercial Environment information system, and an assessment of the personnel required to ensure reconciliation of in-bond cargo between arrival port and destination port. The report must also contain an assessment of the feasibility of reducing transit time while traveling in-bond, and an evaluation of the criteria for targeting and examining in-bond cargo. Although the report was due June 30, 2007, CBP has not yet finalized the report and released it to Congress. The CSI Program Continues to Mature, but Addressing SAFE Port Act Requirements Adds New Challenges CBP initiated its CSI program in January 2002 to detect and deter terrorists from smuggling WMD via cargo containers before those containers reach domestic seaports. The SAFE Port Act formalized the CSI program into law. Under CSI, foreign governments sign a bilateral agreement with CBP to allow teams of U.S. customs officials to be stationed at foreign seaports to identify cargo container shipments at risk of containing WMD. CBP personnel use automated risk assessment information and intelligence to identify those shipments at risk of containing WMD. When a shipment is determined to be high risk, CBP officials refer it to host government officials who determine whether to examine the shipment before it leaves their seaport for the United States. In most cases, host government officials honor the U.S. request by examining the referred shipments with nonintrusive inspection equipment and, if they deem necessary, by opening the cargo containers to physically search the contents inside. CBP planned to have a total of 58 CSI seaports in operation by the end of fiscal year 2007. 
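The automated risk assessment referred to here is the weighted-rule scoring that ATS applies to shipment data, as described earlier in this testimony. A minimal sketch of that idea follows; the rules, weights, and inspection threshold are invented for illustration, since the actual ATS rule set is not public.

```python
from dataclasses import dataclass

@dataclass
class Shipment:
    """Simplified manifest data for one arriving container (illustrative fields)."""
    shipper_known: bool
    origin_port_flagged: bool
    cargo_description_vague: bool

# Hypothetical weighted rules: (label, weight, predicate over a shipment).
RULES = [
    ("unknown shipper", 40, lambda s: not s.shipper_known),
    ("flagged origin port", 35, lambda s: s.origin_port_flagged),
    ("vague cargo description", 25, lambda s: s.cargo_description_vague),
]

INSPECTION_THRESHOLD = 50  # assumed cutoff for referring a container to inspection

def risk_score(shipment: Shipment) -> int:
    """Sum the weights of every rule the shipment triggers."""
    return sum(weight for _, weight, rule in RULES if rule(shipment))

def needs_inspection(shipment: Shipment) -> bool:
    """High-risk containers (score at or above the threshold) get referred."""
    return risk_score(shipment) >= INSPECTION_THRESHOLD

container = Shipment(shipper_known=False, origin_port_flagged=False,
                     cargo_description_vague=True)
print(risk_score(container), needs_inspection(container))  # 65 True
```

Refining such a model, as CBP's in-bond weight set effort aims to do, means adjusting the rules and weights based on expert knowledge and past seizure data.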
Our 2003 and 2005 reports on the CSI program found both successes and challenges faced by CBP in implementing the program. Since our last CSI report in 2005, CBP has addressed some of the challenges we identified and has taken steps to improve the CSI program. Specifically, CBP contributed to the Strategy to Enhance International Supply Chain Security that DHS issued in July 2007, which addressed a SAFE Port Act requirement and filled an important gap—between broad national strategies and program-specific strategies, such as for CSI—in the strategic framework for maritime security that has evolved since 9/11. In addition, in 2006 CBP issued a revised CSI strategic plan for 2006 to 2011, which added three critical elements that we had identified in our April 2005 report as missing from the plan’s previous iteration. In the revised plan, CBP described how performance goals and measures are related to CSI objectives, how CBP evaluates CSI program operations, and what external factors beyond CBP’s control could affect program operations and outcomes. Also, by expanding CSI operations to 58 seaports by the end of September 2007, CBP would have met its objective of expanding CSI locations and program activities. CBP projected that at the end of fiscal year 2007 between 85 and 87 percent of all U.S.-bound shipments in containers will pass through CSI ports where the risk level of the container cargo is assessed and the contents are examined as deemed necessary. Although CBP’s goal is to review information about all U.S.-bound containers at CSI seaports for high-risk contents before the containers depart for the United States, we reported in 2005 that the agency has not been able to place enough staff at some CSI ports to do so. Also, the SAFE Port Act required DHS to develop a human capital management plan to determine adequate staffing levels in U.S. and CSI ports. 
CBP has developed a human capital plan, increased the number of staff at CSI ports, and provided additional support to the deployed CSI staff by using staff in the United States to screen containers for various risk factors and potential inspection. With these additional resources, CBP reports that manifest data for all U.S.-bound container cargo are reviewed using ATS to determine whether the container is at high risk of containing WMD. However, the agency faces challenges in ensuring that optimal numbers of staff are assigned to CSI ports due in part to its reliance on placing staff overseas at CSI ports without systematically determining which functions could be performed overseas and which could be performed domestically. Also, in 2006 CBP improved its methods for conducting onsite evaluations of CSI ports, in part by requiring CSI teams at the seaports to demonstrate their proficiency at conducting program activities and by employing electronic tools designed to assist in the efficient and systematic collection and analysis of data to help in evaluating the CSI team’s proficiency. In addition, CBP continued to refine the performance measures it uses to track the effectiveness of the CSI program by streamlining the number of measures it uses to six, modifying how one measure is calculated to address an issue we identified in our April 2005 report, and developing performance targets for the measures. We are continuing to review these assessment practices as part of our ongoing review of the CSI program, and expect to report on the results of this effort shortly. Similar to our recommendation in a previous CSI report, the SAFE Port Act called upon DHS to establish minimum technical criteria for the use of nonintrusive inspection equipment in conjunction with CSI. The act also directs DHS to require that seaports receiving CSI designation operate such equipment in accordance with these criteria and with standard operating procedures developed by DHS. 
CBP officials stated that their agency faces challenges in implementing this requirement due to sovereignty issues and the fact that the agency is not a standard-setting organization, either for equipment or for inspection processes or practices. However, CBP has developed minimum technical standards for equipment used at domestic ports and the World Customs Organization (WCO) had described issues—not standards—to consider when procuring inspection equipment. Our work suggests that CBP may face continued challenges establishing equipment standards and monitoring host government operations, which we are also examining in our ongoing review of the CSI program. C-TPAT Continues to Expand and Mature, but Management Challenges Remain CBP initiated C-TPAT in November 2001 to complement other maritime security programs as part of the agency’s layered security strategy. In October 2006, the SAFE Port Act formalized C-TPAT into law. C-TPAT is a voluntary program that enables CBP officials to work in partnership with private companies to review the security of their international supply chains and improve the security of their shipments to the United States. In return for committing to improve the security of their shipments by joining the program, C-TPAT members receive benefits that result in the likelihood of reduced scrutiny of their shipments, such as a reduced number of inspections or shorter wait times for their shipments. CBP uses information about C-TPAT membership to adjust risk-based targeting of these members’ shipments in ATS. As of July 2007, CBP had certified more than 7,000 companies that import goods via cargo containers through U.S. seaports—which accounted for approximately 45 percent of all U.S. imports—and validated the security practices of 78 percent of these certified participants. 
We reported on the progress of the C-TPAT program in 2003 and 2005 and recommended that CBP develop a strategic plan and performance measures to track the program’s status in meeting its strategic goals. DHS concurred with these recommendations. The SAFE Port Act also mandated that CBP develop and implement a 5-year strategic plan with outcome-based goals and performance measures for C-TPAT. CBP officials stated that they are in the process of updating their strategic plan for C-TPAT, which was issued in November 2004, for 2007 to 2012. This updated plan is being reviewed within CBP, but a time frame for issuing the plan has not been established. We recommended in our March 2005 report that CBP establish performance measures to track its progress in meeting the goals and objectives established as part of the strategic planning process. Although CBP has since put additional performance measures in place, CBP’s efforts have focused on measures regarding program participation and facilitating trade and travel. CBP has not yet developed performance measures for C-TPAT’s efforts aimed at ensuring improved supply chain security, which is the program’s purpose. In our previous work, we acknowledged that the C-TPAT program holds promise as part of a layered maritime security strategy. However, we also raised a number of concerns about the overall management of the program. Since our past reports, the C-TPAT program has continued to mature. The SAFE Port Act mandated that actions—similar to ones we had recommended in our March 2005 report—be taken to strengthen the management of the program. For example, the act included a new goal that CBP make a certification determination within 90 days of CBP’s receipt of a C-TPAT application, validate C-TPAT members’ security measures and supply chain security practices within 1 year of their certification, and revalidate those members no less than once every 4 years. 
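The act's time-frame goals (a certification decision within 90 days of application, validation within 1 year of certification, and revalidation at least once every 4 years) lend themselves to a simple compliance check. The sketch below is illustrative only and is not CBP's actual tracking system; the function and field names are assumptions made for the example.

```python
from datetime import date, timedelta
from typing import Optional

# Time-frame goals drawn from the SAFE Port Act provisions described above.
CERTIFY_WITHIN = timedelta(days=90)          # certification decision after application
VALIDATE_WITHIN = timedelta(days=365)        # initial validation after certification
REVALIDATE_WITHIN = timedelta(days=4 * 365)  # revalidation after last validation

def overdue_actions(applied: date, certified: Optional[date],
                    validated: Optional[date], today: date) -> list:
    """List which C-TPAT milestones have slipped past the act's goals.

    A minimal sketch: real compliance tracking involves many more
    member states and data elements than the three dates used here.
    """
    overdue = []
    if certified is None and today - applied > CERTIFY_WITHIN:
        overdue.append("certification decision")
    if certified is not None and validated is None and today - certified > VALIDATE_WITHIN:
        overdue.append("initial validation")
    if validated is not None and today - validated > REVALIDATE_WITHIN:
        overdue.append("revalidation")
    return overdue

# A member certified in March 2007 but still unvalidated in June 2008
# has exceeded the 1-year validation goal.
print(overdue_actions(date(2007, 1, 2), date(2007, 3, 1), None, date(2008, 6, 1)))
```

Running such a check over every member record is essentially what a tracking system must do to show whether the program is staying within the act's time frames.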
As we recommended in our March 2005 report, CBP has developed a human capital plan and implemented a records management system for documenting key program decisions. CBP has addressed C-TPAT staffing challenges by increasing the number of supply chain security specialists from 41 in 2005 to 156 in 2007. In February 2007, CBP updated its resource needs to reflect SAFE Port Act requirements, including that certification, validation, and revalidation processes be conducted within specified time frames. CBP believes that C-TPAT’s current staff of 156 supply chain security specialists will allow it to meet the act’s initial validation and revalidation goals for 2007 and 2008. If an additional 50 specialists authorized by the act are made available by late 2008, CBP expects to be able to stay within compliance of the act’s time frame requirements through 2009. In addition, CBP developed and implemented a centralized electronic records management system to facilitate information storage and sharing and communication with C-TPAT partners. This system—known as the C-TPAT Portal—enables CBP to track and ascertain the status of C-TPAT applicants and partners to ensure that they are certified, validated, and revalidated within required time frames. As part of our ongoing work, we are reviewing the data captured in Portal, including data needed by CBP management to assess the efficiency of C-TPAT operations and to determine compliance with its program requirements. These actions—dedicating resources to carry out certification and validation reviews and putting a system in place to track the timeliness of these reviews—should help CBP meet several of the mandates of the SAFE Port Act. We expect to issue a final report early next year. Our 2005 report raised concerns about CBP granting benefits prematurely—before CBP had validated company practices. 
Instead of granting new members full benefits without actual verification of their supply chain security, CBP implemented three tiers to grant companies graduated benefits based on CBP’s certification and validation of their security practices. Related to this, the SAFE Port Act codified CBP’s policy of granting graduated benefits to C-TPAT members. Tier 1 benefits—a limited reduction in the score assigned in ATS—are granted to companies upon certification that their written description of their security profile meets minimum security criteria. Companies whose security practices CBP validates in an on-site assessment receive Tier 2 benefits that may include reduced scores in ATS, reduced cargo examinations, and priority searches of cargo. If CBP’s validation shows sustained commitment by a company to security practices beyond what is expected, the company receives Tier 3 benefits. Tier 3 benefits may include expedited cargo release at U.S. ports at all threat levels, further reduction in cargo examinations, priority examinations, and participation in joint incident management exercises. Our 2005 report also raised concerns about whether the validation process was rigorous enough. Similarly, the SAFE Port Act mandates that the validation process be strengthened, including setting a 1-year time frame for completing validations. CBP initially set a goal of validating all companies within their first 3 years as C-TPAT members, but the program’s rapid growth in membership made the goal unachievable. CBP then moved to a risk-based approach to selecting members for validation, considering factors such as a company’s having foreign supply chain operations in a known terrorist area or involving multiple foreign suppliers. CBP further modified its approach to selecting companies for validation to achieve greater efficiency by conducting “blitz” operations to validate foreign elements of multiple members’ supply chains in a single trip. 
Blitz operations focus on factors such as C-TPAT members within a certain industry, supply chains within a certain geographic area, or foreign suppliers to multiple C-TPAT members. Risks remain a consideration, according to CBP, but the blitz strategy drives the decision of when a member company will be validated. In addition to taking these actions to efficiently conduct validations, CBP has periodically updated the minimum security requirements that companies must meet to be validated and is conducting a pilot program of using third-party contractors to conduct validation assessments. As part of our ongoing work, we are reviewing these actions, which are required as part of the SAFE Port Act, and other CBP efforts to enhance its C-TPAT validation process. CBP Has Played a Key Role in Promoting Global Customs Security Standards and Initiatives, but Progress with These Efforts Presents New Challenges for CSI and C-TPAT The CSI and C-TPAT programs have provided a model for global customs security standards, but as other countries adopt the core principles of CSI and programs similar to C-TPAT, CBP may face new challenges. Foreign officials within the WCO and elsewhere have looked to the CSI and C-TPAT programs as potential models for enhancing supply chain security. Also, CBP has taken a lead role in working with members of the domestic and international customs and trade community on approaches to standardizing supply chain security worldwide. As CBP has recognized, and we have previously reported, in security matters the United States is not self-contained, in either its problems or its solutions. The growing interdependence of nations requires policymakers to recognize the need to work in partnerships across international boundaries to achieve vital national goals. 
For this reason, CBP has committed through its strategic planning process to develop and promote an international framework of standards governing customs-to-customs relationships and customs-to-business relationships in a manner similar to CSI and C-TPAT, respectively. To achieve this, CBP has worked with foreign customs administrations through the WCO to establish a framework creating international standards that provide increased security of the global supply chain while facilitating international trade. The member countries of the WCO, including the United States, adopted such a framework, known as the WCO Framework of Standards to Secure and Facilitate Global Trade and commonly referred to as the SAFE Framework, in June 2005. The SAFE Framework internationalizes the core principles of CSI in creating global standards for customs security practices and promotes international customs-to-business partnership programs, such as C-TPAT. As of September 11, 2007, 148 WCO member countries had signed letters of intent to implement the SAFE Framework. CBP, along with the customs administrations of other countries and through the WCO, provides technical assistance and training to those countries that want to implement the SAFE Framework, but do not yet have the capacity to do so. The SAFE Framework enhances the CSI program by promoting the implementation of CSI-like customs security practices, including the use of electronic advance information requirements and risk-based targeting, in both CSI and non-CSI ports worldwide. The framework also lays the foundation for mutual recognition, an arrangement whereby one country can attain a certain level of assurance about the customs security standards and practices and business partnership programs of another country. In June 2007, CBP entered into the first mutual recognition arrangement of a business-to-customs partnership program with the New Zealand Customs Service. 
This arrangement stipulates that members of one country’s business-to-customs program be recognized and receive similar benefits from the customs service of the other country. CBP is pursuing similar arrangements with Jordan and Japan, and is conducting a pilot program with the European Commission to test approaches to achieving mutual recognition and address differences in their respective programs. However, the specific details of how the participating countries’ customs officials will implement the mutual recognition arrangement—such as what benefits, if any, should be allotted to members of other countries’ C-TPAT-like programs—have yet to be determined. As CBP goes forward, it may face challenges in defining the future of its CSI and C-TPAT programs and, more specifically, in managing the implementation of mutual recognition arrangements, including articulating and agreeing to the criteria for accepting another country’s program; the specific arrangements for implementation, including the sharing of information; and the actions for verification, enforcement, and, if necessary, termination of the arrangement. DNDO Faces Challenges Testing Radiation Detection Equipment DHS also has container security programs to develop and test equipment to scan containers for radiation. Its DNDO was originally created in April 2005 by presidential directive, but the office was formally established in October 2006 by Section 501 of the SAFE Port Act. DNDO has lead responsibility for conducting the research, development, testing, and evaluation of radiation detection equipment that can be used to prevent nuclear or radiological materials from entering the United States. DNDO is charged with devising the layered system of radiation detection equipment and operating procedures—known as the “global architecture”—designed to prevent nuclear smuggling at foreign ports, the nation’s borders, and inside the United States. 
Much of DNDO’s work on radiation detection equipment to date has focused on the development and use of radiation detection portal monitors, large-scale equipment used to screen vehicles, people, and cargo entering the United States. Current portal monitors detect the presence of radiation but cannot distinguish between benign, naturally occurring radiological materials, such as ceramic tile, and dangerous materials, such as highly enriched uranium. Since 2005, DNDO has been testing, developing, and planning to deploy the next generation of portal monitors, known as “Advanced Spectroscopic Portals” (ASPs), which can not only detect but also identify radiological and nuclear materials within a shipping container. In July 2006, DNDO announced that it had awarded contracts to three vendors to develop and purchase $1.2 billion worth of ASPs over 5 years for deployment at U.S. points of entry. We have reported a number of times to Congress concerning DNDO’s execution of the ASP program. To ensure that DHS’ substantial investment in radiation detection technology yields the greatest possible level of detection capability at the lowest possible cost, in March 2006 we recommended that once the costs and capabilities of ASPs were well understood, and before any of the new equipment was purchased for deployment, the Secretary of DHS work with the Director of DNDO to analyze the costs and benefits of deploying ASPs. Further, we recommended that this analysis focus on determining whether any additional detection capability provided by the ASPs was worth the considerable additional costs. In response to our recommendation, DNDO issued its cost-benefit analysis in May 2006 and an updated, revised version in June 2006. According to senior agency officials, DNDO believes that the basic conclusions of its cost-benefit analysis showed that the new ASP monitors are a sound investment for the U.S. government.
However, in October 2006, we concluded that DNDO’s cost-benefit analysis did not provide a sound basis for DNDO’s decision to purchase and deploy ASP technology because it relied on assumptions about the anticipated performance level of ASPs instead of actual test data and did not justify DHS’ planned $1.2 billion expenditure. We also reported that DNDO did not assess the likelihood that ASPs would either misidentify or fail to detect nuclear or radiological material. Rather, it focused its analysis on reducing the time necessary to screen traffic at border check points and reducing the impact of any delays on commerce. We recommended that DNDO conduct further testing of ASPs and the currently deployed portal monitors before spending additional funds to purchase ASPs. DNDO conducted this testing of ASPs at the Nevada Test Site during February and March 2007. In September 2007, we testified on these tests, stating that, in our view, DNDO used biased test methods that enhanced the performance of the ASPs. In particular, DNDO conducted preliminary runs of almost all the materials and combinations of materials that it used in the formal tests and then allowed ASP contractors to collect test data and adjust their systems to identify these materials. In addition, DNDO did not attempt in its tests to identify the limitations of ASPs—a critical oversight in its test plan. Specifically, the materials that DNDO included in its test plan did not emit enough radiation to hide or mask the presence of nuclear materials located within a shipping container. Finally, in its tests of the existing radiation detection system, DNDO did not include a critical standard operating procedure that officers with CBP use to improve the system’s effectiveness. It is important to note that, during the course of our work, CBP, DOE, and national laboratory officials we spoke to voiced concern about their lack of involvement in the planning and execution of the Nevada Test Site tests.
For example, DOE officials told us that they informed DNDO in November 2006 of their concerns that the materials DNDO planned to use in its tests were too weak to effectively mask the presence of nuclear materials in a container. DNDO officials rejected DOE officials’ suggestion to use stronger materials in the tests because, according to DNDO, there would be insufficient time to obtain these materials and still obtain the DHS Secretary’s approval for full-scale production of ASPs by DNDO’s self-imposed deadline of June 26, 2007. Although DNDO has agreed to perform computer simulations to address this issue, the DNDO Director would not commit at the September 2007 hearing to delaying full-scale ASP production until all the test results were in. DOE Continues to Expand Its Megaports Program The Megaports Initiative, initiated by DOE’s National Nuclear Security Administration in 2003, represents another component in the efforts to prevent terrorists from smuggling WMD in cargo containers from overseas locations. The goal of this initiative is to enable foreign government personnel at key foreign seaports to use radiation detection equipment to screen shipping containers entering and leaving these ports, regardless of the containers’ destination, for nuclear and other radioactive material that could be used against the United States or its allies. DOE installs radiation detection equipment, such as radiation portal monitors and handheld radioactive isotope identification devices, at foreign seaports; this equipment is then operated by foreign government officials and port personnel working at these ports. Through August 2007, DOE had completed installation of radiation detection equipment at eight ports: Rotterdam, the Netherlands; Piraeus, Greece; Colombo, Sri Lanka; Algeciras, Spain; Singapore; Freeport, Bahamas; Manila, Philippines; and Antwerp, Belgium (Phase I).
Operational testing is under way at four additional ports: Antwerp, Belgium (Phase II); Puerto Cortes, Honduras; Qasim, Pakistan; and Laem Chabang, Thailand. Additionally, DOE has signed agreements to begin work and is in various stages of implementation at ports in 12 other countries, including the United Kingdom, United Arab Emirates/Dubai, Oman, Israel, South Korea, China, Egypt, Jamaica, the Dominican Republic, Colombia, Panama, and Mexico, as well as Taiwan and Hong Kong. Several of these ports are also part of the Secure Freight Initiative, discussed in the next section. Further, in an effort to expand cooperation, DOE is engaged in negotiations with approximately 20 additional countries in Europe, Asia, the Middle East, and Latin America. DOE had made limited progress in gaining agreements to install radiation detection equipment at the highest priority seaports when we reported on this program in March 2005. At that time, the agency had completed work at only two ports and signed agreements to initiate work at five others. We also noted that DOE’s cost projections for the program were uncertain, in part because they were based on DOE’s $15 million estimate for the average cost per port. This per-port cost estimate may not be accurate because it was based primarily on DOE’s radiation detection assistance work at Russian land borders, airports, and seaports and did not account for the fact that the costs of installing equipment at individual ports vary and are influenced by factors such as a port’s size, physical layout, and existing infrastructure. Since our review, DOE has developed a strategic plan for the Megaports Initiative and revised its per-port estimates to reflect port size, with estimates ranging from $2.6 million to $30.4 million.
As we earlier reported, DOE faces several operational and technical challenges specific to installing and maintaining radiation detection equipment at foreign ports as the agency continues to implement its Megaports Initiative. These challenges include ensuring the ability to detect radioactive material, overcoming the physical layout of ports and cargo-stacking configurations, and sustaining equipment in port environments with high winds and sea spray. Secure Freight Initiative Testing Feasibility of Combining Scanning Technologies The SAFE Port Act required that a pilot program—known as the Secure Freight Initiative (SFI)—be conducted to determine the feasibility of 100 percent scanning of U.S.-bound containers. To fulfill this requirement, CBP and DOE jointly announced the formation of SFI in December 2006 as an effort to build upon existing port security measures by enhancing the U.S. government’s ability to scan containers for nuclear and radiological materials overseas and better assess the risk of inbound containers. In essence, SFI builds upon the CSI and Megaports programs. The SAFE Port Act specified that new integrated scanning systems that couple nonintrusive imaging equipment and radiation detection equipment must be pilot-tested. It also required that, once fully implemented, the pilot integrated scanning system scan 100 percent of containers destined for the United States that are loaded at pilot program ports. According to agency officials, the initial phase of the initiative will involve the deployment of a combination of existing container scanning technology—such as the X-ray and gamma-ray scanners used by host nations at CSI ports to locate high-density objects inside containers that could be used to shield nuclear materials—and radiation detection equipment. The ports chosen to receive this integrated technology are Port Qasim in Pakistan, Puerto Cortes in Honduras, and Southampton in the United Kingdom.
Four other ports located in Hong Kong, Singapore, the Republic of Korea, and Oman will receive more limited deployment of these technologies as part of the pilot program. According to CBP, containers from these ports will be scanned for radiation and other risk factors before they are allowed to depart for the United States. If the scanning systems indicate that there is a concern, both CSI personnel and host country officials will simultaneously receive an alert and the specific container will be inspected before that container continues to the United States. CBP officials will determine which containers are inspected, either on the scene locally or at CBP’s National Targeting Center. Per the SAFE Port Act, CBP is to report by April 2008 on, among other things, the lessons learned from the SFI pilot ports and the need for and the feasibility of expanding the system to other CSI ports. Every 6 months thereafter, CBP is to report on the status of full-scale deployment of the integrated scanning systems to scan all containers bound for the United States before their arrival. New Requirement for 100 Percent Scanning Introduces New Challenges Recent legislative actions have updated U.S. maritime security requirements and may affect overall international maritime security strategy. In particular, the recently enacted Implementing Recommendations of the 9/11 Commission Act (9/11 Act) requires, by 2012, 100 percent scanning of U.S.-bound cargo containers using nonintrusive imaging equipment and radiation detection equipment at foreign seaports. The act also specifies conditions for potential extensions beyond 2012 if a seaport cannot meet that deadline. Additionally, it requires the Secretary of DHS to develop technological and operational standards for scanning systems used to conduct 100 percent scanning at foreign seaports. 
The Secretary also is required to ensure that actions taken under the act do not violate international trade obligations and are consistent with the WCO SAFE Framework. The 9/11 Act provision replaces the requirement of the SAFE Port Act that called for 100 percent scanning of cargo containers before their arrival in the United States but required implementation as soon as possible rather than specifying a deadline. While we have not yet reviewed the implementation of the 100 percent scanning requirement, we have a number of preliminary observations, based on field visits to foreign ports, regarding potential challenges CBP may face in implementing this requirement: CBP may face challenges balancing new requirement with current international risk management approach. CBP may have difficulty requiring 100 percent scanning while also maintaining a risk-based security approach that has been developed with many of its international partners. Currently, under the CSI program, CBP uses automated targeting tools to identify containers that pose a risk for terrorism for further inspection before they are placed on vessels bound for the United States. As we have previously reported, using risk management allows agencies to reduce the nation’s risk of terrorist attack within the resources allocated, and this approach has been accepted governmentwide. Furthermore, many U.S. and international customs officials we have spoken to, including officials from the World Customs Organization, have stated that the 100 percent scanning requirement is contrary to the SAFE Framework developed and implemented by the international customs community, including CBP. The SAFE Framework, based on CSI and C-TPAT, calls for a risk management approach, whereas the 9/11 Act calls for the scanning of all containers regardless of risk. United States may not be able to reciprocate if other countries request it.
The CSI program, whereby CBP officers are placed at foreign seaports to target cargo bound for the United States, is based on a series of bilateral, reciprocal agreements with foreign governments. These reciprocal agreements also allow foreign governments the opportunity to place customs officials at U.S. seaports and request inspection of cargo containers departing from the United States and bound for their home country. Currently, customs officials from certain countries are stationed at domestic seaports, and agency officials have told us that CBP has inspected 100 percent of the containers that these officials have requested for inspection. According to CBP officials, the SFI pilot, as an extension of the CSI program, allows foreign officials to ask the United States to reciprocate and scan 100 percent of cargo containers bound for those countries. Although the act establishing the 100 percent scanning requirement does not mention reciprocity, CBP officials have told us that the agency does not have the capacity to reciprocate should it be requested to do so, as other government officials have indicated they might when this provision of the 9/11 Act is in place. Logistical feasibility is unknown and may vary by port. Many ports may lack the space necessary to install additional equipment needed to comply with the requirement to scan 100 percent of U.S.-bound containers. Additionally, we observed that scanning equipment at some seaports is located several miles away from where cargo containers are stored, which may make it time consuming and costly to transport these containers for scanning. Similarly, some seaports are configured in such a way that there are no natural bottlenecks where equipment could be placed to scan all outgoing containers, creating the potential for containers to slip by without being scanned.
Transshipment cargo containers—containers moved from one vessel to another—are only available for scanning for a short period of time and may be difficult to access. Similarly, it may be difficult to scan cargo containers that remain on board a vessel as it passes through a foreign seaport. CBP officials told us that currently containers such as these that are designated as high risk at CSI ports are not scanned unless specific threat information is available regarding the cargo in that particular container. Technological maturity is unknown. Integrated scanning technologies to test the feasibility of scanning 100 percent of U.S.-bound cargo containers are not yet operational at all seaports participating in the pilot program, known as SFI. The SAFE Port Act requires CBP to produce a report regarding the program, which will include an evaluation of the effectiveness of scanning equipment at the SFI ports. However, this report is not due until April 2008. Moreover, agency officials have stated that the amount of bandwidth necessary to transmit scanning equipment outputs to CBP officers for review exceeds what is currently feasible and that the electronic infrastructure necessary to transmit these outputs may be limited at some foreign seaports. Additionally, there are currently no international standards for the technical capabilities of inspection equipment. Agency officials have stated that CBP is not a standard-setting organization and has limited authority to implement standards for sovereign foreign governments. Resource responsibilities have not been determined. The 9/11 Act does not specify who would pay for the additional scanning equipment, personnel, computer systems, or infrastructure necessary to establish 100 percent scanning of U.S.-bound cargo containers at foreign ports. According to the Congressional Budget Office (CBO) in its analysis of estimates for implementing this requirement, this provision would neither require nor prohibit the U.S.
federal government from bearing the cost of conducting scans. For the purposes of its analysis, CBO assumed that the cost of acquiring, installing, and maintaining systems necessary to comply with the 100 percent scanning requirement would be borne by foreign ports to maintain trade with the United States. However, foreign government officials we have spoken to expressed concerns regarding the cost of equipment. They also stated that the process for procuring scanning equipment may take years and can be difficult when trying to comply with changing U.S. requirements. These officials also expressed concern regarding the cost of the additional personnel necessary to (1) operate new scanning equipment, (2) view scanned images and transmit them to the United States, and (3) resolve false alarms. An official from one country with whom we met told us that, while his country does not scan 100 percent of exports, modernizing its customs service to focus more on exports required a 50 percent increase in personnel, and other countries trying to implement the 100 percent scanning requirement would likely have to increase the size of their customs administrations by at least as much. Use and ownership of data have not been determined. The 9/11 Act does not specify who will be responsible for managing the data collected through 100 percent scanning of U.S.-bound containers at foreign seaports. However, the SAFE Port Act specifies that scanning equipment outputs from SFI will be available for review by U.S. government officials either at the foreign seaport or in the United States. It is not clear who would be responsible for collecting, maintaining, disseminating, viewing, or analyzing scanning equipment outputs under the new requirement. Other questions to be resolved include ownership of data, how proprietary information would be treated, and how privacy concerns would be addressed. CBP officials have indicated they are aware that challenges exist.
They also stated that the SFI will allow the agency to determine whether these challenges can be overcome. According to senior officials from CBP and international organizations we contacted, 100 percent scanning of containers may divert resources, causing containers that are truly high risk to not receive adequate scrutiny due to the sheer volume of scanning outputs that must be analyzed. These officials also expressed concerns that 100 percent scanning of U.S.-bound containers could hinder trade, leading to long lines and burdens on staff responsible for viewing images. However, given that the SFI pilot program has only recently begun, it is too soon to determine how the 100 percent scanning requirement will be implemented and its overall impact on security. Agency Comments We provided a draft of the information in this testimony to DHS. DHS provided technical comments, which we incorporated as appropriate. Mr. Chairman and members of the Committee, this completes my prepared statement. I will be happy to respond to any questions that you or other members of the committee have at this time. GAO Contact and Staff Acknowledgments For information about this testimony, please contact Stephen L. Caldwell, Director, Homeland Security and Justice Issues, at (202) 512-9610, or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Richard Ascarate, Jonathan Bachman, Jason Bair, Fredrick Berry, Christine Broderick, Stockton Butler, Steven Calvo, Frances Cook, Christopher Currie, Anthony DeFrank, Wayne Ekblad, Christine Fossett, Nkenge Gibson, Geoffrey Hamilton, Christopher Hatscher, Valerie Kasindi, Monica Kelly, Ryan Lambert, Nicholas Larson, Daniel Klabunde, Matthew Lee, Gary Malavenda, Robert Rivas, Leslie Sarapu, James Shafer, Kate Siggerud, and April Thompson. 
GAO Related Products Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation of Radiation Detection Equipment. GAO-07-1247T. Washington, D.C.: September 18, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1240T. Washington, D.C.: September 18, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1081T. Washington, D.C.: September 6, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007. Information on Port Security in the Caribbean Basin. GAO-07-804R. Washington, D.C.: June 29, 2007. Department of Homeland Security: Science and Technology Directorate’s Expenditure Plan. GAO-07-868. Washington, D.C.: June 22, 2007. Homeland Security: Guidance from Operations Directorate Will Enhance Collaboration among Departmental Operations Centers. GAO-07-683T. Washington, D.C.: June 20, 2007. Department of Homeland Security: Progress and Challenges in Implementing the Department’s Acquisition Oversight Plan. GAO-07-900. Washington, D.C.: June 13, 2007. Department of Homeland Security: Ongoing Challenges in Creating an Effective Acquisition Organization. GAO-07-948T. Washington, D.C.: June 7, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-835T. Washington, D.C.: May 15, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-833T. Washington, D.C.: May 10, 2007. 
Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007. Transportation Security: DHS Efforts to Eliminate Redundant Background Check Investigations. GAO-07-756. Washington, D.C.: April 26, 2007. International Trade: Persistent Weaknesses in the In-Bond Cargo System Impede Customs and Border Protection’s Ability to Address Revenue, Trade, and Security Concerns. GAO-07-561. Washington, D.C.: April 17, 2007. Transportation Security: TSA Has Made Progress in Implementing the Transportation Worker Identification Credential Program, but Challenges Remain. GAO-07-681T. Washington, D.C.: April 12, 2007. Customs Revenue: Customs and Border Protection Needs to Improve Workforce Planning and Accountability. GAO-07-529. Washington, D.C.: April 12, 2007. Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: March 28, 2007. Combating Nuclear Smuggling: DNDO Has Not Yet Collected Most of the National Laboratories’ Test Results on Radiation Portal Monitors in Support of DNDO’s Testing and Development Programs. GAO-07-347R. Washington, D.C.: March 9, 2007. Combating Nuclear Smuggling: DHS’s Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors’ Costs and Benefits. GAO-07-133R. Washington, D.C.: October 17, 2006. Transportation Security: DHS Should Address Key Challenges before Implementing the Transportation Worker Identification Credential Program. GAO-06-982. Washington, D.C.: September 29, 2006. Maritime Security: Information-Sharing Efforts Are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006. Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: March 30, 2006. 
Combating Nuclear Smuggling: DHS Made Progress Deploying Radiation Detection Equipment at U.S. Ports of Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005. Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005. Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. Washington, D.C.: April 26, 2005. Homeland Security: Key Cargo Security Programs Can Be Improved. GAO-05-466T. Washington, D.C.: May 26, 2005. Maritime Security: Enhancements Made, But Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005. Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404. Washington, D.C.: March 11, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 30, 2005. Protection of Chemical and Water Infrastructure: Federal Requirements, Actions of Selected Facilities, and Remaining Challenges. GAO-05-327. Washington, D.C.: March 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005. Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 2004.
Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Because the safety and economic security of the United States depend in substantial part on the security of its 361 seaports, the United States has a vital national interest in maritime security. The Security and Accountability for Every Port Act (SAFE Port Act) modified existing legislation and created and codified new programs related to maritime security. The Department of Homeland Security (DHS) and its U.S. Coast Guard, Transportation Security Administration, and U.S. Customs and Border Protection have key maritime security responsibilities. This testimony synthesizes the results of GAO's completed work and preliminary observations from GAO's ongoing work related to the SAFE Port Act pertaining to (1) overall port security, (2) security at individual facilities, and (3) cargo container security. To perform this work, GAO visited domestic and overseas ports; reviewed agency program documents, port security plans, and post-exercise reports; and interviewed officials from the federal, state, local, private, and international sectors. Federal agencies have improved overall port security efforts by establishing committees to share information with local port stakeholders, taking steps to establish interagency operations centers to monitor port activities, conducting operations such as harbor patrols and vessel escorts, writing port-level plans to prevent and respond to terrorist attacks, testing such plans through exercises, and assessing the security at foreign ports. However, these agencies face resource constraints and other challenges trying to meet the SAFE Port Act's requirements to expand these activities. For example, the Coast Guard faces budget constraints in trying to expand its current command centers and include other agencies at the centers.
Similarly, private facilities and federal agencies have taken action to improve security at about 3,000 individual facilities by writing facility-specific security plans, inspecting facilities to ensure they are complying with their plans, and developing special identification cards for workers to prevent terrorists from getting access to secure areas. Federal agencies face challenges trying to meet the act's requirements to expand the scope or speed the implementation of such activities. For example, the Transportation Security Administration missed the act's July 2007 deadline to implement the identification card program at 10 selected ports because of delays in testing equipment and procedures. Federal programs related to the security of cargo containers have also improved as agencies are enhancing systems to identify high-risk cargo, expanding partnerships with other countries to screen containers before they depart for the United States, and working with international organizations to develop a global framework for container security. Federal agencies face challenges implementing container security aspects of the SAFE Port Act and other legislation. For example, Customs and Border Protection must test and implement a new program to screen 100 percent of all incoming containers overseas—a departure from its existing risk-based programs.
Background For over 70 years, the public accounting profession, through its independent audit function, has played a critical role in financial reporting and disclosure, which supports the effective functioning of U.S. capital markets. Over this period, the accounting profession and the accounting firms have undergone significant changes, including changes in the scope of services provided in response to the changing needs of their clients. Following significant mergers among the Big 8 in the 1980s and 1990s and the dissolution of Arthur Andersen in 2002, market share among the accounting firms became more concentrated and dominated by the Big 4. Full Disclosure Critical for Market Confidence The Securities Act of 1933 and the Securities Exchange Act of 1934 established the principle of full disclosure, which requires that public companies provide full and accurate information to the investing public. Moreover, these federal securities laws require that public companies have their financial statements audited by an independent public accountant. While officers and directors of a public company are responsible for the preparation and content of financial statements that fully and accurately reflect the company’s financial condition and the results of its operations, public accounting firms, which function as independent external auditors, provide an additional safeguard. The external auditor is responsible for auditing the financial statements in accordance with generally accepted auditing standards to provide reasonable assurance that a company’s financial statements are fairly presented in all material respects in accordance with generally accepted accounting principles. Public and investor confidence in the fairness of financial reporting is critical to the effective functioning of U.S. capital markets. Auditors attest to the reliability of financial statements of public companies. 
Moreover, investors and other users of financial statements expect auditors to bring integrity, independence, objectivity, and professional competence to the financial reporting process and to prevent the issuance of misleading financial statements. The resulting sense of confidence in companies’ financial statements, which is key to the efficient functioning of the markets for public companies’ securities, can only exist if reasonable investors perceive auditors as independent and expert professionals who will conduct thorough audits.

Repeal of Ban on Advertising and Solicitation Created More Competitive Environment

For many decades, public accountants, like members of other professions, could not advertise, solicit clients, or participate in a competitive bidding process for clients. These restrictions were set by AICPA, which established the professional code of conduct for its members, and the state accountancy boards for the 50 states, the District of Columbia, Guam, Puerto Rico, and U.S. Virgin Islands. Beginning in the 1970s, FTC, DOJ, and individual professionals began to challenge the legality of these restrictions through various court actions. As a result of these challenges, AICPA and state boards adopted new rules that targeted only false, misleading, or deceptive advertising; liberalized restrictions on solicitation; and lifted bans on competitive bidding. While large public companies generally did not switch auditors based on price competition, increased competition and solicitations served as incentives for incumbent firms to continually offer competitive fees to retain their clients.

Expansion and Contraction of Management Consulting Services Raised Concerns about Auditor Independence

Historically, accounting firms offered a broad range of services to their clients. In addition to traditional services such as audit and attest services and tax services, firms also offered consulting services in areas such as information technology.
As figure 1 illustrates, over the past several decades, the provision of management consulting services increased substantially. For example, in 1975, on average, management consulting services comprised 11 percent of the Big 8’s total revenues, ranging from 5 percent to 16 percent by firm. By 1998, revenues from management consulting services increased to an average of 45 percent, ranging from 34 to 70 percent of the Big 5’s revenues for that year. However, by 2000, firms had begun to sell or divest portions of their consulting business and average revenue from management consulting services had decreased to about 30 percent of the Big 5’s total revenues. Although all of the Big 4 firms continue to offer certain consulting services, three of the Big 4 have sold or divested portions of their consulting businesses. PricewaterhouseCoopers’ consulting practice was sold to International Business Machines Corp.; KPMG’s consulting practice became BearingPoint; and Ernst & Young sold its practice to Cap Gemini Group S.A. While it has contemplated doing so, Deloitte & Touche has not divested its management consulting practice. The increase in the provision of management consulting and other nonaudit services contributed to growing regulatory and public concern about auditor independence. Although auditor independence standards have always required that the accounting firm be independent both in fact and in appearance, concern over auditor independence is a long-standing and continuing issue for accounting firms. During the late 1970s, when consulting services represented only a small portion of the Big 8’s revenue, a congressional study noted that an auditor’s ability to remain independent was diminished when the firm provided both consulting and audit services to the same client. A number of subsequent studies resulted in various actions taken by both the accounting firms and SEC to enhance the real and perceived independence of auditors. 
In 2000, SEC proposed to amend its rules on auditor independence because of the growing concern that the increase in nonaudit services had impaired auditor independence. The rules that were promulgated in 2001 amended SEC’s existing rules regarding auditor independence and identified certain nonaudit services that in some instances may impair the auditor’s independence, among other things. The amendments also required most public companies to disclose in their annual financial statements certain information about nonaudit services provided by their auditor. Following the enactment of the Sarbanes-Oxley Act in 2002, SEC issued new independence rules in March 2003. The new rules placed additional limitations on management consulting and other nonaudit services that firms could provide to their audit clients.

Big 8 Mergers and Andersen Dissolution Brought about the Big 4

Although U.S. accounting firms have used mergers and acquisitions to help build their businesses and expand nationally and internationally since the early part of the twentieth century, in the late 1980s Big 8 firms began to merge with one another. As shown in figure 2, the first such merger in 1987 between Peat Marwick Mitchell, one of the Big 8, and KMG Main Hurdman, a non-Big 8 U.S. affiliate of the European firm, Klynveld Main Goerdeler, resulted in the creation of KPMG Peat Marwick. Because of the extensive network Klynveld Main Goerdeler had in Europe, which none of the other Big 8 had, the merged firm became the largest accounting firm worldwide and the second largest U.S. firm until 1989. In 1989, six of the Big 8 firms explored merging. In June 1989, the first merger among the Big 8 joined fourth-ranked Ernst & Whinney and sixth-ranked Arthur Young to form Ernst & Young. The resulting firm became the largest firm nationally (and internationally). In August 1989, seventh-ranked Deloitte Haskins & Sells and eighth-ranked Touche Ross merged to form Deloitte & Touche.
The resulting firm became the third largest firm nationally (and internationally). A proposed merger between Andersen and Price Waterhouse was called off in September 1989. In 1997, four firms proposed additional mergers. The first two were Price Waterhouse and Coopers & Lybrand. Soon thereafter, the leaders of Ernst & Young and KPMG Peat Marwick announced a proposal to merge their two firms. DOJ and the European Commission of the European Union initiated studies of both merger requests. However, Ernst & Young and KPMG Peat Marwick subsequently withdrew their proposal. In 1998, sixth-ranked Price Waterhouse merged with fifth-ranked Coopers & Lybrand to become the second-ranked firm, PricewaterhouseCoopers. To evaluate these mergers, DOJ, as indicated in its Merger Guidelines, used various measures to determine whether the mergers were likely to create or enhance market power and should, therefore, be challenged. DOJ assessed whether the merger would result in a concentrated market and increase the likelihood of adverse competitive effects, and whether entry of other competitors into the market would be timely, likely, and sufficient “to deter or counteract the competitive effects of concern.” DOJ then evaluated whether the mergers would result in efficiency gains that could not be achieved by other means and whether one of the parties to the merger would be likely to fail and exit the market if the transaction was not approved. Finally, the market consolidated to the Big 4 in 2002. The criminal indictment of fourth-ranked Andersen for obstruction of justice stemming from its role as auditor of Enron Corporation led to a mass exodus of Andersen partners and staff as well as clients. Andersen was dissolved in 2002.

Several Key Factors Spurred Consolidation in the 1980s and 1990s

The Big 4 and others cited one or a combination of several key factors as spurring the mergers of the Big 8 in the 1980s and 1990s—notably the immense growth of U.S.
businesses internationally, desire for greater economies of scale, and need and desire to build or expand industry-specific and technical expertise, among others. First, the trend toward corporate globalization led to an increased demand for accounting firms with greater global reach. Second, some firms wanted to achieve greater economies of scale as they modernized their operations and built staff capacity and to spread risk over a broader capital base. Third, some firms wanted to build industry-specific or technical expertise as the operations of their clients became increasingly complex and diversified. Finally, some firms merged to increase or maintain their market share and maintain their market position among the top tier.

Globalization of Clients Prompted Need for Greater Global Reach

According to representatives of the Big 4 firms, globalization was a driving force behind the mergers of the 1980s and 1990s. As their clients expanded their operations around the world, the top-tier firms felt pressure to expand as well in order to continue serving those clients. The trend toward corporate globalization, which continues today, was spurred in part by the lowering of trade barriers. Moreover, by the mid-1990s, the overall economic environment was changing dramatically as technological and telecommunications advances changed the way businesses operated. As a result, large U.S. companies operated worldwide and more foreign-based companies entered U.S. markets. Although all of the Big 8 had offices in certain countries, they did not have extensive networks that enabled them to provide comprehensive services to large multinational clients. Some of the smaller Big 8 firms had difficulty attracting and retaining strong foreign affiliates. Mergers with compatible firms were the quickest way to fill gaps in geographic coverage. For instance, in the 1980s, Ernst & Whinney had an established network in the Pacific Rim countries while Arthur Young did not.
Likewise, Price Waterhouse had a network in South America while Coopers & Lybrand’s network was in Europe. In addition to expanding their reach and staff capacity, firms believed that they needed to establish global networks to stay abreast of country-specific generally accepted accounting principles and regulations. Globalization also raised a number of tax issues that required firms to have networks able to accommodate clients with operations in a growing number of countries. To have successful global networks, the Big 8 needed affiliations with prominent foreign firms.

Growing Complexity of Client Operations Prompted Need for Greater Industry-Specific and Technical Expertise

In addition to responding to globalization, representatives of the firms told us that some of the mergers served to increase their industry-specific and technical expertise and expand and build management-consulting operations to better serve the complex needs of their rapidly evolving clients. Each of the Big 8 firms had different strengths and industry specializations. Through mergers, firms were able to build expertise across more industries and diversify their operations. For example, the Ernst & Whinney and Arthur Young merger brought together two firms that specialized in healthcare and technology, respectively. Similarly, the Price Waterhouse and Coopers & Lybrand merger brought together two firms that dominated the market for audit services in the energy and gas and telecommunications industries, respectively. In addition, firm officials said that some of the mergers of the 1980s and 1990s were spurred by the need and desire to build or expand management consulting services, which, as discussed previously, were becoming a larger percentage of revenue. Officials also said that the mergers allowed them to achieve economies of scope by offering a broader range of services to clients.
As firms merged, they were able to create synergies and offer their clients extensive services beyond traditional audit and attest services such as tax consulting, internal audit, and information systems support. In order to remain competitive, some firms merged to build upon different operating strengths such as consulting services versus auditing. For example, the Deloitte Haskins & Sells and Touche Ross merger brought together a firm with substantial audit and tax consulting operations and a firm with a strong management consulting business. In the same era, some firm officials said that they had to build their technical expertise in areas such as derivatives and other complex financial arrangements used by their clients. Firms also needed to build their expertise to address a series of changes to the U.S. tax code and the regulatory requirements faced by their clients in other countries. Strengthening a firm’s technical expertise was critical, because some firms believed that clients were increasingly selecting their auditors based on specialized expertise and geographic coverage. Firms began to provide technological support and services to clients that were modernizing their operations.

Mergers Enabled Firms to Achieve Greater Economies of Scale

Like public companies, the accounting firms were undergoing dramatic technological change and innovation in the 1980s and 1990s. According to firm officials, firms were beginning to transition to computer-based accounting systems and develop new auditing approaches that required a considerable capital commitment. By expanding their capital base through mergers, firms planned to create economies of scale by spreading the infrastructure costs of modernizing across a broader capital base. Some firm officials said that mergers were critical to the firms’ modernization because, unlike their clients, accounting firms could not raise new capital by issuing securities.
Because of their prevailing partnership structures, the firms’ capital bases were largely dependent upon partner-generated capital. In addition to economies of scale, firm officials said that they also expected that mergers would increase overall staff capacity and result in more efficient delivery of services and more effective allocation of resources in order to better respond to market demands. The broader capital bases also allowed firms to invest substantial resources in staff training and development. Big 4 representatives said that staff training and development were critical in attracting and retaining quality staff necessary to offer services demanded by clients. Firm officials said that they also expected that economies of scale would improve operational efficiencies and offset declining profit margins as competition increased.

Mergers Helped Firms Increase Market Share and Maintain Market Position

Many accounting firms also merged to maintain or increase their market share in order to hold their market position among top-tier firms. Furthermore, some firms believed that some of their foreign affiliates would change affiliations if they perceived that greater advantages in seeking and retaining client business could be obtained through affiliation with a larger firm. The mergers of the 1980s resulted in a growing disparity in size between the largest and smallest of the Big 8. Big 4 representatives told us that merging was a practical alternative to trying to build the business through internal growth. For example, when seventh-ranked Deloitte Haskins & Sells and eighth-ranked Touche Ross merged, they became the third-ranked firm. The creation of Deloitte & Touche resulted in Coopers & Lybrand being the second smallest of the top tier until it merged with the smallest top-tier firm, Price Waterhouse, in 1998 to become PricewaterhouseCoopers, the second-largest firm.
Audit Market Has Become More Highly Concentrated, Leaving Large Public Companies with Few Choices

Since 1988, the audit market has become increasingly concentrated, especially in the market for large national and multinational company audits, leaving these companies with fewer choices. The 1989 and 1998 mergers led to significant increases in certain key concentration measures typically used by DOJ and FTC to evaluate potential mergers for antitrust concerns. These measures indicate highly concentrated markets in which the Big 4 have the potential to exercise significant market power. In addition to using concentration measures, we employed a simple model of pure price competition to assess whether the current high degree of concentration in the market for audit services was necessarily inconsistent with a purely price-competitive setting. Whether or not the firms can exercise market power, consolidation has limited the number of choices of accounting firms for large national and multinational companies that require firms with requisite staff resources, industry-specific and technical expertise, extensive geographic coverage, and international reputation. In some cases, the choices would be further limited due to conflicts of interest, independence rules, and industry specialization.

Large Public Company Audit Market Is a Tight Oligopoly

By any measure, the large public company audit market is a tight oligopoly, which is defined as the top four firms accounting for more than 60 percent of the market and other firms facing significant barriers to entry into the market. In the large public company audit market, the Big 4 now audit over 97 percent of all public companies with sales over $250 million, and other firms face significant barriers to entry into the market.
As table 1 illustrates, when comparing the top 25 firms on the basis of total revenues, partners, and staff resources, the Big 4 do not have any smaller-firm competitors, a situation that has given rise to renewed concerns about a possible lack of effective competition in the market for large company audit services. The Big 4 accounting firms dominate internationally as well, with over $47 billion in total global net revenues for 2002, according to a February 2003 edition of Public Accounting Report. Moreover, information provided by officials from foreign regulators suggests that the national markets for audit services to large public companies in the other countries tend to be as highly concentrated as they are in the United States, with the Big 4 accounting firms auditing a vast majority of these large public company clients. For example, according to regulatory officials, the Big 4 audited over 80 percent of all public companies in Japan and at least 90 percent of all listed companies in the Netherlands in 2002, while the Big 4 firms were the auditors for virtually all major listed companies in the United Kingdom. According to Italian regulators, in 2001 the Big 5 audited over 80 percent of listed companies in Italy. Andersen’s indictment and subsequent dissolution resulted in the HHI increasing to 2,566, well above the threshold for significant market power. It is unclear whether and to what extent DOJ’s Antitrust Division was consulted or had input into the decision to criminally indict Andersen. In 2002, we found that the most significant concentration among accounting firms was in the large public company market segment. As figure 4 shows, although consistently above 1,000, HHIs (based on number of clients) for firms auditing public companies with total sales between $1 million and $100 million are all below the 1,800 threshold.
However, HHIs for companies with sales over $100 million are consistently above the 1,800 threshold, indicating the potential for significant market power in the market for larger company audits. Analysis of the four-firm concentration ratio also indicates that concentration among the top four accounting firms has increased significantly since 1988. As shown in figure 5, in 1988 the top four firms (Price Waterhouse, Andersen, Coopers & Lybrand, and KPMG) audited 63 percent of total public company sales. The next four firms (Ernst & Whinney, Arthur Young, Deloitte Haskins & Sells, and Touche Ross) were significant competitors, auditing 35 percent of total public company sales. Also shown in figure 5, by 1997 the top four firms audited 71 percent of public company total sales, with two major competitors (Coopers & Lybrand and KPMG) auditing an additional 28 percent. Finally, by 2002, the top four firms audited 99 percent of public company total sales with no significant competitors (see fig. 5). Likewise, the four-firm concentration ratio based on the total number of public company clients increased from 51 percent in 1988 to 65 percent in 1997 and to 78 percent in 2002 (see fig. 6). Not surprisingly, the larger public company segment of the market is even more concentrated than the overall market. For example, the Big 4 audit roughly 97 percent of all public companies with sales between $250 million and $5 billion and almost all public companies with sales greater than $5 billion. Effective competition does not require pure competitive conditions; however, a tight oligopoly raises concerns because the firms may exercise market power, and the concentrated structure of the market makes successful collusion, overt or tacit, easier. In terms of market concentration, the audit market does not differ from numerous other markets in the United States that are also characterized by high degrees of concentration (see table 2). 
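The two concentration measures discussed above, the HHI and the four-firm concentration ratio, reduce to simple arithmetic. The sketch below uses hypothetical market shares, not the data underlying this report, to illustrate the calculations and the thresholds cited in this section (an HHI above 1,800 indicating a highly concentrated market, and a four-firm share above 60 percent indicating a tight oligopoly).

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

def cr4(shares_pct):
    """Four-firm concentration ratio: combined share of the four largest firms."""
    return sum(sorted(shares_pct, reverse=True)[:4])

# Illustrative (hypothetical) shares for a market of six firms, in percent.
shares = [30, 25, 24, 20, 0.6, 0.4]

index = hhi(shares)
ratio = cr4(shares)
print(f"HHI = {index:.0f}")   # above 1,800 indicates a highly concentrated market
print(f"CR4 = {ratio:.0f}%")  # above 60% indicates a tight oligopoly
```

Note that the HHI weights large shares more heavily than the four-firm ratio does, which is why the two measures can rank markets differently.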
Although the resulting structures are similar, the factors contributing to the market structures and the competitive environments may be fundamentally different.

Consolidation Does Not Appear to Have Impaired Price Competition to Date

Despite the high degree of concentration among accounting firms, with four firms auditing more than 78 percent of all public companies and 99 percent of all public company sales, we found no evidence that price competition to date has been impaired. As indicated in table 2, much of the economy is concentrated, but U.S. markets are generally considered quite competitive. Thus, market concentration data can overstate the impact of a tight oligopoly on competition. While concentration ratios and HHI are good indicators of market structure, these measures only indicate the potential for oligopolistic collusion or the exercise of market power. As market structure has historically been thought to influence market conduct and economic performance, there is concern that a tight oligopoly in the audit market might have resulted in detrimental effects on both purchasers of audit services and users of audited financial statements. We employed a simple model of pure price competition to assess whether the high degree of concentration in the market for audit services was necessarily inconsistent with a price-competitive setting. The model is designed to simulate a market driven by pure price competition, in which clients choose auditors on price alone; factors such as quality and reputation play no role. The model’s simulation results suggest that a market driven solely by price competition could also result in a high degree of market concentration. We found that the model simulated market shares that were close to the actual market shares of the Big 4, which are thought to be driven by a number of other factors including quality, reputation, and global reach. (See app. I for a detailed discussion of the model, results, and limitations.)
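The general intuition behind such a result can be sketched in a few lines. The toy simulation below is our own illustration, not a reconstruction of the model described in appendix I; the firm names, capacities, client-size distribution, and bid ranges are all invented assumptions. It shows one mechanism by which pure price competition can still yield a concentrated market: if only large firms have the capacity to serve large clients, those clients end up with the top tier even when every client simply hires the lowest bidder.

```python
import random

random.seed(1)

# Hypothetical capacities: four large "Big 4"-like firms and four smaller firms.
firms = {"A": 100, "B": 100, "C": 100, "D": 100,
         "E": 10, "F": 10, "G": 10, "H": 10}

# Skewed client sizes (lognormal), capped so every client has an eligible firm.
clients = [min(random.lognormvariate(0, 2), 100) for _ in range(5000)]

wins = {name: 0 for name in firms}
for size in clients:
    # Only firms with enough capacity may bid; the client hires the lowest bid.
    eligible = [name for name, cap in firms.items() if cap >= size]
    bids = {name: random.uniform(0.9, 1.1) for name in eligible}
    wins[min(bids, key=bids.get)] += 1

big4_share = sum(wins[n] for n in "ABCD") / len(clients)
print(f"Large firms' share of clients: {big4_share:.0%}")
```

Even with price as the only selection criterion, the large firms capture every client too big for the smaller firms plus a proportional slice of the rest, producing a concentrated overall market share.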
Specifically, the model predicted that the Big 4 would audit 64 percent of companies in the sampled market, compared with the Big 4 actual market share of 62.2 percent in 2002 for the companies included in the simulation. Moreover, the model predicted that the Big 4 would audit 96.3 percent of companies in the sample with assets greater than $250 million, compared with the 97 percent of these companies actually audited by the Big 4 in 2002. While evidence to date does not appear to indicate that competition in the market for audit services has been impaired, the increased degree of concentration coupled with the recently imposed restrictions on the provision of nonaudit services by incumbent auditors to their audit clients could increase the potential for collusive behavior or the exercise of market power.

Large Public Companies Have Limited Number of Accounting Firm Choices

The most observable impact of consolidation among accounting firms appeared to be the limited number of auditor choices for most large national and multinational public companies if they voluntarily switched auditors or were required to do so, such as through mandatory firm rotation. Of the public companies responding to our survey to date, 88 percent (130 of 147) said that they would not consider using a smaller (non-Big 4) firm for audit and attest services. See appendix II for survey questionnaires and responses. In addition, our analysis of 1,085 former Andersen clients that changed auditors between October 2001 and December 2002 suggested that public companies (especially large companies) overwhelmingly preferred the Big 4. Only one large public company with assets over $5 billion that was audited by Andersen switched to a smaller firm. See appendix III for a detailed analysis. For most large public companies, the maximum number of choices has gone from eight in 1988 to four in 2003.
According to our preliminary survey results, a large majority (94 percent or 137 of 145) of public companies that responded to our survey to date said that they had three or fewer alternatives were they to switch accounting firms. All 20 of the audit chairpersons with whom we spoke believed that they had three or fewer alternatives. Of the companies responding to our survey, 42 percent (61 of 147) said that they did not have enough options for audit and attest services. However, when asked whether steps should be taken to increase the number of available choices, results revealed that 76 percent (54 of 71) of public companies responding to our survey to date said they would strongly favor or somewhat favor letting market forces operate without government intervention. We also found that client choices could be even further limited due to potential conflicts of interest, the new independence rules, and industry specialization by the firms—all of which may further reduce the number of available alternatives to fewer than three. First, the Big 4 tend to specialize in particular industries and, as our preliminary survey results indicated, public companies that responded often preferred firms with established records of industry-specific expertise, which could further reduce a company’s number of viable choices. For example, 80 percent (118 of 148) of the public companies responding to our survey to date said industry specialization or expertise would be of great or very great importance to them if they had to choose a new auditor. When asked why they would not consider an alternative to the Big 4, 91 percent (117 of 129) of public companies responding to date cited technical skills or knowledge of their industry as a reason of great or very great importance. 
As figure 7 shows, in selected industries, specialization can often limit the number of firm choices to two—in each case, two firms accounted for well over 70 percent of the total assets audited in each industry in 2002. As a result, it might be difficult for a large company to find a firm with the requisite industry-specific expertise and staff capacity. Figure 7 also shows the impact of the Price Waterhouse and Coopers & Lybrand merger and dissolution of Andersen on industry specialization and associated client choice. While two firms also dominated the four selected industries in 1997, this concentration became much more pronounced by 2002, as illustrated in figure 7. See appendix IV for a detailed discussion of industry specialization and further industry-specific examples and limitations of this type of analysis. Industry specialization, as captured by a relatively high market share of client assets or client sales in a given industry, may also be indicative of a firm’s dominance in that industry on a different level. As a hypothetical example, consider a highly concentrated industry, with several very large companies and numerous smaller companies, in which a single accounting firm audits a significant portion of the industry assets. This firm’s interpretation of accounting standards specific to the industry could become the prevailing standard practice in that industry due to the firm’s dominant role. If, subsequently, these interpretations were found to be inappropriate (by some influential external third party, for example), the firm as well as the companies audited by that firm could be exposed to heightened liability risk, which could potentially have a severe negative impact on that industry as a whole as well as the firm. 
Finally, the new independence rules established under the Sarbanes-Oxley Act of 2002, which limit the nonaudit services firms can provide to their audit clients, may also serve to reduce the number of auditor choices for some large public companies. As a hypothetical example, suppose that a large multinational petroleum company that used one Big 4 firm for its audit and attest services and another Big 4 firm for its outsourced internal audit function wanted to hire a new accounting firm because its board of directors decided that the company should change auditors every 7 years. In this case, this company would appear to have two remaining alternatives if it believed that only the Big 4 had the global reach and staff resources necessary to audit its operations. However, one of the remaining two Big 4 firms might not enter a bid because its market niche in this industry is small companies. Consequently, this company would be left with one realistic alternative. Although hypothetical, this scenario highlights another concern, the potential exercise of market power, as it is highly probable that the remaining firm would be aware of its competitive position. Conceivably, there are other scenarios and circumstances in which such a company would have no viable alternatives for its global audit and attest needs.

Linking Consolidation to Audit Price, Quality, and Auditor Independence Is Difficult

We found little empirical evidence to link past consolidation to changes in audit fees, quality, and auditor independence. Given the significant changes that have occurred in the accounting profession since the mid-1980s, we were also unable to isolate the impact of consolidation from other factors. However, researchers (relying on analyses based on aggregate billings of small samples of companies or proxies for audit fees, such as average audit revenues) generally found that audit fees remained flat or increased slightly since 1989.
Additionally, although not focused on consolidation, a variety of studies have attempted to measure overall changes in audit quality and auditor independence. The results varied, and we spoke with numerous accounting experts who offered varying views about changes in quality and independence. As with audit fees, a variety of factors, such as the increasing importance of management consulting services provided to clients, make it difficult to attribute any changes, real or perceived, to any single factor.

Research on Changes in Audit Fees Used a Variety of Measures but Did Not Conclusively Determine Effects from Consolidation

Existing research indicated that audit fees (measured in different ways) generally remained flat or decreased slightly from the late 1980s through the mid-1990s but have been increasing since the late 1990s (inflation adjusted). However, we were unable to isolate the effects of consolidation and competition from the numerous other changes that have affected accounting firms and how they conduct business. These changes included evolving audit scope, the growth of management consulting services, technological developments, and evolving audit standards and legal reforms that altered audit firms’ litigation exposure. Given potential changes in the scope of the audit, only the public accounting firms themselves can accurately determine whether hourly audit fees have increased or decreased since 1989. In general, the scope of an audit is a function of client complexity and risk. Although very little data exist on changes in audit fees over time and existing studies used a variety of approaches to measure audit fees, two recent academic studies are widely cited. One used a proxy measure for the audit fee (Ivancevich and Zardkoohi) and the other was based on actual fees charged to a small sample of companies (Menon and Williams). For the period following the mergers of the late 1980s, both studies found that audit fees declined through the mid-1990s.
Using audit revenues per accounting firm divided by the dollar value of assets audited as a proxy for the audit fee, Ivancevich and Zardkoohi found that "fees" fell for both the merged firms (Ernst & Young and Deloitte & Touche) and the remaining Big 6 accounting firms from 1989 through 1996. Similarly, Menon and Williams found that the average real audit fee per client declined from $3.4 million in 1989 to $2.8 million in 1997, the year Price Waterhouse and Coopers & Lybrand announced their proposed merger. Moreover, although the results were limited by the small sample size used in the regression analysis, the study did not find any evidence that the Big 6 mergers resulted in a permanent increase in fees. In addition, as figure 8 illustrates, the periodic survey of actual audit fees of about 130 companies conducted by Manufacturers Alliance also found a similar downward trend in audit fees per $100 of public company revenues from 1989 (and earlier) through 1995. In 1995, the Private Securities Litigation Reform Act was enacted, which limited the liability exposure of accounting firms, among others. However, the survey revealed a slight increase from 1995 through 1999 for U.S. and foreign companies. Figure 8 shows that U.S. companies also paid lower fees than their foreign counterparts over the survey period. Separately, using net average audit revenues for the top tier as a percentage of total sales audited as a proxy for audit fees, we found that audit fees declined slightly from 1989 through 1995 and increased from 1995 through 2001 (see fig. 9). However, no determination can be made as to whether consolidation negatively or positively affected audit fees in either case. Although audit fees are generally a relatively small percentage of a public company's revenue, recent evidence suggests that audit fees have increased significantly since 2000, and there are indications they may increase further in the future.
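Both academic proxies above reduce to simple ratios. The short sketch below, which uses hypothetical figures rather than data from either study, illustrates how an Ivancevich and Zardkoohi-style proxy (audit revenues divided by the dollar value of assets audited) would register a declining effective fee even when no per-engagement fee data are available:

```python
# Illustrative sketch with HYPOTHETICAL figures (not data from the
# studies cited). The proxy divides a firm's audit revenues by the
# dollar value of client assets it audits; a falling ratio suggests
# falling effective fees.

def fee_proxy(audit_revenue: float, assets_audited: float) -> float:
    """Audit revenue per dollar of client assets audited."""
    return audit_revenue / assets_audited

# Hypothetical firm-level figures, in $ millions.
years = {
    1989: (600.0, 400_000.0),
    1993: (650.0, 520_000.0),
    1996: (700.0, 640_000.0),
}

trend = {yr: fee_proxy(rev, assets) for yr, (rev, assets) in years.items()}
for yr in sorted(trend):
    # Express the ratio as dollars of audit revenue per $100 of assets.
    print(f"{yr}: ${trend[yr] * 100:.3f} per $100 of assets audited")
```

Note that, as the report cautions, such a proxy can fall because audit scope shrank or client assets grew, not only because prices fell.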
Some experts believe that during the 1980s and 1990s audit services became "loss leaders" that accounting firms used to gain entry into other, more lucrative professional service markets, primarily management consulting services. Therefore, evidence of flat audit fees since 1989 and the relatively small percentage of company revenue in 2000 may reveal little about the possible market power produced by having fewer firms. Likewise, historical fees (especially certain proxy measures of audit fees) reveal little about the potential for noncompetitive pricing in the future given the new independence rules and evolving business model. According to one source, average audit fees for Standard & Poor's 500 companies increased 27 percent in 2002 due primarily to new requirements and changing audit practices in the wake of recent accounting scandals. Moreover, many market participants, experts, and academics with whom we consulted believe prices will increase further due to the implementation of the Sarbanes-Oxley requirements and related changes in the scope of certain audit services and possible changes in auditing standards. Because of these important changes and the potential for market power, it would be difficult to isolate the portion of any price increase resulting from noncompetitive behavior. Likewise, nearly all accounting firms that responded to our survey said that both costs and fees have increased over the past decade, but that costs have increased more: 24 firms (51 percent) said their costs have "greatly" increased, and another 22 firms (47 percent) said that costs have "moderately" increased. However, when asked about the fees they charge, only 12 of the 47 firms (26 percent) responded that their fees had greatly increased, while another 33 firms (70 percent) said that their fees had moderately increased.
When public companies were asked about fees, 93 percent (137 of 147) of the public companies that responded to our survey to date said that audit fees had somewhat or greatly increased over the past decade, and 48 percent (70 of 147) said that consolidation had a great or moderate upward influence on those fees. Some companies indicated that most of this increase has occurred in the last few years.

Linking Consolidation to Audit Quality and Auditor Independence Is Difficult

Although we identified no research directly studying the impact of consolidation among the accounting firms on audit quality or auditor independence, we did find limited research that attempted to measure general changes in audit quality and auditor independence, and we explored these issues with market participants and researchers. We found that theoretical and empirical research on both issues to date presents mixed and inconclusive results because, in general, measurement issues made it difficult to assess changes in audit quality or auditor independence.

Research Offers Competing Theories on Factors Influencing Audit Quality and Auditor Independence

Audit quality and auditor independence are, in general, difficult to observe or measure. Theory suggests that auditor independence and audit quality are inextricably linked, with auditor independence being an integral component of audit quality. One widely cited academic study defined auditor independence as the probability that an auditor would report a discovered problem in a company's financial reports, while another widely cited academic study defined audit quality as the joint probability that an auditor would both discover a problem in a company's financial reports and report it.
Research offers competing theories that address how competition among firms, auditor tenure, and accounting firm size—all factors that could be influenced directly by consolidation—might impact auditor independence and, thus, audit quality. For example, some research hypothesized that increased competition could have a negative effect, as a client's opportunities and incentives to replace an incumbent auditor might increase for reasons ranging from minimizing audit fees to a desire for a more compliant auditor. However, other research hypothesized that increased competition could reduce the probability that some accounting firms could exercise disproportionate influence over the establishment of accounting principles and policies. Likewise, auditor tenure might have a positive or a negative impact. Some research hypothesized that an auditor that served a given client for a longer period may be more valuable to that client due to its greater familiarity with and deeper insight into the client's operations, which would allow the auditor to become less dependent on the client for information about the client's operations. However, other research hypothesized that increased tenure could result in complacency, lack of innovation, less rigorous audit procedures, and a reflexive confidence in the client. Some research hypothesized that an accounting firm's size might also have an impact, as a larger firm might be less dependent on a given client than a smaller firm. Academic research suggests that larger auditors perform higher quality audits, and many studies employing proxies for audit quality report results consistent with this notion. However, given its unobservable nature, there does not appear to be definitive evidence confirming the existence of differential audit quality between the Big 4 accounting firms and other auditors.
Some researchers have dismissed the notion of differential audit quality, while others have questioned the assumption that the larger firms provide higher quality audits. Some experts with whom we consulted asserted that there was a quality differential, while others were not convinced. One academic told us that the question of differential audit quality was difficult to answer, since large accounting firms generally handle most large company audits. This individual also suggested that smaller accounting firms could provide the same audit quality as larger accounting firms, provided that these smaller firms accepted only clients within their expertise and service potential.

Studies Often Use Restatements, Going-Concern Opinions, and Earnings Management to Measure Audit Quality and Auditor Independence

Audit quality generally is not directly measurable, and questions about it tend to surface only when a company experiences financial difficulties that give investors reason to scrutinize the audit. Studies addressing audit quality and auditor independence have typically focused on financial statement restatements, going-concern opinions, and earnings management or manipulation. Financial statement restatements due to accounting improprieties have been used by some as a measure of audit quality. By this measure, there is some evidence suggesting that audit quality may have declined over the 1990s, as several recent studies have found that financial statement restatements due to accounting irregularities have been increasing, and those by larger companies have been increasing as well. As larger companies typically employ larger accounting firms, which some have historically perceived as providing higher quality audits, this trend toward larger company financial statement restatements may heighten concerns about potentially pervasive declining audit quality.
In addition, in some recent high-profile restatement cases it appeared that the auditors identified problems but failed to ensure that management appropriately addressed their concerns, raising questions about auditor independence. Another measure that has been employed by researchers to gauge audit quality is whether an auditor issues a going-concern opinion warning investors prior to a company's bankruptcy filing. One study found that during the 1990s accounting firms issued fewer going-concern audit opinions to financially stressed companies prior to bankruptcy. This study found that auditors were less likely to issue going-concern opinions in 1996-1997 than in 1992-1993, and again less likely to issue such opinions in 1999-2000 than in 1996-1997. Moreover, another study that analyzed going-concern opinions found that accounting firms failed to warn of nearly half of the 228 bankruptcies identified from January 2001 through June 2002, despite the fact that nearly 9 out of 10 of these companies displayed at least two indicators of financial stress. However, numerous prior studies also found that approximately half of all companies filing for bankruptcy in selected periods prior to the 1990s did not have prior going-concern opinions in their immediately preceding financial statements either. Another study focusing on going-concern opinions over a relatively short, recent time period examined whether there was an association between nonaudit fees and auditor independence, but it found no significant association between the two using auditors' propensity to issue going-concern opinions. This study's findings were consistent with market-based institutional incentives dominating expected benefits from auditors compromising their independence.
Corporate earnings reported in companies' annual filings (to which auditors attest fairness) can be an important factor in investors' investment decisions, and can be used by corporate boards and institutional investors in assessing company performance and management quality, and in structuring loans and other contractual arrangements. As such, they can have an impact on securities prices and managers' compensation, among other things. Earnings management or manipulation (captured by, for example, managers' propensity to meet earnings targets) is another measure that has been used by researchers to capture audit quality, although in this case an auditor's influence on its clients' earnings characteristics is likely to be less direct and there can be more significant measurement problems. While there has been growing anecdotal and empirical evidence of earnings management, research using this measure to determine whether audit quality or auditor independence was impaired yielded mixed results. For example, while one recent study suggested that nonaudit fees impair the credibility of financial reports, another cast doubt on its results, and another found evidence consistent with auditors increasing their independence in response to greater financial dependence (that is, for larger clients).

Despite Contrasting Views on Audit Quality, Experts and Professionals Did Not View Consolidation as Cause

Existing research on audit quality and auditor independence presents inconclusive results, suffers from problematic measurement issues, and generally does not consider or compare these factors over extended time periods. Many academics and other accounting experts we contacted indicated that they believed audit quality had declined since 1989. However, others, including small accounting firms and large company clients that responded to our survey to date, believed that audit quality had not decreased.
For example, 43 percent (63 of 147) of public companies that responded believed the overall quality had gotten much or somewhat better over the past decade, while 18 percent (27 of 147) felt it had gotten much or somewhat worse. Of the public companies that responded to our survey to date, 60 percent (88 of 147) indicated that their auditor had become much more or somewhat more independent over the last decade. However, some accounting firms acknowledged that achieving auditor independence was difficult: 10 percent (14 of the 147) of the accounting firms that responded to our survey said that it had become much or somewhat harder to maintain independence at the firm level in the past decade, and 19 percent (9 of the 47) indicated that it had become much or somewhat harder to maintain independence at the individual partner level over the past decade. Even if audit quality or auditor independence has been affected, it would be difficult to determine any direct link to consolidation among accounting firms because of numerous other structural changes that occurred both within and outside of the audit market. When we asked our survey respondents how consolidation influenced the quality of audit services they received, 64 percent (94 of 147) of the public companies responding to date and 95 percent (41 of 43) of accounting firms said that consolidation had little or no effect. However, some academics we contacted believed that consolidation might have indirectly influenced audit quality during the 1990s, with some suggesting, for example, that concentration among a few firms enabled the largest accounting firms to exercise greater influence over the audit standard-setting process and regulatory requirements.
Academics and Other Experts Said Other Factors Affected Audit Quality and Auditor Independence

In general, many of the people with whom we spoke—representing academia, the profession, regulators, and large public companies—believed that other factors could potentially have had a greater effect on audit quality than consolidation. According to knowledgeable individuals with whom we spoke, a variety of factors may have had a more direct impact on audit quality and auditor independence than consolidation. For example, they cited the removal of restrictions against advertising and direct solicitation of clients, the increased relative importance of management consulting services to accounting firms, legal reforms, changing auditing standards, and a lack of emphasis on the quality of the audit by clients and some capital market participants. Several individuals who were knowledgeable about accounting firm history suggested that when advertising and direct solicitation of other firms' clients began to be permitted in the 1970s, the resulting competitive pressure on audit prices led accounting firms to look for ways to reduce the scope of the audit, resulting in a decline in audit quality. Many of the experts with whom we consulted also suggested that the entry of accounting firms into more lucrative management consulting services led to conflict-of-interest issues that compromised the integrity and quality of the audit service. Other sources noted that, as a result of several legal reforms during the 1990s, it became more difficult and less worthwhile for private plaintiffs to assert civil claims against auditors, and that audit quality may have suffered as a result. This view was supported by a study that concluded that accounting firms were less likely to warn investors about financially troubled companies following the litigation reforms of the 1990s.
Consolidation Appears to Have Had Little Effect on Capital Formation or Securities Markets to Date, and Future Implications Are Unclear

Although accounting firms play an important role in capital formation and the efficient functioning of securities markets, we found no evidence to suggest that consolidation among accounting firms has had an impact on either of these to date. Moreover, we were unable to find research directly addressing how consolidation among accounting firms might affect capital formation or the securities markets in the future. Capital formation and the securities markets are driven by a number of interacting factors, including interest rates, risk, and supply and demand. Isolating any impact of consolidation among accounting firms on capital formation or the securities markets is difficult because of the complex interaction among factors that may influence the capital formation process, and we were unable to do so. Moreover, most capital market participants and other experts with whom we spoke were either unsure or did not believe that consolidation had any directly discernible impact on capital formation or the securities markets. Some said that the broader issues facing accounting firms, such as the recent accounting-related scandals involving Enron and WorldCom, might have affected the capital markets by reducing investor confidence, but that these were not necessarily linked to consolidation. The informational role played by accounting firms is key to reducing the disparity in information between a company's management and capital market participants regarding the company's financial condition, thus enhancing resource allocation. Consequently, to the extent that consolidation might affect audit quality, especially the perception of audit quality, the cost and allocation of capital could be affected.
For example, a perceived decline in audit quality for a given company might lead the capital markets to view that company’s financial statements with increased skepticism, potentially increasing the company’s cost of capital as well as altering the capital allocation decisions of capital market participants. The liability to which accounting firms are subject also creates a form of “insurance” to investors through an auditor’s assurance role, which provides investors with a claim on an accounting firm in the event of an audit failure. To the extent that consolidation increased the capital bases of some accounting firms, investors might view this as potentially increasing loss recovery in the event of an audit failure involving those firms. However, it is unclear whether there has been or would be any impact on investor behavior, either positive or negative, due to the increased capital base of some firms. Although there appears to be no direct effect from consolidation of the Big 8 on the capital markets to date, some capital market participants and anecdotal evidence suggested that investment bankers and institutional investors, both of whom are integral to the capital formation process, often prefer that public companies use the Big 4 to audit their financial statements. Although such a preference does not appear to represent much of a constraint to large national and multinational companies, it could have an impact on other, smaller companies accessing the capital markets, as a company’s use of a less well-known accounting firm might create added uncertainty on the part of investors and could possibly lead to delays in accessing new capital. For example, some research indicated that there was less initial public offering underpricing for companies that used Big 8 or larger accounting firms, as opposed to those that engaged smaller accounting firms. 
According to firm officials, as larger accounting firms reevaluate their portfolio of clients, some smaller public companies may no longer be able to engage the Big 4 or other large accounting firms with whom capital market participants are more familiar. Thus, partially as a result of a market with fewer accounting firms able or willing to provide audit services to larger public companies, some smaller companies could be hindered in their ability to raise capital. Because the audit market has become more concentrated, the Big 4 have been increasing their focus on gaining the audit contracts of larger public companies. In the process, the Big 4 shed some of their clients, particularly smaller ones, which they viewed as not profitable or as posing unacceptable risks to their firms. Likewise, smaller firms said that they have undergone similar risk assessment and client retention processes, and they have also shed some clients that no longer satisfied their client criteria. Moreover, the possible reduction in the number of accounting firms willing to audit public companies in the wake of the passage of Sarbanes-Oxley could further impact the availability and cost of capital for some smaller companies, particularly companies for whom the accounting firms may doubt the profitability of the audit engagements. As noted earlier, familiarity with an accounting firm on the part of capital market participants could lead to easier, less expensive access to the capital markets.

Smaller Accounting Firms Face Numerous Barriers to Entry into the Top Tier

Unlike the Big 4, which have established global operations and infrastructure, smaller accounting firms face considerable barriers to entry, such as the lack of capacity and capital limitations, when competing for the audits of large national and multinational public companies. First, smaller firms generally lack the staff resources, technical expertise, and global reach to audit large multinational companies.
Second, public companies and markets appear to prefer the Big 4 because of their established reputation. Third, the increased litigation risk and insurance costs associated with auditing public companies generally create disincentives for smaller firms to actively compete for large public company clients. Fourth, raising the capital to expand their existing infrastructure to compete with the Big 4, which already have such operations in place, is also a challenge, in part because of the partnership structure of accounting firms. Finally, certain state laws, such as state licensing requirements, make it harder for smaller firms that lack a national presence to compete. The firms with whom we spoke, including the Big 4, all told us that they did not foresee any of the other accounting firms being able to grow to compete with the Big 4 for large national and multinational public company clients in the near future.

Smaller Firms Generally Lack Staff Resources, Technical Expertise, and Global Reach to Audit Large Public Companies

Perhaps the most difficult challenge facing smaller firms is the lack of staff resources, technical expertise, and global reach necessary to audit most large national and multinational companies and their often complex operations. Moreover, 91 percent (117 of 129) of the public companies responding to our survey that would not consider using a non-Big 4 firm as their auditor said that the capacity of the firm was of great or very great importance in their unwillingness to do so. Large multinational companies are generally more complex to audit and require more auditors with greater experience and training. The complexity of a public company audit depends on many factors, such as the number of markets in which the company competes, the size of the company, the nature of the company's business, the variety of revenue streams it has, and organizational changes.
It is not uncommon for an audit of a large national or multinational public company to require hundreds of staff. Most smaller firms lack the staff resources necessary to commit hundreds of employees to a single client, which limits smaller firms’ ability to compete with the Big 4 for large audit clients. Yet, without having large clients, it is difficult to build the capacity needed to attract large clients. Even with global networks and affiliations, the capacity gap between the fourth- and fifth-ranked firms is significant. For example, the smallest Big 4 firm in terms of 2002 partners and nonpartner professional staff from U.S. operations, KPMG, is over five times the size of the fifth-largest firm, Grant Thornton. As table 3 illustrates, the gap between the top tier and the next tier has grown significantly since 1988. This gap spans revenue, number of partners, professional staff size, offices, and number of SEC clients. The result is a dual market structure—one market where the Big 4 compete with several smaller accounting firms for medium and small public companies and another market where essentially only the Big 4 compete for the largest public company clients. Although firms of all sizes expressed some difficulty attracting staff with specialized audit or industry-specific expertise, smaller firms said that this was particularly difficult. Further, some smaller firms told us that they had difficulty keeping talented employees, especially those with sought-after expertise, from leaving for jobs with the Big 4. The Big 4 can afford to more highly compensate employees and also offer a wider range of opportunities than smaller firms. Moreover, the public companies that responded to our survey to date ranked industry specialization or expertise as the third most important consideration in selecting an auditor. 
Some company officials also said that they preferred a firm to have a "critical mass" or depth of staff with the requisite expertise and knowledge, which generally required a firm of a certain size. In addition to smaller firms having staff resource and technical expertise constraints, some public companies said that their auditor had to have sufficient global reach to audit their international operations. Without extensive global networks, most smaller firms face significant challenges in competing for large multinational clients. As table 4 illustrates, the disparity in capacity between the Big 4 and the next three largest firms' global operations was even more dramatic than the comparison between their U.S. operations. For example, on average, the Big 4 had over 75,000 nonpartner professional staff and over 6,600 partners, compared with the next three largest firms' over 14,000 nonpartner professional staff and around 2,200 partners. While some of the smaller firms have international operations, we found that some public companies and others were either unaware that they had such operations or were uncertain of the degree of cohesive service that these smaller firms could provide through their global affiliations. The various national practices of any given Big 4 firm are separate and independent legal entities, but they often share common resources, support systems, audit procedures, and quality and internal control structures. Market participants said that the affiliates of smaller firms, in contrast, tended to have lower degrees of commonality. Rather than a tight network, they described smaller firms' international affiliations as associations or cooperatives in which there was less sharing of resources and internal control systems. In addition, they said that quality standards, practices, and procedures might be less uniform among smaller firms' affiliates, which raised concerns for multinational public companies.
Smaller Firms Lack Global Reputation

Smaller firms face a challenge in establishing recognition and credibility among large national and multinational public companies and, as discussed previously, capital market participants. One reason capital market participants often prefer a Big 4 auditor is their higher level of familiarity with the Big 4. For example, some large public companies said that some of the smaller accounting firms could provide audit services to certain large national public companies, depending on the complexity of the companies' operations. These individuals added, however, that boards of directors of these companies might not consider this option. Others said that despite recent accounting scandals involving the Big 4, many capital market participants continued to expect the use of the Big 4 for audit services. Thus, companies seeking to establish themselves as worthy investments may continue to engage one of the Big 4 to increase their credibility with investors. Eighty-two percent (121 of 148) of the public companies that responded to our survey indicated that reputation or name recognition was of great or very great importance to them in choosing an auditor. This was the second-most-cited factor, exceeded only by quality.

Increased Litigation Risk and Insurance Costs Make Large Company Audit Market Less Attractive Than Other Options

Increased litigation risk presents another barrier for smaller firms seeking to audit larger public companies, as they face difficulties managing this risk and obtaining affordable insurance. Like many of the challenges faced by smaller firms, this is a challenge for all firms. However, even assuming that smaller firms were able to purchase additional insurance to cover the new risk exposure, most smaller firms lacked the size needed to achieve the economies of scale to spread their litigation risk and insurance costs across a larger capital base.
According to 83 percent of firms (38 of the 46) that responded to our survey, litigation and insurance factors have had a great or moderate upward influence on their costs, which they indicated have increased significantly. Specifically, some of the firms with whom we spoke said that their deductibles and premiums have increased substantially and coverage has become more limited. Given the recent high-profile accounting scandals and escalating litigation involving accounting firms, some firms said that insurance companies saw increased risk and uncertainty in insuring firms that audited public companies. As a result, some of the smaller firms with whom we spoke said they had limited or were considering limiting their practices to nonpublic clients. Others said that the greater risk associated with auditing large public companies was a key factor in their decisions not to attempt to expand their existing operations in the public company audit market. Finally, many of the largest non-Big 4 firms said that they had ample opportunities for growth in the mid-sized public company segment of the public company audit market and in the private company audit market. In addition, smaller firms said that they could attract large companies as clients for other audit-related and nonaudit services such as forensic audits, management consulting services, and internal audits. In their efforts to maximize profits, these smaller firms said they were targeting market segments in which they were best positioned to compete, which generally did not include the large public company audit market.

Raising Capital for Growth Is Difficult

Access to capital is another critical element of an accounting firm's ability to generate the capacity needed to establish the network and infrastructure to audit large multinational companies. Several firms cited the lack of capital as one of the greatest barriers to growth and the ability to serve larger clients.
They said that the partnership structure of most public accounting firms was one factor that limited the ability of all firms to raise capital but posed a particular challenge for smaller firms. Under a partnership structure, accounting firms are unable to raise capital through the public markets. To expand their operations, accounting firms must look to other options, such as borrowing from financial institutions, merging with other accounting firms, growing the business without merging, or tapping the personal resources of their partners and employees. Raising capital through borrowing may be difficult because accounting firms, as professional service organizations, may lack the collateral needed to secure loans. While mergers provide a way for firms to grow and expand their capital base, the smaller firms with whom we spoke indicated that they were not interested in merging with other similarly sized firms. Some firms said that they did not see the economic benefits or business advantages of doing so, while others said that they wanted to maintain their unique identity. We also employed the Doogar and Easley (1998) model by simulating mergers among smaller firms in order to assess whether, in a purely price-competitive environment, such mergers could lead to viable competitors to the Big 4 for large national and multinational clients. In particular, we merged the five largest firms below the Big 4 in terms of the number of partners (Grant Thornton, BDO Seidman, Baird, Kurtz & Dobson, McGladrey & Pullen, and Moss Adams) and simulated the market to see if the newly merged firm could attract public companies (of any size) away from the Big 4. We first assumed that the newly merged firm would become as efficient as the Big 4, as measured by the staff-to-partner ratio.
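The bottom line of these scenarios reduces to simple percentage-point arithmetic. The sketch below restates the projected market shares reported from this simulation (11.2 percent in the best case, 6.4 percent with some efficiency gains, 4.5 percent with none) against the five firms' actual collective 2002 share of 8.6 percent; it is an illustrative recap, not a re-implementation of the Doogar and Easley model:

```python
# Illustrative recap of the merger-simulation scenarios. The share
# values are the projections reported in this section, not model
# output computed here; each is compared with the five firms' actual
# collective 2002 market share.

ACTUAL_COMBINED_SHARE = 8.6  # percent, five firms' collective 2002 share

scenarios = {
    "Big 4 efficiency (best case)": 11.2,
    "some efficiency gains": 6.4,
    "no efficiency gains": 4.5,
}

# Percentage-point change relative to the actual combined share.
changes = {
    name: round(share - ACTUAL_COMBINED_SHARE, 1)
    for name, share in scenarios.items()
}

for name, delta in changes.items():
    direction = "gain" if delta > 0 else "loss"
    print(f"{name}: {abs(delta)} percentage-point {direction}")
```

The arithmetic makes the point plainly: only under the best-case efficiency assumption does the merged firm gain share at all, and under lesser assumptions it would hold less share than the five firms did separately.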
Under this best-case scenario, we projected this firm's market share would be 11.2 percent, compared with the five firms' actual collective 2002 market share of 8.6 percent, indicating a 2.6 percentage-point gain in market share. However, when we assumed lesser efficiency gains, the merged firm's projected market share ranged from 4.5 percent (no efficiency gains) to 6.4 percent (some efficiency gains), indicating that the merged firm's market share would be lower than the five firms' collective market share (see app. II). Even ignoring many real-world considerations, such as reputation and global reach, these results illustrated the difficulty faced to date by any potential competitor to the Big 4 firms in the market for large public company audits. State Requirements Pose Obstacles for Smaller Firms in Particular While all accounting firms must comply with state requirements such as licensing, smaller firms that lack an existing infrastructure of national offices face increased costs and burdens in establishing the geographic coverage needed to audit most large public companies. All 50 states, the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands have laws governing the licensing of certified public accountants, including requirements for education, examination, and experience. While each jurisdiction restricts the use of the title "certified public accountant" to individuals who are registered as such with the state regulatory authority, the other licensure requirements are not uniform. State boards have been working toward a more uniform system based on the Uniform Accountancy Act (UAA), which is a model licensing law for state regulation within the accounting profession. The UAA seeks adoption of the idea of "substantial equivalency" with regard to education, examinations, and experience, so that states recognize each other's certification as "substantially equivalent" to their own.
According to National Association of State Boards of Accountancy and AICPA officials, fewer than half (23) of the jurisdictions had agreed to the equivalency practice as of July 1, 2003. Some firms expressed concerns that potential state and federal duplication of oversight could pose more of a burden for smaller firms than the Big 4 and might induce some smaller firms to stop auditing public companies altogether. Specifically, to mirror the federal oversight structure, most states (37) implemented statutorily required peer reviews for firms registered in the state. Until 2002, these requirements were generally consistent with the peer review process conducted by AICPA’s SEC Practice Section. However, Sarbanes-Oxley created PCAOB to establish auditing standards and oversee firms’ compliance with those standards. Unlike the old peer review that focused on a firm’s overall operations, PCAOB plans to conduct inspections of a firm’s public company practice. Whether this inspection will be sufficient to satisfy the peer review requirements under state law or whether firms with private clients would have to be subject to both state- and federal-level reviews is unclear at this time. Observations The audit market is in the midst of unprecedented change and evolution. It has become more highly concentrated, and the Big 4, as well as all accounting firms, face tremendous challenges as they adapt to new risks and responsibilities, new independence standards, a new business model, and a new oversight structure, among other things. In many cases it is unclear what the ultimate outcome will be and our findings about past behavior may not reflect what the situation will be in the future. Therefore, we have identified several important issues that we believe warrant additional attention and study by the appropriate regulatory or enforcement agencies at some point. 
First, agencies could evaluate and monitor the effect of the existing level of concentration on price and quality to see if there are any changes in the firms’ ability to exercise market power. This is especially important as the firms move to a new business model with management consulting becoming a less significant source of revenue. Second, the issue of what, if anything, can or should be done to prevent further consolidation of the Big 4 warrants consideration. Such an analysis could determine the possible impact of increased concentration through the voluntary or involuntary exit of one of the current Big 4 firms. If the effects were seen as detrimental, regulatory and enforcement agencies could evaluate the types of actions that could be taken to mitigate the impact or develop contingency plans to deal with the impact of further consolidation. Part of this analysis would be to evaluate the pros and cons of various forms of government intervention to maintain competition or mitigate the effects of market power. Third, it is important that regulators and enforcement agencies continue to balance the firms’ and the individuals’ responsibilities when problems are uncovered and to target sanctions accordingly. For example, when appropriate, hold partners and employees rather than the entire firm accountable and consider the implications of possible sanctions on the audit market. However, it is equally important that concerns about the firms’ viability be balanced against the firms’ believing they are “too few to fail” and the ensuing moral hazard such a belief creates. Fourth, Big 4 market share concentration, particularly in key industries, may warrant ongoing and additional analysis, including evaluating ways to increase accounting firm competition in certain industries by limiting market shares. Finally, it is unclear what can be done to address existing barriers to entry into the large public company market. 
However, it may be useful to evaluate whether addressing these barriers could prevent further concentration in the top tier. Part of this evaluation could include determining whether there are acceptable ways to hold partners personally liable while reasonably limiting the firms' exposure, but at the same time increasing the firms' ability to raise capital. Agency Comments and Our Evaluation We provided copies of a draft of this report to SEC, DOJ, PCAOB, and AICPA for their comment. We obtained oral comments from DOJ officials from the Antitrust and Criminal Divisions, who provided additional information on the extent to which coordination with antitrust officials and consideration of the competitive implications of the Andersen criminal indictment occurred. As a result, we clarified the language provided in this report. SEC, DOJ, and AICPA provided technical comments, which have been incorporated into this report where appropriate. PCAOB had no comments. We are sending copies of this report to the Chairman and Ranking Minority Member of the House Committee on Energy and Commerce. We are also sending copies of this report to the Chairman of SEC, the Attorney General, the Chairman of PCAOB, and other interested parties. This report will also be available at no cost on GAO's Internet homepage at http://www.gao.gov. This report was prepared under the direction of Orice M. Williams, Assistant Director. Please contact her or me at (202) 512-8678 if you or your staff have any questions concerning this work. Key contributors are acknowledged in appendix V. Scope and Methodology As mandated by Section 701 of the Sarbanes-Oxley Act of 2002 (P.L.
107-204) and as agreed with your staff, our objectives were to study (1) the factors leading to the mergers among the largest public accounting firms in the 1980s and 1990s; (2) the impact of consolidation on competition, including the availability of auditor choices for large national and multinational public companies; (3) the impact of consolidation on the cost, quality, and independence of audit services; (4) the impact of consolidation on capital formation and securities markets; and (5) the barriers to entry faced by smaller firms in competing with the largest firms for large national and multinational public company clients. We conducted our work in Chicago, Illinois; New York, New York; and Washington, D.C., from October 2002 through July 2003. Identifying the Factors for Consolidation To identify the factors contributing to consolidation among accounting firms, we interviewed past and current partners of public accounting firms involved in Big 8 mergers, and officials from the Department of Justice (DOJ) and Federal Trade Commission (FTC). Specifically, we conducted in-depth interviews with senior partners of the Big 4 firms and, to the extent possible, the former partners, chairmen, and chief executive officers (CEO) of the Big 8 who were instrumental in their firms' decisions to consolidate. We asked these officials to recount in detail their firms' histories of consolidation and their views on the impetus for merging. We also conducted interviews with senior DOJ officials about the studies and investigations they had undertaken to determine whether the mergers would raise antitrust issues. We did not, however, review any of the antitrust analyses conducted by DOJ specific to any of the proposed mergers during the 1980s and 1990s. We requested DOJ's antitrust analysis and related documentation from the mergers among the largest firms in 1987 and 1997.
According to DOJ officials, most of the firm documents had been returned to the relevant parties, and other documents were viewed as "predecisional" by DOJ. While GAO's statute provides us with access to predecisional information absent a certification by the President or the Director of the Office of Management and Budget, we were more interested in the reasons for the mergers than in DOJ's analysis in approving the mergers. Therefore, we used other sources to obtain the necessary information for this report. To the extent possible, we obtained copies of public decisions made by FTC in the 1970s and 1980s concerning the ability of professional service firms, including the accounting firms, to advertise. As directed by the mandate, we coordinated with the Securities and Exchange Commission (SEC) and SEC's counterparts from the Group of Seven nations (Canada, France, Germany, Italy, Japan, United Kingdom, and United States). To do this, we met with the representatives of the appropriate regulatory agencies under the auspices of the International Organization of Securities Commissions and obtained additional information relevant to their countries. We also conducted a literature review of existing studies on the history of the accounting profession and consolidation. Impact of Consolidation on Competition, Auditor Choices, Audit Fees, and Audit Quality and Auditor Independence To evaluate the impact of consolidation on competition, auditor choices, audit fees, and audit quality and auditor independence, we consulted with academics and other researchers, U.S. and foreign regulators, and trade associations, and we reviewed relevant academic literature. Most of the research studies cited in this report have been published in highly regarded, refereed academic journals. These studies were also reviewed by GAO's economists, who determined that they did not raise serious methodological concerns.
However, the inclusion of these studies is purely for research purposes and does not imply that we deem them definitive. We sent out 26 structured questionnaires regarding the impact of consolidation on choice, price, and quality to a cross section of academics and other experts (with backgrounds in accounting, securities, and industrial organization) and received 14 responses. We also collected data and calculated our own descriptive statistics for analysis. Using audit market data from various sources, we computed concentration ratios and Hirschman-Herfindahl indexes and conducted trend analyses and tests of statistical independence. We also employed a simple model of pure price competition, in which clients choose auditors based on price, ignoring factors such as quality or reputation, to assess whether the current high degree of concentration in the market for audit services is necessarily inconsistent with a purely price competitive setting. To augment our empirical findings, we conducted two surveys. Finally, we interviewed a judgmental sample of 20 chairpersons of audit committees of Fortune 1000 companies to obtain their views on consolidation and competition. Data Analysis Used a Variety of Sources To address the structure of the audit market we computed concentration ratios and Hirschman-Herfindahl indexes for 1988 to 2002 using the Who Audits America database, a directory of public companies with detailed information for each company, including the auditor of record, maintained by Spencer Phelps of Data Financial Press. We used Public Accounting Report (PAR) and other sources for the remaining trend and descriptive analyses, including the analyses of the top and lower tiers of accounting firms, contained in the report. Data on audit fees were obtained from a variety of academic and other sources, including Manufacturers Alliance. The proxy for audit fees that we constructed was based on numerous issues of PAR and Who Audits America. 
Given the data used and the manner in which our proxy was constructed, it should be considered a rough proxy and is used for illustrative trend analysis in this report. To verify the reliability of these data sources, we performed several checks to test the completeness and accuracy of the data. Random samples of the Who Audits America database were cross-checked with SEC proxy filings and other publicly available information. Descriptive statistics calculated using the database were also compared with similar statistics from published research. Moreover, Professors Doogar and Easley (see next section for fuller discussion), who worked with us on the modeling component of the study, compared random samples from Compustat, Dow Jones Disclosure, and Who Audits America and found no discrepancies. Because of the lag in updating some of the financial information, the results should be viewed as estimates useful for describing market concentration. We performed similar, albeit more limited, tests on PAR data. However, these data are self-reported by the accounting firms, which are not subject to the same reporting and financial disclosure requirements as SEC registrants. We Used the Doogar and Easley (1998) Model of Audit Market Structure to Assess Concentration in a Purely Price Competitive Framework We also employed a simple model of pure price competition, in which clients choose auditors based on price, ignoring factors such as quality or reputation, to assess whether the current high degree of concentration in the market for audit services is necessarily inconsistent with a price-competitive setting. We worked with Professor Rajib Doogar, University of Illinois at Urbana-Champaign, and Professor Robert Easley, University of Notre Dame, to expand and update their 1998 model using 2002 data.
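The concentration measures used in the trend analyses above can be illustrated with a short sketch. The eight market shares below are hypothetical, not figures from the Who Audits America database; they simply show how the two statistics are computed from a list of firm shares expressed in percent.

```python
def concentration_ratio(shares, n=4):
    """CR-n: the combined market share of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

def herfindahl_index(shares):
    """Sum of squared market shares; with shares in percent, the
    index runs from near 0 (atomistic) to 10,000 (monopoly)."""
    return sum(s ** 2 for s in shares)

# Hypothetical market shares (percent) for eight audit firms.
shares = [30.0, 25.0, 20.0, 15.0, 4.0, 3.0, 2.0, 1.0]

print(concentration_ratio(shares))  # CR-4: 90.0
print(herfindahl_index(shares))     # 2180.0
```

A CR-4 near 90 and an index above 1,800, as in this hypothetical market, are the kind of values conventionally read as indicating a highly concentrated market.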
Our sample consisted of 5,448 companies listed on the American Stock Exchange, NASDAQ, and New York Stock Exchange, plus companies identified from Who Audits America whose stock traded on other over-the-counter markets. To ensure consistency with Doogar and Easley (1998), we limited the market studied to industrial companies only. The information on accounting firms, such as the number of partners and staff, was obtained from PAR. Professors Doogar and Easley performed the simulations. To determine whether the tight oligopoly in the audit market in 2002 could be explained with a model of pure price competition, we ran three market simulations. In the first simulation, we allowed the firms to compete for clients to determine market share in a simulated price-competitive market. For the second simulation, we assigned companies to their current auditor and simulated the market to see if the accounting firms could defend their market share in a purely price-competitive market. Finally, we combined several smaller firms to see if they could successfully compete with the Big 4 for larger clients. In each simulation, the computer-generated market mimicked a process of pure price competition in which firms bid for each client based on the short-term cost of performing the audit. Model Assumptions The model makes several principal assumptions. First, the model assumes that firms produce audits with a constant returns-to-scale technology using a fixed number of partners and a variable number of staff. Second, it assumes that firms seek to minimize cost (maximize profits), which determines each firm's optimal staff-to-partner, or leverage, ratio. Third, the model assumes that firms compete in a market characterized by perfect price competition—firms bid their incremental costs for audits and clients choose auditors solely on price, so that firm expertise, quality, and reputation, among other things, are not considered.
In the model, firms with lower leverage ratios are more efficient and can therefore bid lower prices for audit engagements than less efficient firms; thus, clients will gravitate to more efficient accounting firms. Because data on partners and staff published by PAR are reported at the consolidated level for the entire accounting firm, not just the audit division, some error may be introduced into the measure of leverage. In this model and simulation framework, a client's size is captured by the natural logarithm (log) of its total assets, which has been shown to be a good predictor of audit hours and thus audit effort. The model ignores all client characteristics that may influence audit fees but not "out-of-pocket" costs of audit production. Liability and litigation costs are assumed to be zero. Although our survey responses revealed that other factors such as expertise, global reach, and reputation play an important role in selecting an accounting firm, it is notable that a simple model, which does not take these factors into consideration, is able to simulate actual market shares that currently exist. Our work shows how publicly available data and the Doogar and Easley (1998) model can be combined to address important audit market concentration issues that are not easily addressed, especially given limited data on audit fees. Simulation One A short-run equilibrium is obtained when accounting firms compete on price until every client seeking an auditor is satisfied (that is, it has received the lowest price possible). After all clients have been assigned to an auditor, the incumbent firm charges its client a fee equal to the second-lowest bid. The results are then generated based on various assumed levels of switching costs (the cost of changing auditors). As table 5 illustrates, the model of price competition was able to closely predict the actual 2002 market shares, regardless of the level of switching cost assumed.
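The bidding process just described can be sketched as follows. This is a minimal illustration, not the Doogar and Easley implementation: the firm unit costs, client assets, and incumbents are hypothetical, the switching cost is treated as a simple fraction of the incumbent's bid, and the partner-capacity constraints of the actual model are omitted.

```python
import math

def run_auction(firms, clients, switching_cost=0.05):
    """firms: {name: cost per unit of audit effort} (lower cost = more
    efficient firm). clients: [(name, total_assets, incumbent)].
    Returns {client: (winning_firm, fee)}, where the fee charged is
    the second-lowest bid, as in the text above."""
    results = {}
    for client, assets, incumbent in clients:
        effort = math.log(assets)  # log of assets proxies audit effort
        bids = {firm: cost * effort for firm, cost in firms.items()}
        cheapest = min(bids, key=bids.get)
        # The incumbent retains the client unless a rival undercuts
        # its bid by more than the assumed switching cost.
        winner = cheapest
        if incumbent in bids and bids[cheapest] > bids[incumbent] * (1 - switching_cost):
            winner = incumbent
        fee = sorted(bids.values())[1] if len(bids) > 1 else bids[winner]
        results[client] = (winner, fee)
    return results

firms = {"BigFirm": 1.0, "MidFirm": 1.2}  # hypothetical unit costs
clients = [("Acme Corp", 1e9, "MidFirm"), ("Zenith Inc", 5e7, "BigFirm")]
assignments = run_auction(firms, clients)
```

With a 5 percent switching cost, the more efficient "BigFirm" wins "Acme Corp" away from its incumbent; raising `switching_cost` to a prohibitive level (say 0.25) lets the incumbent keep the client, mirroring the finding that high switching costs freeze market shares.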
Of the 5,448 industrial companies, the Big 4 audited 68 percent of the log of assets in 2002, and the model of price competition consistently predicted that this tier of firms would audit 68 percent or more of the total. In fact, collectively the Big 4 firms are predicted to audit 1 to 2 percentage points more than the actual percentage audited, depending on the cost of switching auditors. As table 5 also illustrates, we found that if switching costs are prohibitively expensive (20 percent or above), companies will not switch auditors and price competition will have no impact on the Big 4's market share. Simulation Two In the second market simulation, we assigned clients to their current auditor and simulated the market to see if the accounting firms could defend their market share in a purely competitive market. As table 6 shows, the model predicted that the Big 4 would audit 64.0 percent of the total market, compared with the Big 4's actual market share of 62.2 percent in 2002. Moreover, the model predicted that the Big 4 would audit 96.3 percent of companies in the sample with assets greater than $250 million, compared with the 97.0 percent actually audited by the Big 4 in 2002. Additionally, Doogar and Easley (1998) found that the model of pure price competition could explain the pattern of market shares in 1995. Simulation Three Finally, we merged the five largest firms below the Big 4 in terms of the number of partners (capacity)—Grant Thornton, BDO Seidman, Baird, Kurtz & Dobson, McGladrey & Pullen, and Moss Adams—and simulated the market to see if the newly merged firm could successfully win clients from the Big 4 (see table 7). Measured by the log of assets, these firms collectively audited 8.6 percent of the actual market in 2002. However, when we simulated the market to begin the process, the model predicted these firms would collectively audit only 4.5 percent of the market, while the Big 4 would audit 70.1 percent.
When we simulated the merger of the five firms and assumed no efficiency gains would result, the merged firm’s market share declined slightly. When modest efficiency gains were permitted, the merged firm gained market share, to 6.4 percent, and was able to attract a few of the Big 4’s larger clients. Finally, in the best-case scenario in which we allowed the newly merged firm to become as efficient as the Big 4 (strong efficiency gains), the market share increased to 11.2 percent, and both the Big 4 and remaining accounting firms lost market share to the merged firm. However, since the five firms actually audited 8.6 percent of the market in 2002 collectively, the simulated mergers only resulted in a market share increase of 2.6 percentage points in the best-case scenario. Survey Data To augment our empirical analysis, we conducted two sample surveys to get information from the largest accounting firms and their clients. First, we surveyed representatives of each of the 97 largest accounting firms— those with 10 or more corporate clients that are registered with SEC— about their experience consolidating with other firms, their views on consolidation’s effects on competition, and what they thought were the potential implications of consolidation for auditor choice, audit fees, audit quality, and auditor independence within their industry. We identified the 97 firms and obtained name and address information for the executive to be contacted primarily from the membership list of the American Institute of Certified Public Accountants’ (AICPA) SEC Practice Section. To develop our questionnaire, we consulted a number of experts at SEC, AICPA, and others knowledgeable about the accounting profession. We also pretested our questionnaire with two of the Big 4 firms, four other firms among the largest 97, and two small firms. We began our Web-based survey on May 23, 2003, and included all usable responses as of July 11, 2003, to produce this report. 
One of the 97 firms was found to be ineligible for the survey because the answers of another responding firm comprised the activity of the former, so the final population surveyed was 96 firms. We received 47 usable responses from these 96 firms, for an overall response rate of 49 percent. However, the number of responses to individual questions may be fewer than 47, depending on how many responding firms were eligible to or chose to answer a particular question. Second, we surveyed a random sample of 250 of the 960 largest publicly held companies. We created this population from the 2003 list of the Fortune 1000 companies produced by Fortune, a division of Time, Inc., after removing 40 private firms from this list. We mailed a paper questionnaire to the chief financial officers, or other executives performing that role, requesting their views on the services they received from their auditor of record, the effects of consolidation on competition among accounting firms, and its potential implications. To develop this questionnaire, we consulted with AICPA and SEC and pretested with six large public companies from a variety of industries. The survey began on May 6, 2003. We removed one company that had gone out of business, and received 148 usable responses as of July 11, 2003, from the final sample of 249 companies, for an overall response rate of 59 percent. Again, the number of responses to individual questions may fluctuate, depending on how many respondents answered each question. We plan to issue a subsequent report in September 2003 on client responses received through July 30, 2003. While the public company survey results came from a random sample drawn from the population of Fortune 1000 companies and thus could be weighted to statistically represent that larger group, we are reporting totals and percentages only for those companies (and accounting firms) actually returning questionnaires. 
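The overall response rates reported above follow directly from the usable-response counts and the final survey populations; a minimal check of the arithmetic:

```python
# Sketch reproducing the response-rate arithmetic reported above.
def response_rate_pct(responses, population):
    """Overall response rate, rounded to a whole percentage."""
    return round(100 * responses / population)

print(response_rate_pct(47, 96))    # accounting firm survey: 49
print(response_rate_pct(148, 249))  # public company survey: 59
```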
Since the small number of respondents to both surveys at the time of publication could significantly differ in their answers from the answers nonrespondents might have given had they participated, it is particularly risky to project the results of our survey not only to the nonrespondents but also to the part of the public company population we did not sample. There are other practical difficulties in conducting any survey that may also contribute to errors in survey results. For example, differences in how a question is interpreted or the sources of information available to respondents can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such errors. In addition to the questionnaire testing and development measures mentioned above, we followed up with the sample firms and clients with e-mails and telephone calls to encourage them to respond and offer assistance. We also checked and edited the survey data and programs used to produce our survey results. Finally, we conducted structured interviews with a judgmental sample of 20 chairs of audit committees for Fortune 1000 companies to obtain their views on audit services, consolidation, and competition within the audit market. Our selection criteria included geographic location, the company's industry, and the chairperson's availability. The audit chairpersons whom we interviewed all had a background in business, and most had been or were currently serving as CEOs of a Fortune 1000 company. On average, the chairpersons we interviewed served on over two boards in addition to the board on which they sat for purposes of the interview. On average, they served as chairpersons of the audit committee for just over 2 years, served as a member on the audit committee for over 5 years, and served on that Fortune 1000 company's board of directors for over 7 years.
Impact of Consolidation on Capital Formation and Securities Markets To address the impact of consolidation and concentration among large accounting firms on capital formation and securities markets, we interviewed representatives from accounting firms, investment banks, institutional investors, SEC, self-regulatory organizations, credit agencies, and retail investors, among others. We also consulted with numerous academics and reviewed relevant economic literature. Identifying Barriers to Entry To identify the barriers to entry that accounting firms face in the public company audit market, we discussed competition and competitive barriers with representatives of a cross section of public accounting firms, large public companies, various government agencies, the accounting profession and trade associations, institutional investors, securities underwriters, self-regulatory organizations, credit rating agencies, and other knowledgeable officials. We obtained information from the National Association of State Boards of Accountancy and AICPA. We also reviewed existing state and federal requirements. Finally, we used the Doogar and Easley (1998) model to roughly assess whether mergers between non-Big 4 firms could potentially increase the number of accounting firms capable of auditing large national and multinational companies. GAO Surveys of Public Accounting Firms and Fortune 1000 Public Companies To provide a thorough, fair, and balanced report to Congress on these issues, it is essential that we obtain the experiences and viewpoints of a representative sample of public accounting firms. Your firm has been selected from a group of public accounting firms comprising the American Institute of Certified Public Accountants' (AICPA) SEC Practice Section member firms and other public accounting firms, not members of the AICPA's SEC Practice Section, that performed audits of public companies registered with the SEC.
In conducting these studies, the GAO is asking for your cooperation and assistance by providing the views of your public accounting firm on industry consolidation and the potential effects of mandatory audit firm rotation. This survey should be completed by the senior executive of your firm (e.g., the Chief Executive Officer/Managing Partner) or their designated representative(s) who can respond for the firm on matters of industry consolidation and mandatory firm rotation.

· "Public company" refers to issuers of securities subject to the financial reporting requirements of the Securities Exchange Act of 1934, the Investment Company Act of 1940, and registered with the Securities and Exchange Commission (SEC). For purposes of this survey, mutual funds and investment trusts that meet the statutory definition of issuer of securities are considered public companies.

· "Multinational or foreign public company" is a public company with significant operations (10 percent or more of total revenue) in one or more countries outside the United States.

· "Domestic public company" is a public company with no significant operations (10 percent or more of total revenue) outside the United States.

· "Auditor," "auditor of record," and "public accounting firm" refer to an independent public accounting firm registered with the SEC that performs audits and reviews of public company financial statements and prepares attestation reports filed with the SEC. In the future, these public accounting firms must be registered with the Public Company Accounting Oversight Board (PCAOB) as required by the Sarbanes-Oxley Act.

Please provide the following information so that we can contact you if we have any questions: Name of Primary Contact: ______________________

1. Is your public accounting firm currently a member of the AICPA's SEC Practice Section?
1.
2.
3.

2. At this time, does your public accounting firm plan to register with the PCAOB?
1.
2.
3.
4.

3.
In total and for each of the following categories, approximately how many public companies did your public accounting firm serve as auditor of record during your firm's last fiscal year? Enter numeric digit in each box. N=44 Mean=32 Median=17 Range=2 - 232

4. With respect to your public company audit, review, and attest clients during your firm's last fiscal year, did you serve as auditor of record for a public company or number of public companies that together represent over 25% of the market share of a specific industry?
1. Yes (click to go to Question 5.)
2. No (click to go to Question 6.) 94%
3.

5. Please identify each industry for which your public company audit, review, and attest clients during your firm's last fiscal year represented, in the aggregate, at least 25% of the public company market share in the industry. In addition, for each industry identified, please also provide your firm's estimate of the aggregate market share your public company clients represent and the basis your firm used for estimating market share (for example, share of number of public companies in an industry, share of industry revenue, share of industry market capitalization, etc.)

6. With respect to your firm's public company audit, review, and attest clients during your firm's last fiscal year, please indicate those industries to which 5 percent or more of your public company audit, review, and attest practice resources (based on hours, staff, etc.) were devoted. (The following industry classification is based on the North American Industry Classification System (NAICS). Generally, we have included classifications covering each NAICS industry sector and, with respect to the Manufacturing sector, selected sub-sectors.)
1. Accommodations and Food Services N=2
2. Administrative and Support Services and Waste Management and Remediation Services N=2
3. Agricultural, Forestry, Fishing, and Hunting N=0
4. Ambulatory Health Care Services N=1
5. Arts, Entertainment, and Recreation N=5
6. Construction N=2
7. Educational Services N=0
8. Finance and Insurance N=19
9. Information Services N=13
10. Management of Companies and Enterprises N=0
11. Manufacturing--Chemical N=2
12. Manufacturing--Computer and Electronic Products N=9
13. Manufacturing--Food N=1
14. Manufacturing--Paper N=0
15. Manufacturing--Primary Metal N=1
16. Manufacturing--Transportation Equipment N=2
17. Manufacturing--Other N=14
18. Mining N=5
19. Professional, Scientific, and Technical Services N=10
20. Public Administration N=0
21. Real Estate and Rental and Leasing N=6
22. Trade--Retail N=4
23. Trade--Wholesale N=4
24. Transportation and Warehousing N=2
25. Utilities N=2
26. Other - please specify in box below N=21
If you checked "Other" industries - specify below:

7. Approximately what percentage of your firm's total revenue (from U.S. operations) came from each of the following types of services during your firm's last fiscal year? Please fill in the percentages so that they add up to 100%. N=45 Mean=53 Median=49 Range=25 - 100 N=43 Mean=30 Median=30 Range=10 - 55 N=25 Mean=14 Median=10 Range=2 - 40 N=37 Mean=14 Median=10 Range=1 - 40

8. Approximately what percentage of your firm's audit, review, and attest revenue from U.S. operations came from each of the following categories of clients during your firm's last fiscal year? Please fill in the percentages so that they add up to 100%. N=18 Mean=16 Median=12 Range=1 - 90

9. Does your firm plan to offer audit, review, and attestation services to large public companies during the next 5 years?
1. Yes (Click to go to Question 10.) 19%
2.
3.
4.
Please explain why your firm currently does not plan to offer audit, review, and attest services to large (revenues of $5 billion or more) public companies during the next 5 years?

10. Approximately how many times did your firm succeed another public accounting firm as auditor of record for a public company client during your firm's last three fiscal years? N=45 Mean=39 Median=10 Range=1 - 414

11.
Since December 31, 2001 approximately how many times did your firm succeed Arthur Andersen as auditor of record for a public company client? N=17 Mean=49 Median=2 Range=1 - 308 12. When your answers to the "Public Accounting Firm Background" part of this survey are final and ready to be used by GAO, please click the "Completed This Part of Survey" button below. 1. Completed This Part of Survey 100% 2. 13. Please click the "Next Section" button at the bottom of the page to continue with the questionnaire, or click the link below to return to the main menu. Click here CONSOLIDATION IN THE PUBLIC ACCOUNTING PROFESSION We are focusing on the trend towards consolidation in the public accounting profession starting in 1987, when consolidation activity among the largest accounting firms began. Your Firm's Consolidation History Please consider whether your firm has combined with another to form a new entity or has restructured in any way that involved the assumption of new assets and services. Please include any mergers or acquisitions as consolidation events. 14. Has your firm been involved in one or more consolidations since 1987? Please check one box. 1. 2. No (Click to go to Question 16.) 3. 15. IF YES: What size firm(s) did your firm merge with or acquire? Please check all that apply. 1. Firm(s) with larger net revenue 2. Firm(s) with similar net revenue 3. Firm(s) with smaller net revenue N=25 4. Other - please describe in box below N=2 If you checked "Other" - please describe below: 16. Starting in 1987, has your firm declined any opportunities to participate in consolidation activity that would have significantly increased its market share? Please click one button. 1. 2. 3. Please explain: 17. Apart from consolidations, has your firm entered into any affiliations - such as networks, alliances, global organizations, or other arrangements - with other accounting firms in the U.S. or internationally to provide audit, review, and attest services since 1987? 
Please click one button. 1. Yes - we joined an affiliation since 1987 2. No - but we joined an affiliation before 1987 3. No - we once were a member of an affiliation but are no longer 4. 5. If your firm HAS been involved in any form of consolidation activity, please answer the following questions; otherwise click below to skip to the next applicable question. Click here 18. How important was each of the following reasons in your firm's decisions to consolidate? Click one button in each row. To increase market share/to increase To establish presence in new geographic To decrease costs/achieve economies of scale To gain talented staff N=30 To expand audit, review, and attest To enhance audit, review, and attest To gain certain clients N=30 To establish presence in new client To gain access to capital N=30 To compete more successfully against To improve the quality of the audit N=30 Other reason - describe in the box below If "Other reason" -- Please describe: 19. Has your consolidation activity enabled your firm to provide or increase audit, review, and attest services to large domestic or multinational clients? 1. Yes, previously unable to provide, but are now able 2. Yes, previously able to provide and increased our ability 3. No, our ability remained unchanged 4. No Answer Please continue with the next question if your firm has ever DECLINED AN OPPORTUNITY to participate in a consolidation activity that would have significantly increased its market share OR if it has NOT been involved in a consolidation since 1987; otherwise click on the link below to skip to the next applicable question. 20. To what extent does each of the following reasons explain why your firm did NOT participate in a consolidation activity? Click one button in each row.
Consolidation in the Accounting Profession

ALL FIRMS: This next section asks you to consider the relative role that the consolidation activity of the largest accounting firms, among other things, has played in influencing certain aspects of the accounting profession in the past decade. Please base your response on your experience in the past decade, or if this is not possible, on the time frame that reflects your experience. 21. How have your costs for performing audit, review, and attest services changed in the past decade? (Please adjust for inflation and volume of business.) 1. 2. 3. 4. 5. 6. 22. Many factors impact costs in different ways. In which way has each of the following influenced your audit, review, and attest operating costs, if at all, over the past decade? (Please adjust for inflation and volume of business where appropriate.) Click one button in each row. of audits and accounting standards The consolidation activity that has occurred starting in 1987 among the largest accounting firms N=43 The consolidation activity that has occurred within your firm (leave "No Answer" checked if your firm has not consolidated) N=29 Other factor - describe in the box If "Other factor" -- Please describe: 23. How have your audit, review, and attest fees (for example, net rate per billable hour) changed in the past decade? (Please adjust for inflation and volume of business.) 1. 2. 3. 4. 5. 6. No Answer 24. In which way has each of the following influenced your audit, review, and attest fees, if at all, in the past decade? (Please adjust for inflation and volume of business where appropriate.) The consolidation activity that has occurred starting in 1987 among the The consolidation activity that has occurred within your firm (leave "No Answer" checked if your firm has not Other factor - describe in the box below If "Other factor" -- Please describe: 25. Has it become harder or easier for your firm to maintain audit quality in the past decade? 1. 2. 3. 4. 5. 6.
No Answer 26. In which way has each of the following contributed to making it harder or easier for your firm to maintain audit quality in the past decade? Ability to recruit and retain qualified Skills of staff members N=46 The consolidation activity that has occurred starting in 1987 among the The consolidation activity that has occurred within your firm (leave "No Answer" checked if your firm has not Other factor - describe in the box below If "Other factor" -- Please describe: 27. Has it become harder or easier for your firm to maintain independence as an auditor at the firm level in the past decade? 1. 2. 3. 4. 5. 6. Please explain: 28. In which way has each of the following contributed to making it harder or easier to maintain independence as an auditor at the firm level in the past decade? Profitability of non-audit services N=43 Tenure of relationship with client N=43 The consolidation activity that has occurred starting in 1987 among the The consolidation activity that has occurred within your firm (leave "No Answer" checked if your firm has not Other factor - describe in the box below If "Other factor" -- Please describe: 29. Has it become harder or easier to maintain personal independence as an auditor in the past decade? 1. 2. 3. 4. 5. 6. Please explain: 30. Has it become harder or easier for your firm to successfully compete to be the auditor of record for large domestic or multinational public clients in the past decade? 1. 2. 3. 4. 5. 6. 31. In which way has each of the following contributed to making it harder or easier for your firm to successfully compete to be the auditor of record for large domestic or multinational public clients in the past decade? 
Threat of litigation to your firm N=21 Threat of litigation to clients N=20 Tenure of relationship with client N=21 The consolidation activity that has occurred starting in 1987 among the The consolidation activity that has occurred within your firm (leave "No Answer" checked if your firm has not Other factor - describe in the box below If "Other factor" -- Please describe: 32. Please indicate whether you have experienced a net increase or decrease over the past decade in the following types of clients for whom your firm performs audit, review, and attest services. 33. Has your firm lost any audit, review, and attest clients to other accounting firms specifically because the client(s) wanted another firm to help them prepare for an initial public offering or subsequent issuance of securities? 1. Yes - client went to a Big 4 firm for IPO or other securities issuance 47% 2. Yes - client went to a NON-Big 4 firm for IPO or other securities issuance 17% 3. 4. 34. In the past five years, has your firm accepted any new clients specifically to assist their initial public offerings or subsequent issuance of securities? 1. Yes - Please enter approximate number in the box below 72% 2. 3. If "Yes" -- enter an approximate number of clients, using numeric digits: Competition in the Accounting Profession 35. Based on your experience, how would you describe the current level of competition among public accounting firms as a whole in providing audit, review, and attest services to the following types of companies? 2% 36. Based on your experience, how has the overall level of competition to provide audit, review, and attest services to each of the following types of companies changed in the past decade as a result of the consolidation activity that has occurred in the accounting profession? 37. How, if at all, has the consolidation activity of the largest accounting firms affected each of the following areas? 
Opportunity for your firm to provide service to large public companies N=37 Opportunity for your firm to provide service to small and mid-sized public Opportunity for your firm to provide service to private companies N=47 Other area - describe in the box below If "Other area" -- Please describe: 38. Overall, how do you think that the consolidation activity that has occurred in the accounting profession in the past decade has affected competition? 1. 2. 3. 4. 5. 6. 7. No Answer Impediments to Competition (Barriers to Entry) 39. To what extent do you think that each of the following is an impediment for accounting firms wishing to provide audit, review, and attest service to large domestic or multinational public companies that are subject to the securities laws? Not being a "Big 4" firm N=42 Credibility with financial markets and Other impediment - describe in the box IF "OTHER IMPEDIMENT" -- Please describe: 40. Are there any federal or state regulations that impede competition among public accounting firms to provide audit, review, and attest services to public companies? 1. 2. 3. 41. For each of the following federal or state regulatory requirements, please indicate how much of an impediment, if any, that requirement is to competition among public accounting firms in the United States. Please also list any additional federal and/or state regulations that impede competition. The Sarbanes-Oxley Act of 2002 N=46 Other regulation - describe in the FIRST Other regulation - describe in the SECOND box below N=9 If "Other regulation" -- Please describe FIRST additional regulation: If second "Other regulation" -- Please describe SECOND additional regulation: 42. Would you favor or oppose the following actions to increase competition to provide audit, review, and attest services for large domestic or multinational public clients? 
Government action to break up the Big 4 Government action to assist the non-Big Let market forces operate without Other action - describe in the FIRST box Other action - describe in the SECOND If "Other action" -- Please describe FIRST additional action: If second "Other action" -- Please describe SECOND additional action: 43. Do you have any additional comments on any of the issues covered by this survey? Please use the space below to make additional comments or clarifications of any answers you gave in this survey. 44. When your answers to the "Consolidation in the Public Accounting Profession" part of the survey are final and ready to be used by GAO, please click the "Completed This Part of Survey" button below. 1. Completed This Part of Survey 100% 2. 45. Please click the "Next Section" button at the bottom of the page to continue with the questionnaire, or click the link below to return to the main menu.

This survey is being conducted by the U.S. General Accounting Office (GAO), the independent research and investigative arm of Congress, as part of its study of consolidation in the accounting profession. Please complete this questionnaire specifically for the company named in the cover letter, and not for any subsidiaries or related companies. This questionnaire should be completed by the official who can provide historical information on mergers, operations, and finance, as well as report the corporate policy of this firm. To provide a thorough, fair, and balanced report to Congress, it is essential that we obtain the experiences and viewpoints of a representative sample of public companies. Your company was selected randomly from the 2002 list of Fortune 1000 companies; it is important for every selected firm to respond to ensure the validity of our research. Please return the completed questionnaire in the enclosed envelope within 10 business days of receipt. If the envelope is misplaced, you may contact us: Telephone: (202) 512-3608 Email: [email protected] Thank you for participating in this survey. Page 1 of 15 1.
Approximately what percentage of your company’s total revenues are derived from operations within and outside of the United States? Please enter percentages totaling 100%. _____% of our revenues are derived from operations within the United States _____% of our revenues are derived from operations outside of the United States 2. If your company was founded in the past decade, in what year was it founded? Please enter 4-digit year. 3. What is the name of your company’s current auditor of record and when did this firm become your auditor of record? Please enter name of auditor and 4-digit year hired. _______________________________ First year employed as auditor 4. What type of services does your auditor of record currently provide to your company? Please check all that apply. 1. Only audit and attest services 2. Tax-related services (e.g., tax preparation) 3. Assistance with company debt and equity offerings (e.g. comfort letters) N=98 4. Other services - please describe: _____________________________________________ Page 2 of 15 5. Approximately how much were the total annual fees that your company paid to your auditor of record for audit and attest services during your last fiscal year? Please enter approximate dollar figure. Range=$13,807-$62,000,000 6. Starting in 1987, when consolidation of the largest accounting firms began, or since your company was founded (if that occurred after 1987), has your company employed more than one auditor of record? Please check one box. 1. Yes - how many: ________ 2. No SKIP TO NEXT PAGE 7. What were the names and tenures of the most recent previous auditor(s) of record your company has employed since 1987? Please name up to two of the most recent previous auditors and years employed. from (year)_____ to (year)_______ ________________________ Name of auditor from (year)_____ to (year)_______ 8. Which of the following reasons explain why your company changed auditor of record one or more times since 1987? 
Please check all that apply. 1. Our company had a mandatory rotation policy 2. Expansion of our company required an auditor of record that could meet new demands 3. New regulations forbidding use of auditor for management consulting and other services 4. Fees for audit and attest services 5. Concern about reputation of our auditor of record 6. Our auditor of record was going out of business 7. Our auditor of record resigned 8. Relationship with our auditor of record was no longer working 9. Page 3 of 15 9. If your company previously employed Arthur Andersen as your auditor of record and switched to another firm in the past two years, did you switch to the firm to which your previous Arthur Andersen partner moved? Please check one box. 1. Not applicable – did not employ Arthur Andersen 2. Yes, switched to partner’s new firm 3. No, switched to other firm – Consolidation in the Accounting Profession We are focusing on the trend toward consolidation that has occurred in the public accounting profession starting in 1987, when consolidation activity among the largest firms began, primarily the consolidation of the “Big 8” into the “Big 4.” This section asks you to consider how your company’s relationship with its auditor of record, and the audit services it provides, has changed over this time frame. Although a number of factors may have influenced these changes, we would like you to assess the influence of consolidation in the accounting profession in particular. Please base your answers on your experience in the past decade or, if this is not possible, on the time frame that reflects your experience. 10. How have the fees that your company pays for audit and attest services changed over the past decade? If it is not possible for you to answer for the past decade, please base your answer on the time frame that best reflects your experiences. Please check one box. 1. 2. 3. 4. 5. Page 4 of 15 11. 
If your company changed auditors within the last two years, how have the fees your company pays your current auditor of record changed compared to the fees paid to your previous auditor? Please check one box. 1. Not applicable – have not changed auditors ----------------------------------------------------------- 2. 3. 4. 5. 6. 12. In your opinion, how has the consolidation of the largest accounting firms over the past decade influenced the fees that your company pays for auditing and attest services? 1. 2. 3. 4. 5. ---------------------------------------------------------- 6. 13. Audit quality is often thought to include the knowledge and experience of audit firm partners and staff, the capability to efficiently respond to a client’s needs, and the ability and willingness to appropriately identify and surface material reporting issues in financial reports. Do you believe that the overall quality of audit services your company receives has gotten better or worse over the past decade? Please check one box. 1. 2. 3. 4. 5. ---------------------------------------------------------- 6. Page 5 of 15 14. If your company changed auditors within the last two years, do you believe that the overall quality of audit services your company receives from your current auditor is better or worse than the overall quality of audit services your company received from its previous auditor? Please check one box. 1. Not applicable – have not changed auditors ---------------------------------------------------------- 2. 3. 4. 5. 6. ----------------------------------------------------------- 7. 15. In your opinion, how has the consolidation of the largest accounting firms over the past decade influenced the quality of audit and attest services that your company receives? 1. 2. 3. 4. 5. ---------------------------------------------------------- 6. 16. 
If you have experienced a change in audit quality, please explain: If you have not experienced a change, please enter “none.” Page 6 of 15 17. Auditor independence is often thought to relate to the accounting firm’s ability and willingness to appropriately deal with (a) financial reporting issues that may indicate materially misstated financial statements; (b) the appearance of independence in terms of the other services a firm is allowed to and chooses to provide to their clients; and (c) how much influence clients appear to have in the audit decisions. Do you believe that your company’s auditor(s) has become more or less independent over the past decade? Please check one box. 1. 2. 3. 4. 5. ---------------------------------------------------------- 6. 18. If your company changed auditors within the last two years, do you believe that your current auditor is more or less independent than your previous auditor? Please check one box. 1. Not applicable – have not changed auditors ---------------------------------------------------------- 2. 3. 4. 5. 6. ---------------------------------------------------------- 7. Page 7 of 15 19. In your opinion, how has the consolidation of the largest accounting firms over the past decade influenced the ability of your auditor of record to maintain independence in the audit and attest services it provides to your company? Please check one box. 1. 2. 3. 4. 5. ---------------------------------------------------------- 6. 20. How satisfied are you with your current auditor of record? 1. 2. 3. 4. 5. ---------------------------------------------------------- 6. N=146 Other - please describe: Page 11 of 15 28. Has the consolidation of the largest accounting firms over the past decade made it harder or easier for your company to satisfactorily select an auditor and maintain a relationship with that auditor? Please check one box. 1. 2. 3. 4. 5. ---------------------------------------------------------- 6. 29. 
How, if at all, has the consolidation of the largest accounting firms over the past decade affected competition in the provision of audit and attest services? If it is not possible for you to answer for the past decade, please base your answer on the time frame that best reflects your experiences. Please check one box. 1. 2. 3. Little or no effect SKIP TO QUESTION 31 4. 5. ---------------------------------------------------------- 6. 30. How, if at all, has this change in competition affected each of the following areas? (1) (2) (3) (4) (5) (6) N=71 Other - please describe: Page 12 of 15 31. What do you believe is the minimum number of accounting firms necessary to provide audit and attest services to large national and multinational public companies? Please enter a number. 32. What do you believe is the optimal number of accounting firms for providing audit and attest services to large national and multinational public companies? Please enter a number. Page 13 of 15 33. Do you suggest that any actions be taken to increase competition in the provision of audit and attest services for large national and multinational public companies? Please check one box. 1. 2. 3. 34. Would you favor or oppose the following actions to increase competition to provide audit and attest services for large national and multinational clients? Please check one box in each row. (1) (2) (3) (4) (5) (6) Page 14 of 15 35. Do you have any additional comments on any of the issues covered by this survey? Please use the space below to make additional comments or clarifications of any answers you gave in this survey. Thank you for your assistance with this survey! Please return it in the envelope provided.

Arthur Andersen Case Study

Background

In 2001, Arthur Andersen LLP (Andersen) was the fourth-largest public accounting firm in the United States, with global net revenues of over $9 billion.
On March 7, 2002, Andersen was indicted by a federal grand jury and charged with obstructing justice for destroying evidence relevant to investigations into the 2001 financial collapse of Enron. At the time of its indictment, Andersen performed audit and attest services for about 2,400 public companies in the United States, including many of the largest public companies in the world. In addition, Andersen served private companies and provided additional professional services such as tax and consulting services. This appendix is an analysis of 1,085 former Andersen public company clients that switched to a new public accounting firm between October 1, 2001, and December 31, 2002. In addition to identifying the new public accounting firms of the former Andersen clients, we determined which firms attracted the largest clients and how many Andersen clients switched to non-Big 4 firms.

Most Andersen Clients Switched to a Big 4 Firm

Between October 2001 and December 2002, 1,085 public companies audited by Andersen switched to a new auditor of record. As figure 10 illustrates, of the 1,085 companies reviewed, 938 switched to one of the Big 4 (87 percent), and 147 switched to a non-Big 4 firm (13 percent). Among the Big 4, Ernst & Young attracted the largest number of former Andersen clients, followed by KPMG, Deloitte & Touche, and PricewaterhouseCoopers (see fig. 11). Of the former Andersen clients who switched to a non-Big 4 firm, 45 switched to Grant Thornton (4 percent) and 23 switched to BDO Seidman (2 percent).

Largest Clients Switched to Big 4 Firms

We found that almost all former Andersen clients with total assets above $5 billion switched to a Big 4 firm. The one exception, Global Crossing, switched to Grant Thornton. We found that the Big 4 audited approximately 98 percent of the total assets of the 1,085 former Andersen clients that switched auditors between October 1, 2001, and December 31, 2002.
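The switching tallies above (87 percent of companies, and 98 percent of audited assets, going to the Big 4) reduce to simple share computations over a list of successor engagements. The following is a minimal sketch under an assumed record layout; the sample records are illustrative, not the actual GAO data.

```python
# Hypothetical sketch of the switching tally described above: given
# (successor_firm, client_total_assets) records for former Andersen clients,
# compute the Big 4 share by client count and by audited assets. The firm
# names are real; the sample records below are illustrative, not GAO data.
BIG4 = {"Ernst & Young", "KPMG", "Deloitte & Touche", "PricewaterhouseCoopers"}

def big4_shares(records):
    """Return (share_of_clients, share_of_assets) captured by Big 4 firms."""
    n_total = n_big4 = 0
    assets_total = assets_big4 = 0.0
    for firm, assets in records:
        n_total += 1
        assets_total += assets
        if firm in BIG4:
            n_big4 += 1
            assets_big4 += assets
    return n_big4 / n_total, assets_big4 / assets_total

# Illustrative records: (successor firm, client's total assets in dollars)
sample = [
    ("Ernst & Young", 4.0e9),
    ("KPMG", 3.0e9),
    ("Deloitte & Touche", 2.0e9),
    ("Grant Thornton", 1.0e9),
]
count_share, asset_share = big4_shares(sample)  # 0.75 by count, 0.9 by assets
```

An asset-weighted share like the 98 percent figure follows from the same tally: each client is weighted by its total assets rather than counted once, which is why the asset share can run well above the count share when the largest clients choose Big 4 firms.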
As illustrated in figure 12, PricewaterhouseCoopers, although attracting the smallest number of Andersen clients (159), tended to attract the largest clients based on average total company asset size ($3.9 billion). Comparatively, former Andersen clients that switched to Deloitte & Touche and KPMG averaged total assets of $3.0 billion and $2.4 billion, respectively. In addition, Ernst & Young, although attracting the largest number of Andersen clients, tended to attract smaller clients based on average total company asset size ($1.5 billion). We also analyzed former Andersen clients by asset size and determined how many of them switched to Big 4 versus other firms. As table 8 illustrates, the vast majority of the largest former Andersen clients switched to one of the Big 4 firms. With the exception of the smallest asset class, 90 percent or more of the former Andersen clients switched to one of the Big 4 firms. We also looked at the movement of former Andersen clients to the Big 4 firms within the asset range groups. As table 9 shows, KPMG was hired by the highest percentage of former Andersen clients in both the largest and smallest asset groups, while Ernst & Young was hired by the highest percentage of former Andersen clients with assets between $100 million and $5 billion.

Thirteen Percent of Former Andersen Clients Switched to Non-Big 4 Firms

Of the former Andersen clients, 147 (13 percent) switched to a non-Big 4 firm. Of these 147 companies, 31 percent switched to Grant Thornton and 16 percent switched to BDO Seidman (fig. 11). The average asset size of a company that switched to a non-Big 4 firm was $309 million, which is approximately $2.2 billion less than the average asset size of a company that switched to a Big 4 firm. As table 10 illustrates, the average asset size of a company that switched to Grant Thornton was $644 million, and the average asset size of a company that switched to BDO Seidman was $54 million.
The 147 public company clients that did not engage a Big 4 firm switched to one of 52 non-Big 4 firms.

Former Andersen Clients by Industry Sectors

Of the 1,085 former Andersen clients, we were able to classify 926 companies into 56 different industry sectors. We observed that former Andersen clients in 22 industry sectors stayed with a Big 4 firm, while former Andersen clients in 34 industry sectors switched to a non-Big 4 firm. Within some industries certain accounting firms were hired more often than others. For example, Ernst & Young attracted former Andersen clients in more industry sectors overall than any other firm (49 of the 56 industry sectors). We also observed that within 16 industries KPMG attracted more former Andersen clients than other firms (see table 11). It is important to review this analysis in the context of its limitations. Specifically, defining markets by SIC codes can exaggerate the level of concentration because, like the audit market, a few large companies dominate many industry sectors (see table 2). To mitigate the potential for bias, we limited our analysis to the 2-digit SIC codes rather than the 4-digit codes. There are additional methodological issues with defining markets by SIC codes. First, the audited companies' lines of business, not the business of the accounting firms, define the markets. Second, some companies that could be included in a particular industry are not included because no SIC code identifier was provided in the database that we used. Moreover, assignment of a company to a particular SIC code sometimes involves judgment, which may create bias.

Analysis of Big 4 Firms' Specialization by Industry Sector

The concentration that exists across accounting firms that audit public companies is even more pronounced in certain industry sectors. For example, in certain industry sectors, two firms audit over 70 percent of the assets.
Because public companies generally prefer auditors with established records of industry expertise and requisite capacity, their viable choices are even more limited than the Big 4. This appendix provides additional descriptive statistics on selected industries in the U.S. economy using U.S. Standard Industrial Classification (SIC) codes—numerical codes designed by the federal government to create uniform descriptions of business establishments.

Limitations of SIC Analysis

The purpose of this analysis is to illustrate that certain firms dominate particular industries or groups, and companies may consider only these firms as having the requisite expertise to provide audit and attest services for their operations. However, it is important to review this analysis in the context of its limitations. Specifically, defining markets by SIC codes can exaggerate the level of concentration because, like the audit market, a few large companies dominate many industry sectors (see table 2). For example, in the petroleum industry, we were able to identify only 25 publicly listed companies in 2002, 20 of which were audited by the Big 4. Because PricewaterhouseCoopers and Ernst & Young audit the six largest companies, they audit 95 percent of the assets in this industry. To mitigate the potential for bias, we limited our analysis to the 2-digit SIC codes rather than the more specific 4-digit codes. There are additional methodological issues with defining markets by SIC codes. First, the audited companies' lines of business, not the business of the accounting firms, define the markets. Second, some companies that could be included in a particular industry are not included because no SIC code identifier was provided in the database that we used. Moreover, assignment of a company to a particular SIC code sometimes involves judgment, which may create bias.
Finally, the methodology assumes different accounting firms are in separate markets and cannot easily move from auditing one type of industry to another. The total assets data come from the 1997 and 2002 editions of Who Audits America, which has detailed information on public companies, including current and former auditor and SIC code. Because some companies are not classifiable establishments, because others do not list SIC codes since they operate in many lines of business, and because the necessary information was missing in some cases, the data include only companies that had a 4-digit, 3-digit, or 2-digit SIC code in the 1997 and 2002 versions of the database (8,724 companies in 1997 and 9,569 companies in 2002). All SIC codes were converted to 2-digit codes (major group) for analysis. Table 12 lists and defines each SIC major economic group analyzed here and in the body of the report. In computing concentration ratios for each accounting firm in the various industry groups, we used total assets audited. However, the results generally are not sensitive to the use of a different measure (such as total sales).

Industry Specialization Can Limit Public Company Choice

As figure 13 shows, in selected industries specialization can often limit the number of auditor choices to two—in each case, two auditors account for over 70 percent of the total assets audited in 2002. As a result, it might be difficult for a large company to find an auditor with the requisite industry expertise and staff capacity. Figure 13 also shows that while a few firms dominated certain industries in 1997, before the merger of Price Waterhouse and Coopers & Lybrand and the dissolution of Arthur Andersen, there were fewer industries in which two firms accounted for more than 70 percent of the total sales audited; in most cases, at least one of the remaining Big 6 firms audited a significant share (greater than 10 percent) of the industry.
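As a concrete illustration of the method described above, the sketch below truncates SIC codes to their 2-digit major group and computes the share of audited assets held by the two largest auditors in a group. The firm names are real, but the company data and asset figures are invented for illustration only:

```python
# Sketch of the asset-based concentration calculation described above.
# Company data are hypothetical; only the method mirrors the text.

def to_major_group(sic_code):
    """Truncate a 4-, 3-, or 2-digit SIC code to its 2-digit major group."""
    return sic_code[:2]

def two_firm_concentration(companies):
    """Share of total audited assets held by the two largest auditors."""
    assets_by_auditor = {}
    for auditor, assets in companies:
        assets_by_auditor[auditor] = assets_by_auditor.get(auditor, 0) + assets
    top_two = sorted(assets_by_auditor.values(), reverse=True)[:2]
    return sum(top_two) / sum(assets_by_auditor.values())

# Hypothetical petroleum-group companies: (auditor, audited assets in $ millions)
petroleum = [
    ("PricewaterhouseCoopers", 26_000),
    ("Ernst & Young", 21_500),
    ("KPMG", 1_500),
    ("Deloitte & Touche", 1_000),
]
print(to_major_group("2911"))                       # "29" (major group for petroleum refining)
print(round(two_firm_concentration(petroleum), 2))  # 0.95 with these invented figures
```

A two-firm share above 0.70 corresponds to the "two auditors account for over 70 percent of assets" situation the report describes.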
The dissolution of Andersen in 2002 and the merger of Price Waterhouse and Coopers & Lybrand in 1998 appear to have affected many industries, including the primary metals, general building contractors, furniture and fixtures, petroleum and coal products, transportation by air, and electric, gas, and sanitary services groups included in figure 13. Moreover, figure 14 shows the remaining major economic groups with 20 or more companies in which Andersen audited roughly 25 percent or more of the total assets in the industry or in which Price Waterhouse and Coopers & Lybrand each had a significant presence in 1997. As the figure indicates, in many of these sectors Ernst & Young and Deloitte & Touche acquired significant market share by 2002. Because the Big 4 firms have increased their presence in these industries formerly dominated by Andersen or Price Waterhouse and Coopers & Lybrand, the number of firms with industry expertise appears to have remained unchanged in most cases. The merger of Price Waterhouse and Coopers & Lybrand did not affect choice in most industries because, as figure 13 and figure 14 show, the two firms generally dominated different industries. This pattern is consistent with one of the factors that contributed to the mergers: the desire to broaden industry expertise. However, some industries (petroleum and coal products, communications, primary metals, and fabricated metals, among others) may have experienced a reduction in the number of viable alternatives for companies that consider industry expertise important when choosing an auditor. Table 13 provides a list of industries defined by 2-digit SIC codes with 25 or more companies and indicates where each of the Big 4 firms audits at least 10 percent of the total industry assets. As the table illustrates, there are very few industries where all four of the top-tier firms have a major presence.
In many industries, only two or three of the Big 4 firms audit 10 percent or more of the total assets in an industry. Of the 49 industries represented, fewer than one-third (16) have a significant presence (10 percent or more) of all four firms. Moreover, as table 14 illustrates, if the threshold is increased to 25 percent or more of total assets audited, then almost all (48 of 49) of the industries have a significant presence of only one or two firms.

GAO Contacts and Staff Acknowledgments

In addition to those individuals named above, Martha Chow, Edda Emmanuelli-Perez, Lawrance Evans, Jr., Marc Molino, Michelle Pannor, Carl Ramirez, Barbara Roesmann, Derald Seid, Jared Stankosky, Paul Thompson, Richard Vagnoni, and Walter Vance made key contributions to this report.
The audit market for large public companies is an oligopoly, with the largest firms auditing the vast majority of public companies and smaller firms facing significant barriers to entry into the market. Mergers among the largest firms in the 1980s and 1990s and the dissolution of Arthur Andersen in 2002 significantly increased concentration among the largest firms, known as the "Big 4." These four firms currently audit over 78 percent of all U.S. public companies and 99 percent of all public company sales. This consolidation and the resulting concentration have raised a number of concerns. To address them, the Sarbanes-Oxley Act of 2002 mandated that GAO study (1) the factors contributing to the mergers; (2) the implications of consolidation on competition and client choice, audit fees, audit quality, and auditor independence; (3) the impact of consolidation on capital formation and securities markets; and (4) barriers to entry faced by smaller accounting firms in competing with the largest firms for large public company audits. Domestically and globally, there are only a few large firms capable of auditing large public companies, which raises potential choice, price, quality, and concentration risk concerns. A common concentration measure used in antitrust analysis, the Hirschman-Herfindahl Index (HHI), indicates that the largest firms have the potential for significant market power following mergers among the largest firms and the dissolution of Arthur Andersen. Although GAO found no evidence of impaired competition to date, the significant changes that have occurred in the profession may have implications for competition and public company choice, especially in certain industries, in the future. Existing research on audit fees did not conclusively identify a direct correlation with consolidation.
GAO found that fees have started to increase, and most experts expect the trend to continue as the audit environment responds to recent and ongoing changes in the audit market. Research on quality and independence did not link audit quality and auditor independence to consolidation and generally was inconclusive. Likewise, GAO was unable to draw clear linkages between consolidation and capital formation but did observe potential impacts for some smaller companies seeking to raise capital. However, given the unprecedented changes occurring in the audit market, GAO observes that past behavior may not be indicative of future behavior, and these potential implications may warrant additional study in the future, including examination of how to prevent further consolidation and maintain competition. Finally, GAO found that smaller accounting firms faced significant barriers to entry--including lack of staff, industry and technical expertise, capital formation, global reach, and reputation--into the large public company audit market. As a result, market forces are not likely to result in the expansion of the current Big 4. Furthermore, certain factors and conditions could cause a further reduction in the number of major accounting firms.
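The Hirschman-Herfindahl Index mentioned in the summary above is simply the sum of the squared percentage market shares of every firm in a market. A minimal sketch, using invented shares rather than the report's data:

```python
def hhi(shares_percent):
    """Hirschman-Herfindahl Index: sum of squared percentage market shares.
    Ranges from near 0 (many tiny firms) to 10,000 (a single monopolist)."""
    return sum(s ** 2 for s in shares_percent)

# Hypothetical markets: four equal firms vs. one dominant firm
print(hhi([25, 25, 25, 25]))  # 2500
print(hhi([70, 10, 10, 10]))  # 5200
```

For reference, the antitrust merger guidelines of the era generally treated an HHI above 1,800 as indicating a highly concentrated market, so even four equal-sized firms exceed that threshold.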
Background

The LDA, as amended by HLOGA, requires lobbyists to register with the Secretary of the Senate and the Clerk of the House and file quarterly reports disclosing their lobbying activity. Lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point (as opposed to separately with the Secretary of the Senate and the Clerk of the House as was done prior to HLOGA). Registrations and reports must be publicly available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. No specific requirements exist for lobbyists to generate or maintain documentation in support of the information disclosed in the reports they file. However, guidance issued by the Secretary of the Senate and the Clerk of the House recommends that lobbyists retain copies of their filings and supporting documentation for at least 6 years after they file their reports. The LDA requires that the Secretary of the Senate and the Clerk of the House provide guidance and assistance on the registration and reporting requirements of the LDA and develop common standards, rules, and procedures for compliance with the LDA. The Secretary of the Senate and the Clerk of the House review the guidance semiannually. The guidance was last reviewed and revised in February 2013. The guidance provides definitions of terms in the LDA, elaborates on the registration and reporting requirements, includes specific examples of different scenarios, and provides explanations of why certain scenarios prompt or do not prompt disclosure under the LDA. The Secretary of the Senate and Clerk of the House previously told us they consider information we report on lobbying disclosure compliance when they periodically update the guidance.
The LDA defines a lobbyist as an individual who is employed or retained by a client for compensation, who has made more than one lobbying contact (written or oral communication to a covered executive or legislative branch official made on behalf of a client), and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who lobby on behalf of a client other than that person or entity. Lobbying firms are required to file a registration with the Secretary of the Senate and the Clerk of the House for each client if the firms receive or expect to receive over $3,000 in income or $12,500 in incurred expenses from that client for lobbying activities. Lobbyists are also required to submit a quarterly report, also known as an LD-2 report, for each registration filed. The registration and subsequent LD-2 reports contain the following elements, if applicable: the name of the organization, lobbying firm, or self-employed individual that is lobbying on that client’s behalf; a list of individuals who acted as lobbyists on behalf of the client during the reporting period; whether any lobbyists served as covered executive branch or legislative branch officials in the previous 20 years; the name of and further information about the client, including a general description of its business or activities; information on the specific lobbying issue areas and corresponding general issue codes used to describe lobbying activities; any foreign entities that have an interest in the client; whether the client is a state or local government; information on which federal agencies and houses of Congress the lobbyist contacted on behalf of the client during the reporting period; the amount of income related to lobbying activities received from the client (or expenses for organizations with in-house lobbyists) during the quarter, rounded to the
nearest $10,000; and a list of constituent organizations that contribute more than $5,000 for lobbying in a quarter and actively participate in planning, supervising, or controlling lobbying activities, if the client is a coalition or association. The LDA, as amended, also requires lobbyists to report certain contributions semiannually in the LD-203 report. These reports must be filed 30 days after the end of a semiannual period by each lobbying firm registered to lobby and by each individual listed as a lobbyist on a firm’s lobbying reports. The lobbyists or lobbying firms must list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which they made contributions equal to or exceeding $200 in the aggregate during the semiannual period; report contributions made to presidential library foundations and presidential inaugural committees; report funds contributed to pay the cost of an event to honor or recognize a covered official, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official or to pay the costs of a meeting or other event held by or in the name of a covered official; and certify that they have read and are familiar with the gift and travel rules of the Senate and House and that they have not provided, requested, or directed a gift or travel to a member, officer, or employee of Congress that would violate those rules. The Secretary of the Senate and the Clerk of the House, along with the U.S. Attorney’s Office for the District of Columbia, are responsible for ensuring compliance with the LDA. The Secretary of the Senate and the Clerk of the House notify lobbyists or lobbying firms in writing that they are not complying with reporting requirements in the LDA, and subsequently refer to the Office those lobbyists who fail to provide an appropriate response.
The Office researches these referrals and sends additional noncompliance notices to the lobbyists, requesting that the lobbyists file reports or correct reported information. If the Office does not receive a response after 60 days, it decides whether to pursue a civil or criminal case against each noncompliant lobbyist. A civil case could lead to penalties up to $200,000, while a criminal case—usually pursued if a lobbyist’s noncompliance is found to be knowing and corrupt—could lead to a maximum of 5 years in prison.

Documentation to Support Some LD-2 Report Elements Varied, but Most Newly Registered Lobbyists Met Disclosure Reporting Requirements

Lobbyists Provided Documentation for Most LD-2 Reports, but Documentation for Some Report Elements Did Not Match Their Disclosure Reports

As in our prior reviews, most lobbyists reporting $5,000 or more in income or expenses were able to provide documentation to varying degrees for the reporting elements in their disclosure reports. Lobbyists for an estimated 97 percent of LD-2 reports (97 out of 100) were able to provide documentation to support the income and expenses reported for the third and fourth quarters of 2011 and the first and second quarters of 2012. Lobbyists most commonly provided documentation in the form of invoices and contracts. Last year, lobbyists were able to provide documentation for income and expenses for an estimated 93 percent of LD-2 reports for the quarters under review. Table 1 compares, for LD-2 reports from 2010 through 2012, the number of reports whose documented income and expenses differed from the amounts reported by at least $10,000 and the number with rounding errors. Figure 1 illustrates the extent to which lobbyists were able to provide documentation to support selected elements on the LD-2 reports.
Of the 100 LD-2 reports in our sample, 51 disclosed lobbying activities at executive branch agencies, with lobbyists for 30 of these reports providing documentation to support lobbying activities at all agencies listed. Table 2 lists common reasons why some lobbyists we interviewed said they did not have documentation for some of the elements of their LD-2 reports. The LDA requires a lobbyist to disclose previously held covered positions when first registering as a lobbyist for a new client, either on the LD-1 or on the LD-2 quarterly filing when added as a new lobbyist. Based on our analysis, we estimate that a minimum of 15 percent of all LD-2 reports did not properly disclose one or more previously held covered positions, compared to 11 percent for 2011 and 9 percent for 2010. These results are generally consistent from 2010 through 2012. Of those that failed to disclose properly, 11 LD-2 amendments and 2 LD-1 amendments were filed to properly disclose covered positions, and two lobbying firms addressed the omitted covered positions on subsequent LD-2 filings. Two lobbyists said they were confused as to whether intern positions are covered positions. One of those lobbyists amended the LD-2 report to disclose an unpaid internship. However, officials from the Office of the Secretary of the Senate and the Clerk of the House clarified that unpaid internships are not considered covered official positions and are not required to be disclosed. Two other lobbyists in our sample said they were unaware of the HLOGA requirement to disclose covered positions held within the last 20 years of first acting as a lobbyist for a client. Lobbyists for an estimated 85 percent (85 of 100) of LD-2 reports filed year-end 2011 or midyear 2012 LD-203 contribution reports for all lobbyists and lobbying firms listed on the report as required. This finding is consistent with previous reports.
All individual lobbyists and lobbying firms reporting specific lobbying activity are required to file LD-203 reports semiannually, even if they have no contributions to report, because they must certify compliance with the gift and travel rules.

More Lobbying Firms Indicated That They Planned to Amend Their LD-2 Reports as a Result of GAO’s Review

Compared to our last review, more lobbying firms indicated that they planned to amend their LD-2 reports as a result of our review. This year, for 28 of the LD-2 reports in our sample, lobbyists indicated they planned to amend their LD-1 or LD-2 reports as a result of our review. As of March 2013, 16 of those 28 lobbying firms had filed an amended LD-2 report and 3 lobbying firms had amended their LD-1 report to make changes to information that was previously reported. Last year, for 17 of the LD-2 reports in our sample, lobbyists indicated they planned to amend their LD-2 reports, and as of March 2012, 9 had done so. Table 3 lists reasons lobbying firms in our sample cited for planning to amend their LD-1 or LD-2 reports and the number of amendments filed. In addition, 2 lobbying firms did not indicate plans to file an amendment at the time of our review but later filed amended reports after meeting with us, to add an issue area code and remove a lobbyist. Similar to our 2012 report, lobbying firms filed amendments for 3 of the LD-2 reports in our sample after being notified that their LD-2 reports were selected as part of our random sample, but prior to our review.

Some LD-203 Contribution Reports Omitted Political Contributions Listed in the FEC Database

As part of our review, we compared contributions listed on lobbyists’ and lobbying firms’ LD-203 reports against political contributions reported in the FEC database to identify whether political contributions were omitted on LD-203 reports in our sample. The sample of LD-203 reports we reviewed contained 80 reports with contributions and 80 reports without contributions.
We estimate that, overall, a minimum of 6 percent of reports failed to disclose one or more contributions. Table 4 compares the number of LD-203 reports that omitted political contributions for 2010 through 2012. To determine whether new registrants were meeting the requirement to file, we matched newly filed registrations in the third and fourth quarters of 2011 and the first and second quarters of 2012 from the House Lobbyists Disclosure Database to their corresponding quarterly disclosure reports, using an electronic matching algorithm that allows for misspellings and other minor inconsistencies between the registrations and reports. Of the 3,074 new registrants we identified from fiscal year 2012, we were able to match 2,753 reports filed in the first quarter in which they were registered. This is a match rate of 90 percent of registrations, which is consistent with our prior reviews.

While Most Lobbying Firms Reported That the Disclosure Requirements Were Very Easy or Somewhat Easy to Meet, a Few Lobbyists Reported Challenges in Complying with the Act

As part of our review, 90 different lobbying firms were included in our sample. Of the 90 firms, 32 reported that the disclosure requirements were “very easy” to comply with, 39 reported that they were “somewhat easy,” and 19 reported that they were “somewhat difficult” or “very difficult.” Last year, we also asked the lobbying firms in our sample if they found the disclosure requirements easy to meet. Of those 90 firms, 61 agreed that the requirements were “easy” to meet, 25 reported that the requirements were “somewhat easy” to meet, and 4 reported that the disclosure requirements were “not easy” to meet. In addition, some lobbyists provided feedback identifying specific challenges to compliance, as shown in figure 2.
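The electronic matching of registrations to disclosure reports described above must tolerate misspellings and minor inconsistencies. GAO does not publish its algorithm, but the general fuzzy-matching technique can be sketched with Python's standard difflib; the firm names and threshold below are invented for illustration:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Case-insensitive similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match(registrant, report_filers, threshold=0.85):
    """Pair a registration with its most similar report filer, if close enough."""
    score, name = max((similarity(registrant, f), f) for f in report_filers)
    return name if score >= threshold else None

# Hypothetical filer names from quarterly reports
filers = ["Acme Lobbying LLC", "Smith & Jones Associates", "Capitol Advocates"]
print(best_match("Acme Lobying LLC", filers))  # matches despite the misspelling
print(best_match("Unrelated Group", filers))   # None: no filer is close enough
```

The threshold trades false matches against missed ones; a production matcher would also normalize punctuation and common suffixes (LLC, Inc.) before comparing.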
The most frequently cited challenges were differentiating between lobbying and non-lobbying activities and determining the most appropriate issue code to use. Most lobbyists we interviewed rated the terms associated with LD-2 reporting requirements as “very easy” or “somewhat easy” to understand with regard to meeting their reporting requirements. Figure 3 shows how lobbyists rated the ease of understanding the terms associated with LD-2 reporting.

U.S. Attorney’s Office Actions to Enforce the LDA

The Office’s Authorities, Processes, and Resources to Enforce LDA Compliance

The Office stated that it has sufficient authority and resources to enforce compliance with LDA requirements, including imposing civil or criminal penalties for noncompliance. Noncompliance with LDA reporting requirements refers to a lobbying firm’s failure to file its quarterly LD-2 disclosure reports and semiannual LD-203 reports on certain political contributions by the filing deadline. In our 2012 report, we described the Office’s process for addressing referrals received from the Secretary of the Senate and Clerk of the House. Additionally, we described the Office’s staff and use of its LDA database to pursue enforcement actions and centralize the process of checking and resolving referrals. The LDA database allows the Office to track when LD-2 and LD-203 referrals are received, record reasons for referrals, record actions taken to resolve them, and assess the results of actions taken. To enforce LDA compliance, the Office has primarily focused on sending letters to lobbyists who have potentially violated the LDA by not filing disclosure reports as required. The letters request that lobbyists comply with the law by promptly filing the appropriate disclosure reports and inform lobbyists of potential civil and criminal penalties for not complying.
In addition to sending letters, a contractor sends e-mails and makes phone calls to lobbyists to inform them of their need to comply with LDA reporting requirements. Not all referred lobbyists receive noncompliance letters, e-mails, or phone calls because some of the lobbyists have terminated their registrations or filed the required financial disclosure reports before the Office received the referral. Typically, lobbyists resolve their noncompliance issues by filing the reports or terminating their registration. As we previously reported, resolving referrals can take anywhere from a few days to years, depending on the circumstances. During this time, the Office continues to monitor and review all outstanding referrals and uses summary reports from the database to track the overall number of referrals that become compliant as a result of receiving an e-mail, phone call, or noncompliance letter. According to officials from the Office, more referred lobbyists are being contacted by e-mail and phone, which has decreased the number of noncompliance letters the Office sends to lobbyists. Officials from the Office stated that the majority of these e-mails and calls result in the registrant becoming compliant without the Office sending a letter. Currently, the system collects information on contacts made by e-mail and phone in the notes section of the referral entry in the database but does not automatically tabulate the number of e-mails and phone calls to lobbyists, as it does for letters sent. Officials stated they would consider developing a mechanism for tracking e-mails and phone calls.

Status of LD-2 Enforcement Efforts for the 2009, 2010, 2011, and 2012 Reporting Periods

As of March 5, 2013, the Office had received approximately 2,062 referrals from both the Secretary of the Senate and the Clerk of the House for noncompliance with LD-2 requirements for the 2009, 2010, 2011, and 2012 reporting periods.
Table 5 shows the number of referrals the Office received and the number of noncompliance letters the Office sent during these reporting periods. The number of referrals received will not match the number of letters sent because some referred lobbyists receive a phone call or e-mail instead of a noncompliance letter. Additionally, the count of letters sent includes letters to registrants who were referred for noncompliance involving more than one client. According to officials from the Office, the Office has not sent any noncompliance letters for the 2012 reporting period because it is still processing the referrals it received for prior reporting periods. As shown in figure 4, about 63 percent (1,311 of 2,062) of all the lobbyists who were referred by the Secretary of the Senate and Clerk of the House for noncompliance for the 2009, 2010, 2011, and 2012 reporting periods are now considered compliant because the lobbyists either filed their reports or terminated their registrations. In addition, some of the referrals were found to be compliant when the Office received the referral, and therefore no action was taken. This may occur when lobbyists have responded to the contact letters from the Secretary of the Senate and Clerk of the House after the Office has received the referrals. About 36 percent (734 of 2,062) of referrals are pending action because the Office was unable to locate the lobbyist, did not receive a response from the lobbyist, or plans to conduct additional research to determine if it can locate the lobbyist. The remaining 1 percent (17 of 2,062) of referrals did not require action or were suspended because the lobbyist or client was no longer in business or the lobbyist was deceased. The Office suspends enforcement actions against registrants that are repeatedly referred for not filing disclosure reports but do not have any lobbying activity.
The suspended registrants are periodically monitored to determine whether the registrants actively lobby in the future. As a part of this monitoring, the Office checks the lobbying disclosure databases maintained by the Secretary of the Senate and the Clerk of the House. Also, the Office’s Civil Division staff discusses the status of pending and suspended referrals with its contacts in the offices of the Secretary of the Senate and the Clerk of the House to determine whether to continue enforcement actions, which includes considering legal actions or dismissing certain referrals. As of March 5, 2013, the Office had also received approximately 2,472 referrals from the Secretary of the Senate and the Clerk of the House for noncompliance with LD-203 requirements for the 2009, 2010, and 2011 reporting periods. For LD-203 referrals, the Office sends noncompliance letters to the registered organizations and includes the names of the lobbyists who did not comply with the requirement to report federal campaign and political contributions and certify that they understand the gift rules. As of February 25, 2013, the Office had mailed LD-203 noncompliance letters for approximately 62 percent (482 of 773) of the referrals for the 2009 reporting period and 21 percent (270 of 1,296) of the referrals for the 2010 reporting period. According to officials from the Office, the Office is still processing the LD-203 referrals it received for the 2011 reporting period and has not yet sent noncompliance letters. Officials said they have not addressed the 2011 referrals because they have been focusing on the referrals for prior years. Table 6 shows the number of referrals the Office received for noncompliance with the LD-203 reports filed for the 2009, 2010, and 2011 reporting periods and the number of letters sent by the Office.
As shown in figure 5, about 45 percent (1,122 of 2,472) of the lobbyists who were referred by the Secretary of the Senate and Clerk of the House for noncompliance for the 2009, 2010, and 2011 reporting periods are now considered in compliance because the lobbyists either have filed their reports or have terminated their registrations. About 55 percent (1,349 of 2,472) of the referrals are pending action because the Office was unable to locate the lobbyist, did not receive a response from the lobbyist, or plans to conduct additional research to determine if it can locate the lobbyist. Many of the pending LD-203 referrals represent lobbyists who no longer lobby for the organizations affiliated with the referrals, even though these organizations may be listed on the original lobbyist registration. Office officials stated that they continue to experience challenges with increasing LD-203 compliance because the Office has little leverage to bring individual lobbyists into compliance. Office officials said that there have been complaints within the lobbying community regarding responsibility for responding to letters of noncompliance with LD-203 requirements. Firms are not responsible for an individual lobbyist’s failure to comply with the LD-203 disclosure requirement, nor are they required to provide contact information for the noncompliant lobbyist; nevertheless, Office officials stated that many firms have assisted them by providing contact information for lobbyists, and only a few firms have been unwilling to provide it. Officials said they have often suggested that registrants terminate or inactivate lobbyists from the client and firm registration when the lobbyists leave the firm. Many of the LD-203 referrals remain open in an attempt to locate individual lobbyists and may take years to resolve.
Enforcement Settlement Actions We previously reported that the Office developed a system to track lobbyists and lobbying firms that have a history of chronic noncompliance and have repeatedly been referred by the Senate and House for failing to file disclosure reports. Officials reported that as a result of the tracking system and the actions of staff assigned to these cases, the Office has been able to identify more noncompliant lobbyists for civil enforcement action. In 2011, the Office settled its first enforcement case since the enactment of HLOGA in 2007, reaching a $45,000 settlement with a lobbying firm. The firm has fully complied with its outstanding and ongoing reporting requirements. HLOGA increased the penalties for offenses committed after January 1, 2008. As stated earlier, a civil case could lead to penalties up to $200,000, while a criminal case—usually pursued if lobbyists’ noncompliance is found to be knowing and corrupt— could lead to a maximum of 5 years in prison. Officials reported that for the 2012 reporting period, the Office sent demand letters, made phone contacts or sent emails to eight registrants on the chronic offenders list. Demand letters list the number of times the registrant was referred by the Secretary of the Senate and Clerk of the House; describe the number of occasions that lobbying disclosure reports were not filed by the deadline; and request registrants to immediately file the outstanding reports and contact the Office within 10 days to resolve the matter. Four of the registrants filed the outstanding reports or terminated their registration after being contacted by an Assistant U.S. Attorney. Additionally, in September 2012, the Office reached settlement agreements with two of the registrants from the chronic offenders list. One firm agreed to pay $50,000 and the other $30,000 in civil penalties for repeatedly failing to file disclosure reports. 
As of March 2013, both firms have paid the fines in full and complied fully with their outstanding and ongoing reporting requirements. The Office sent demand letters to the remaining two registrants from the chronic offenders list on February 4, 2013. As of March 5, 2013, the two registrants have not responded to the demand letters. The Assistant U.S. Attorney is preparing a memorandum to request legal authority to pursue civil or criminal penalties against both registrants. Civil Division management will review this request and determine the appropriate action. The Office continues to monitor and review the chronic offenders list to determine whether to continue enforcement actions, which includes considering legal actions or dismissing certain cases. Agency Comments We provided a draft of this report to the Attorney General for review and comment. The Assistant U.S. Attorney for the District of Columbia responded on behalf of the Attorney General that the Department of Justice had no comments. We are sending copies of this report to the Attorney General, Secretary of the Senate, Clerk of the House of Representatives, and interested congressional committees and members. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected] . Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Appendix I: Objectives, Scope, and Methodology Consistent with the audit mandates in the Honest Leadership and Open Government Act (HLOGA), our objectives were to determine the extent to which lobbyists are able to demonstrate compliance with the Lobbying Disclosure Act of 1995 as amended (LDA) by providing documentation to support information contained on registrations and reports filed under the LDA; identify challenges and potential improvements to compliance, if any; and describe the resources and authorities available to the U.S. Attorney’s Office for the District of Columbia (the Office) and the efforts the Office has made to improve enforcement of the LDA. To respond to our mandate, we used information in the lobbying disclosure database maintained by the Clerk of the House of Representatives (Clerk of the House). To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and spoke to officials responsible for maintaining the data. Although registrations and reports are filed through a single web portal, each chamber subsequently receives copies of the data and follows different data cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases caused by the differences in data processing. For example, Senate staff told us during previous reviews that they set aside a greater proportion of registration and report submissions than the House for manual review before entering the information into the database, and as a result, the Senate database would be slightly less current than the House database on any given day pending review and clearance. 
House staff told us during previous reviews that they rely heavily on automated processing, and that while they manually review reports that do not perfectly match information on file for a given registrant or client, they will approve and upload such reports as originally filed by each lobbyist even if the reports contain errors or discrepancies (such as a variant on how a name is spelled). Nevertheless, we do not have reasons to believe that the content of the Senate and House systems would vary substantially. For this review, we determined that House disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure (LD-2) reports and for assessing whether newly filed registrants also filed required reports. We used the House database for sampling LD-2 reports from the third and fourth quarters of calendar year 2011 and the first and second quarters of calendar year 2012, as well as for sampling year-end 2011 and midyear 2012 political contributions (LD-203) reports and finally for matching quarterly registrations with filed reports. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House, both of which have key roles in the lobbying disclosure process, although we consulted with officials from each office, and they provided us with general background information at our request and detailed information on data processing procedures. To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a stratified random sample of 100 LD-2 reports from the third and fourth quarters of 2011 and the first and second quarters of 2012. We excluded reports with no lobbying activity or with income less than $5,000 from our sampling frame. We drew our sample from 49,286 activity reports filed for the third and fourth quarters of 2011 and the first and second quarters of 2012 available in the public House database, as of our final download date for each quarter. 
One LD-2 report in the sample was amended after the lobbyist was notified of being selected for the sample but prior to the review. As a result, we excluded this report from our sample and replaced it with another LD-2 report for the same quarter. Our sample was not designed to detect differences over time, and we did not conduct tests of significance for changes from 2010 to 2012. Our sample is based on a stratified random selection, and it is only one of a large number of samples that we may have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This interval would contain the actual population value for 95 percent of the samples that we could have drawn. All percentage estimates in this report have 95 percent confidence intervals of within plus or minus 10.5 percentage points or less of the estimate itself, unless otherwise noted. When estimating compliance with certain of the elements we examined, we base our estimate on a one-sided 95 percent confidence interval to generate a conservative estimate of either the minimum or the maximum percentage of reports in the population exhibiting the characteristic. For each sampled LD-2 report, we asked the lobbyists to provide documentation supporting key elements of the report: the amount of income reported for lobbying activities, the amount of expenses reported on lobbying activities, the names of those lobbyists listed in the report, the houses of Congress and federal agencies that they lobbied, and the issue codes listed to describe their lobbying activity. Prior to each interview, we conducted an open source search to identify lobbyists on each report who may have held a covered official position. We reviewed the lobbyists’ previous work histories by searching lobbying firms’ websites, LinkedIn, Leadership Directories, Who’s Who in American Politics, Legistorm, and U.S. newspapers through Nexis. 
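The interval arithmetic described above can be sketched with the standard normal approximation for a sample proportion. This is an illustrative simplification only: the actual estimates account for the stratified design and finite population, so the figures produced here are not the report's exact numbers.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Two-sided ~95 percent normal-approximation confidence interval
    for an estimated proportion p_hat from a simple random sample of n."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - margin, p_hat + margin)

def one_sided_lower_bound(p_hat, n, z=1.645):
    """One-sided 95 percent lower bound: the kind of conservative
    'minimum percentage' estimate described in the methodology."""
    return p_hat - z * math.sqrt(p_hat * (1 - p_hat) / n)

# The worst-case two-sided margin for n = 100 occurs at p_hat = 0.5,
# where it is about 9.8 percentage points.
lo, hi = proportion_ci(0.5, 100)
```

With n = 100, the worst-case simple-random-sample margin of about 9.8 percentage points is consistent in magnitude with the "plus or minus 10.5 percentage points or less" reported once design effects are included.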
Prior to 2008, lobbyists were only required to disclose covered official positions held within 2 years of registering as a lobbyist for the client. HLOGA amended that time frame to require disclosure of positions held up to 20 years before the date the lobbyists first lobbied on behalf of the client. Lobbyists are required to disclose previously held covered official positions either on the client registration (LD-1) or on the first LD-2 report when the lobbyist is added as “new.” Consequently, those who held covered official positions may have disclosed the information on the LD-1 or an LD-2 report filed prior to the report we examined as part of our random sample. Therefore, where we found evidence that a lobbyist previously held a covered official position, we conducted an additional review of the publicly available Secretary of the Senate or Clerk of the House database to determine whether the lobbyist properly disclosed the covered official position. Finally, if a lobbyist appeared to hold a covered position that was not disclosed, we asked for an explanation at the interview with the lobbying firm to ensure that our research was accurate. Despite our rigorous search, it is possible that we failed to identify all previously held covered official positions for all lobbyists listed. Thus, our estimate of the proportion of reports with lobbyists who failed to properly disclose covered official positions is a lower-bound estimate of the minimum proportion of reports that failed to report such positions. In addition to examining the content of the LD-2 reports, we confirmed whether year-end 2011 or midyear 2012 LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample. Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. 
As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports. To determine if the LDA’s requirement for registrants to file a report in the quarter of registration was met for the third and fourth quarters of 2011 and the first and second quarters of 2012, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports. Using direct matching and text and pattern matching procedures, we were able to identify matching disclosure reports for 2,753, or 90 percent, of the 3,074 newly filed registrations. We began by standardizing client and registrant names in both the report and registration files (including removing punctuation and standardizing words and abbreviations, such as “company” and “CO”). We then matched reports and registrations using the House identification number (which is linked to a unique registrant-client pair), as well as the names of the registrant and client. For reports we could not match by identification number and standardized name, we also attempted to match reports and registrations by client and registrant name, allowing for variations in the names to accommodate minor misspellings or typos. For these cases, we used professional judgment to determine whether cases with typos were sufficiently similar to consider as matches. We could not readily identify matches in the report database for the remaining registrations using electronic means. To assess the accuracy of the LD-203 reports, we analyzed stratified random samples of LD-203 reports from the 31,894 total LD-203 reports. The first sample contains 80 reports of the 10,948 reports with political contributions and the second contains 80 reports of the 20,946 reports listing no contributions. Each sample contains 40 reports from the year-end 2011 filing period and 40 reports from the midyear 2012 filing period. 
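The standardize-then-match procedure described above can be illustrated with a short sketch. The field names and the abbreviation table here are hypothetical (the report does not publish the actual substitution list or record layout), and real matching also involved the fuzzy comparison and professional judgment noted in the text.

```python
import re

# Illustrative abbreviation map; the actual substitution list used in
# the review is not published.
ABBREVIATIONS = {"company": "co", "incorporated": "inc", "corporation": "corp"}

def standardize(name):
    """Lowercase, strip punctuation, and normalize common abbreviations."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    words = [ABBREVIATIONS.get(w, w) for w in name.split()]
    return " ".join(words)

def match_registrations(registrations, reports):
    """First pass: exact match on the House identification number.
    Second pass: match on standardized registrant and client names."""
    report_ids = {r["house_id"] for r in reports}
    report_names = {(standardize(r["registrant"]), standardize(r["client"]))
                    for r in reports}
    matched = []
    for reg in registrations:
        if reg["house_id"] in report_ids:
            matched.append(reg)
        elif (standardize(reg["registrant"]), standardize(reg["client"])) in report_names:
            matched.append(reg)
    return matched
```

For example, "Smith & Jones Co." and "Smith & Jones Company" both standardize to "smith jones co", so a registration and a report filed under those two variants would be paired in the second pass.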
These samples allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the reports without contributions to within a 95 percent confidence interval of plus or minus 8.6 percentage points or less, and to within 4.7 percentage points of the estimate when analyzing both samples together. Our sample was not designed to detect differences over time, and we did not conduct tests of significance for changes from 2010 to 2012. We analyzed the contents of the LD-203 reports and compared them to contribution data found in the publicly available Federal Election Commission’s (FEC) political contribution database. We interviewed staff at the FEC responsible for administering the database and determined that the data reliability is suitable for the purpose of confirming whether an FEC-reportable contribution listed in the FEC database had been reported on an LD-203 report. We compared the FEC-reportable contributions reported on the LD-203 reports with information in the FEC database. The verification process required text and pattern matching procedures, and we used professional judgment when assessing whether an individual listed is the same individual filing an LD-203. For contributions reported in the FEC database and not on the LD-203 report, we asked the lobbyists or organizations to explain why the contribution was not listed on the LD-203 report or to provide documentation of those contributions. As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist’s LD-203 report. Our estimate of the percentage of reports that omit contributions is a lower-bound estimate. We did not estimate the percentage of other non-FEC political contributions that were omitted because they tend to constitute a small minority of all listed contributions and cannot be verified against an external source. 
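At its core, the comparison of FEC records against LD-203 line items is a set difference on a normalized key. A minimal sketch follows, with hypothetical field names; the real verification also required the fuzzy text matching and professional judgment described above, so flagged records are follow-up candidates rather than confirmed omissions.

```python
def flag_omitted(fec_records, ld203_records):
    """Return contributions found in the FEC database but not listed on
    the filer's LD-203 report. Results are candidates for follow-up with
    the filer, not proof of noncompliance."""
    def key(c):
        # Normalize the recipient name lightly; amount and date are
        # compared exactly in this sketch.
        return (c["recipient"].strip().lower(), c["amount"], c["date"])
    reported = {key(c) for c in ld203_records}
    return [c for c in fec_records if key(c) not in reported]
```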
To identify challenges to compliance, we used structured interviews to obtain views on any challenges to compliance from the 90 different lobbying firms included in our sample of 100 LD-2 reports; results are based on the number of different firms rather than on the total number of interviews conducted. To obtain their views, we asked them to rate their ease with complying with the LD-2 disclosure requirements using a scale of “very easy,” “somewhat easy,” “somewhat difficult,” or “very difficult.” In addition, using the same scale, we asked them to rate the ease of understanding the terms associated with LD-2 reporting requirements. To describe the resources and authorities available to the Office and its efforts to improve its enforcement of the LDA, we interviewed officials from the Office and obtained updated information on the capabilities of the system they established to track and report compliance trends and referrals, and other practices established to focus resources on enforcement of the Act. The Office provided us with updated reports from the tracking system on the number and status of referrals and chronically noncompliant offenders. The mandate does not include identifying lobbyists who failed to register and report in accordance with LDA requirements, or determining whether, for those lobbyists who did register and report, all lobbying activity or contributions were disclosed. We conducted this performance audit from June 2012 through April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: List of Registrants and Clients for Sampled Lobbying Disclosure Reports The random sample of lobbying disclosure reports we selected was based on unique combinations of registrant lobbyists and client names (see table 7). Appendix III: List of Sampled Lobbying Contribution Reports with Contributions and No Contributions Listed See table 8 for a list of lobbyists and lobbying firms from our random sample of lobbying contribution reports with contributions. See table 9 for a list of lobbyists and lobbying firms from our random sample of lobbying contribution reports without contributions. Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact J. Christopher Mihm, (202) 512-6806 or [email protected]. Staff Acknowledgments In addition to the contact named above, Bill Reinsberg, Assistant Director; Shirley Jones, Assistant General Counsel; Crystal Bernard, Amy Friedlander, Robert Gebhart, Lois Hanshaw, Stuart Kaufman, Natalie Maddox, and Anna Maria Ortiz made key contributions to this report. Assisting with lobbyist file reviews were Vida Awumey, Peter Beck, Benjamin Crawford, Alexandra Edwards, Hayley Landes, Latesha Love, Alan Rozzi, Stacy Spence, Megan Taylor, Daniel Webb, Jason Wildhagen, and Weifei Zheng.
HLOGA requires lobbyists to file quarterly lobbying disclosure reports and semiannual reports on certain political contributions. HLOGA also requires that GAO annually (1) audit the extent to which lobbyists can demonstrate compliance with disclosure requirements, (2) identify challenges to compliance that lobbyists report, and (3) describe the resources and authorities available to the U.S. Attorney's Office for the District of Columbia and the efforts the Office has made to improve its enforcement of the LDA, as amended. This is GAO's sixth report under the mandate. GAO reviewed a stratified random sample of 100 quarterly disclosure LD-2 reports filed for the third and fourth quarters of calendar year 2011 and the first and second quarters of calendar year 2012. GAO also reviewed two random samples totaling 160 LD-203 reports from year-end 2011 and midyear 2012. This methodology allowed GAO to generalize to the population of 49,286 disclosure reports with $5,000 or more in lobbying activity and 31,894 reports of federal political campaign contributions. GAO also met with officials from the Office to obtain updated statuses on the Office's efforts to focus resources on lobbyists who fail to comply. GAO provided a draft of this report to the Attorney General for review and comment. The Assistant U.S. Attorney for the District of Columbia responded on behalf of the Attorney General that the Department of Justice had no comments on the draft of this report. Most lobbyists were able to provide documentation to demonstrate compliance with the disclosure requirements of the Lobbying Disclosure Act of 1995 (LDA), as amended by the Honest Leadership and Open Government Act of 2007 (HLOGA). 
For lobbying disclosure reports (LD-2), GAO estimates that 97 percent could provide documentation to support reported income and expenses; 74 percent of the reported income and expenses were properly rounded to the nearest $10,000; 85 percent filed year-end 2011 or midyear 2012 federal political campaign (LD-203) reports as required; and a minimum of 15 percent of all LD-2 reports did not properly disclose formerly held covered positions as required. The LDA defines several types of covered positions, including members of Congress and their staff and certain executive branch officials. These findings are consistent with reviews from prior years. For LD-203 reports, GAO estimates that a minimum of 6 percent of all LD-203 reports omitted one or more reportable political contributions that were documented in the Federal Election Commission (FEC) database. Twenty-eight lobbyists in GAO's sample, compared to 17 last year, stated that they planned to amend their lobbying registration (LD-1) or LD-2 report following GAO's review to correct one or more data elements. Of these, 19 lobbyists had filed an amended report as of March 2013. The majority of newly registered lobbyists filed LD-2 reports as required. Lobbyists are required to file LD-2 reports for the quarter in which they first register. GAO could identify corresponding reports on file for lobbying activity for 90 percent of registrants, which is similar to last year's findings. Most lobbyists in our sample rated the terms associated with LD-2 reporting as "very easy" or "somewhat easy" to understand with regard to meeting their reporting requirements. However, a few cited challenges to complying with the LDA, as amended, such as differentiating between lobbying and non-lobbying activities. The U.S. 
Attorney's Office for the District of Columbia (the Office) stated that it has sufficient authority and resources to enforce compliance with LDA requirements, including imposing civil or criminal penalties for noncompliance. Officials reported that during the 2012 reporting period, the Office took steps to pursue legal action, made phone contacts, or sent emails to eight registrants that had been repeatedly referred for failure to file required disclosure reports. Four of the registrants filed the outstanding reports or terminated their registration after being contacted by an Assistant U.S. Attorney. Additionally, in September 2012, the Office reached settlement agreements with two of the registrants for $50,000 and $30,000 in civil penalties. As of March 2013, both firms have paid their fines in full and complied with their ongoing reporting requirements. In February 2013, the Office sent demand letters to the two other registrants who, as of March 2013, have not responded.
Background INKSNA requires the President to provide reports on March 14 and September 14 of each year to the Senate Committee on Foreign Relations and the House Committee on Foreign Affairs, in which he or she identifies every foreign person for whom there is credible information that the person has transferred to or from Iran, North Korea, or Syria certain goods, services, or technologies, mostly those controlled through four multilateral export control regimes and one treaty. Table 1 provides details on the purpose and items restricted in each one. In addition to these controlled items, INKSNA also includes a category of reportable items for goods, services, or technology that, on a case-by-case basis, have the potential to make a material contribution to the development of nuclear, biological, conventional, or chemical weapons, or of ballistic or cruise missile systems. According to State officials, INKSNA’s broad list of reportable transfers and acquisitions and discretionary authority to impose sanctions provide the U.S. government an important and flexible tool to achieve its nonproliferation objectives, with sanctioning capabilities found in no other U.S. law. INKSNA’s scope includes any transfers to or from Iran on or after January 1, 1999; Syria on or after January 1, 2005; and North Korea on or after January 1, 2006. INKSNA also authorizes the President to apply a range of measures against any foreign person the President has identified in a report he or she has provided to the congressional committees. The measures include (1) a prohibition on U.S. government procurement of goods or services from the person and a ban on imports of products produced by that person, except to the extent the Secretary of State otherwise may determine; (2) a prohibition on U.S. government provision of assistance, except to the extent the Secretary of State otherwise may determine; (3) a prohibition on U.S. government sales of any item on the U.S. 
Munitions List, and the termination of any ongoing sales of any defense articles, defense services, or design and construction services controlled under the Arms Export Control Act; and (4) that new licenses will be denied, and any existing licenses suspended, for transfers of items controlled under the Export Administration Act of 1979 or the Export Administration Regulations. Once imposed, INKSNA sanctions are in effect for 2 years at State’s discretion. In addition, INKSNA requires the President to notify the congressional committees of his or her rationale for not imposing sanctions against foreign persons identified in the report. Under INKSNA, the President cannot apply sanctions to reported persons if he or she finds that (1) the person did not “knowingly transfer to or acquire from Iran, North Korea, or Syria” reportable items; (2) the goods, services, or technology “did not materially contribute to the efforts of Iran, North Korea or Syria, as the case may be, to develop nuclear, biological, or chemical weapons, or ballistic or cruise missile systems, or weapons listed on the Wassenaar Arrangement Munitions List,” (3) the person is subject to the jurisdiction of a government that is an adherent to “one or more relevant nonproliferation regimes” and the transfer was consistent with such regime’s guidelines; or (4) the government of jurisdiction “has imposed meaningful penalties” on the identified person. The President has delegated INKSNA authorities to State. The Deputy Secretary of State exercises this authority by making sanctions determinations, and authorizing delivery of INKSNA reports to the committees. State arranges to have the names of the foreign persons deemed to have engaged in the sanctioned transfers or acquisitions published in the Federal Register soon after it delivers the reports to the committees. 
From 2006 to May 2015, State imposed sanctions on 82 foreign persons under INKSNA deemed to have engaged in reportable transfers to or acquisitions from Iran, North Korea, and Syria, primarily on persons located in China, Iran, Syria, and Sudan (see table 2). Seventeen of these foreign persons had INKSNA sanctions imposed on them more than once. State Is Not Providing Reports to Congressional Committees Every 6 Months as Required by INKSNA State is not providing reports to the two cognizant congressional committees in accordance with INKSNA’s 6-month reporting requirements. Since 2006, it has provided six reports covering a 6-year period (2006 through 2011), instead of 18 reports covering a 9-year period (2006 through 2014), as required by INKSNA. If State had submitted a report every 6 months during this 6-year period as required by law, it would have produced 11 reports. Instead, each of the six reports covered a period spanning an entire calendar year and focused on transfers that first came to State’s attention in one of the six calendar years occurring between 2006 and 2011 (see fig. 1). State provided these six reports at irregular intervals that have averaged 16 months, ranging between 7 and 22 months apart. It provided its most recent report in December 2014, 22 months after its previous report. The interval between the last two reports was the longest interval between reports since the beginning of 2006. State Has Not Established a Process That Allows It to Comply with INKSNA’s Required 6-Month Reporting Cycle State has not established a process that would allow it to comply with the 6-month reporting cycle required by INKSNA. State uses a complex and lengthy process that involves multiple interagency and internal reviews to compile credible information about a group of reportable transfers that first came to its attention in a single calendar year, and to determine whether to impose sanctions on foreign persons associated with those transfers. 
Because its process focuses on a group of transfers that came to its attention in a single year, State delays providing a report to the committees until it has resolved concerns it may have regarding any of the transfers in the group covered in the report and determined whether to sanction persons associated with any of those transfers. State officials begin preparing a new report every December, regardless of whether they have completed and provided all previous reports. State officials have told GAO they sometimes must delay work on one draft report to work on another, and that they can make only a limited amount of progress toward completing a new report before they have completed earlier reports. According to State, they use this approach because each report builds on the previous installment, including any determinations to defer a decision on sanctions and any determinations on whether to add nonlisted items to reportability on a case-by-case basis. As a result, State required almost 3 years to prepare its December 2014 report, which addressed transfers that first came to its attention in 2011. State Uses a Complex Process Involving Multiple Interagency and Internal Reviews According to officials in the office responsible for producing the report—State’s Bureau of International Security and Nonproliferation’s Office of Missile, Biological, and Chemical Nonproliferation (ISN/MBC)—State’s process for implementing INKSNA consists of the following 12 steps, as depicted in figure 2 and described in appendix II. State officials told us that while the four State-led interagency working groups (named in figure 2 above) meet on a regular basis to evaluate reporting from a wide variety of sources on transfers and flag activity that might trigger INKSNA or other legal authorities, State typically begins the report preparation process, starting with compiling the activity for the draft report, once the relevant calendar year ends. 
ISN/MBC, working with other agencies and the Intelligence Community, compiles a list of transfers that first came to its working groups’ attention during the previous calendar year and then provides the list, along with any diplomatic histories associated with each transfer, to the Intelligence Community for fact checking and to determine whether the names of the foreign persons associated with the transfers are releasable to the Federal Register if State imposes sanctions. State then distributes the corrected package of transfers and any other information to the relevant interagency working group that includes the other federal departments involved in this process—the Department of Defense (DOD), the Department of Energy (DOE), and the Department of Commerce (DOC). Next, State chairs an interagency Policy Committee meeting (held at the deputy assistant secretary or office director level), where State and other members of the interagency working groups provide advice on whether each transfer is reportable under INKSNA and whether it should result in sanctions. This meeting is followed by reviews by State officials in geographic and functional bureaus. ISN/MBC includes the result of these reviews in an action memo that it sends to the Deputy Secretary of State for the final determination as to which transfers to include in the report and which persons to sanction in connection with those transfers. Following the Deputy Secretary’s determinations, State officials prepare the final version of the report, transmit it to the cognizant congressional committees, and arrange to have sanctions notices published in the Federal Register. 
State’s Process Requires on Average More than 2 Years to Complete a Report

Using this process, State has required, on average, more than 2 years to produce each of the six INKSNA reports that it provided to the cognizant congressional committees between 2006 and 2015. It required almost 3 years to complete the report it provided to the committees in December 2014 covering calendar year 2011. Our analysis of the production times of State’s six INKSNA reports indicates that the three longest stages of State’s process involve State’s compilation of potential reportable transfers into a single list (steps 1 and 2); State’s scheduling and holding of the sub-Interagency Policy Committee meeting (held at the deputy assistant secretary or office director level) to discuss the transfers (steps 4 and 5); and the Deputy Secretary’s review of the action memo in making his or her determinations (steps 8 and 9). For example, concerning the report State provided in December 2014, the Deputy Secretary required more than a year to review the action memo for transfers State learned of in 2011 and to determine which persons to identify in the report and whether to apply sanctions. State officials told us that a variety of political concerns, such as international negotiations and relations with countries involved in transfers, can delay State’s INKSNA process. They stated that these concerns can particularly delay the steps that involve internal State approvals, including the Deputy Secretary’s review and sanctions determination. State’s practice of focusing each report on a group of transfers that first came to its attention in a single calendar year also contributes to the length of time State’s process requires to complete a report.
State does not provide a report to the congressional committees until it has resolved concerns it may have about every one of the transfers in the group covered in the report and determined whether to impose sanctions on persons associated with each of the transfers in that group. As a result, a single problematic case in a group can delay State’s provision of the report, which may include other INKSNA-reportable transfers that State may be otherwise ready to report to Congress. Because of this practice of focusing each report on a single year’s group of transfers and acquisitions, State officials must either complete a report within a year or manage the preparation of a backlog of multiple reports, each covering a different calendar year and each in a different stage of State’s process. Under State’s process, State officials begin preparing a new report every December, regardless of whether they have completed and provided all previous reports. State data indicate that State officials were simultaneously processing three reports, covering calendar years 2011, 2012, and 2013, in the last 6 months of 2014. State officials have told us that they sometimes must delay work on one report to work on another. For example, State officials told us that they delayed work on the report State issued in December 2014 (which covered calendar year 2011) for 4 months so that they could focus on completing delivery of the report to Congress covering calendar year 2010. Because of this process, State’s delays in reporting on transfers and acquisitions have recently increased. As shown in figure 3, State’s report on transfers that first came to its attention in 2010 was provided 26 months after the end of 2010, while its report on transfers that first came to its attention in 2011 was provided 36 months after the end of 2011—a nearly 40 percent increase in the time elapsed between the year addressed and the date that State provided the report.
State’s draft report on transfers it first learned of in 2012 is now in its 30th month of preparation and, as of April 2015, had fallen 9 months behind the pace set by its predecessor. State officials cited two reasons for State’s decision to review and report on transfers in groups covering a single year. First, the parties involved in the complex, multistep process can review and clear a single group of transfers per year in sequence more quickly and with less confusion than would be possible with the 6-month cycle required by INKSNA. Officials stated, for example, that a shorter cycle could be confusing because it could require these parties to make decisions on overlapping groups of transfers in different stages of the process in the same time frame. While State officials stated they intend to institute 6-month reports once they have cleared the backlog, they acknowledged they might still find it difficult to meet this requirement. Second, INKSNA allows State to add to reportability transfers of items (goods, services, or technologies) not on any of the multilateral control lists that nonetheless make material contributions to WMD. State officials stated that they must complete reports sequentially to ensure that they correctly identify transfers of newly reportable items.

State’s Process Limits Its Ability to Minimize the Time Required to Impose INKSNA Sanctions

By using a process that does not comply with INKSNA’s 6-month reporting cycle, State has limited its ability to minimize delays affecting the potential imposition of INKSNA sanctions. INKSNA does not allow State to impose INKSNA sanctions on foreign persons until State has identified them in a report to the congressional committees. Because State does not have a process enabling it to provide INKSNA reports every 6 months as required, it cannot impose INKSNA sanctions on foreign persons within the time frames established by INKSNA.
Those time frames would allow State to impose sanctions on a foreign person between 6 and 12 months after it first obtained credible information of the person’s involvement in a reportable transfer. For example, in any given year in which State decided to sanction a person for a reported transfer or acquisition, the sanction would be effective no later than December if State had learned about the transfer between January 1 and June 30 of that year and had identified that person in a report provided to the committees in September, as required by INKSNA. However, State’s delay in providing its reports to congressional committees between 2006 and 2014 may undermine its ability to impose potential INKSNA sanctions within the time frames defined in INKSNA. Because State may not impose INKSNA sanctions on foreign persons until it has identified them in a report, its late reports may have delayed by more than 2 years State’s imposition of sanctions on some of these foreign persons. Our analysis of the reports covering calendar years 2006 through 2011 indicates that State was not able to impose sanctions on foreign persons deemed responsible for transactions included in the reports until an average of 28 months after the end of each reporting period. The intervals ranged between 22 and 36 months. State’s delay in providing its most recent report may have imposed the longest delay on State’s ability to impose INKSNA sanctions, which are discretionary. State imposed sanctions on 23 foreign persons in December 2014, when it provided its report on transfers it first learned of in 2011. The sanctions pertained to transfers that had first come to State’s attention between 36 and 48 months earlier. If State had established a process enabling it to provide reports to the committees every 6 months, it would have been able to impose sanctions on one or more of these 23 persons more than 2 years earlier.
State officials acknowledged these delays, but told us that they believe that the threat of imposing sanctions can be as effective as the imposition of sanctions in achieving the behavior changes that sanctions are intended to motivate. They stated that at various times in the reporting cycle, State may use the information it is compiling to meet the INKSNA reporting requirement to notify foreign governments about suspected transfers taking place within their jurisdictions and request that they take appropriate action. This use is in accordance with provisions in INKSNA that (1) encourage State to contact foreign governments with jurisdiction over the person, in order to afford the government the opportunity to provide explanatory, exculpatory, or additional information with respect to the transfer, and (2) exempt foreign persons from INKSNA sanctions if the foreign government has imposed meaningful penalties on that person. They noted that the threat of INKSNA sanctions itself can prompt foreign governments to take actions to halt transfers or to penalize or deter persons within their jurisdiction who are suspected of conducting these transfers, which may stop the activity before it meets the threshold for reporting under INKSNA.

Conclusions

State officials praise INKSNA as a valuable tool in combating proliferation of WMD associated with Iran, Syria, and North Korea. However, State has established a complex and lengthy reporting process that prevents it from providing INKSNA reports on a 6-month schedule to the Senate Committee on Foreign Relations and the House Foreign Affairs Committee, as required by INKSNA. This process may limit State’s ability to impose potential sanctions at an earlier date, in accordance with the time frames established in INKSNA.
While State officials state that their process of reviewing and reporting on transfers in groups covering a single calendar year allows them to prepare reports more quickly and with less confusion than groups covering 6 months, our analysis demonstrates that State is falling further and further behind in providing the reports and is now juggling a backlog of draft reports at different stages of that process. In addition, State officials told us that the threat of INKSNA sanctions can be an effective deterrent. However, State’s current process has increased the interval of time between the occurrence of a reportable transfer and State’s decision to impose sanctions on the foreign persons identified by State as responsible for those transfers. The imposition of sanctions no sooner than 3 or more years after the transfer occurred may diminish the credibility of the threatened sanction. In addition, reporting delays of this magnitude are not consistent with the time frames established by Congress when it enacted INKSNA.

Recommendation for Executive Action

The Secretary of State should reconsider State’s INKSNA process to ensure that it (1) complies with INKSNA’s 6-month reporting cycle and (2) minimizes delays in its ability to opt to impose sanctions.

Agency Comments and Our Evaluation

We provided a draft of this report to the Departments of State, Commerce, Defense, Energy, and Treasury for comment. State provided written comments, which we reprinted in appendix III, as well as technical comments, which we incorporated as appropriate. Commerce, Defense, Energy, and Treasury declined to provide comments. In its written comments, State concurred with our recommendation but said it needs to clear its backlog before delivering reports semiannually. Moreover, State expressed concern that the draft report does not take into account the inherent difficulties of meeting the law’s very tight deadlines and the substantial increases in the scope of reportable activity.
In addition, State said that the report does not place sufficient priority on the need for careful preparation and thorough vetting. In response, we note that the report shows that the time State requires to produce the reports for Congress has increased since 2006, the start of the period covered by our report, despite no changes to the scope of the law over that period. We recognize State’s need to carefully prepare and thoroughly vet each INKSNA report, and that some transfers that are reportable under INKSNA may require several years to investigate and vet prior to being included in an INKSNA report. However, our review found that State’s process could allow a single such problematic transfer to delay State’s reporting to Congress of other transfers that State may have already investigated and vetted. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretaries of State, Commerce, Defense, Energy, and Treasury. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report (1) examines the Department of State’s (State) timeliness in providing Iran, North Korea, and Syria Nonproliferation Act (INKSNA) reports; (2) reviews State’s reporting process; and (3) identifies the potential impact of State’s reporting timeliness on its imposition of sanctions.
To examine State’s timeliness in providing INKSNA reports, we reviewed the reporting requirements established under section 2(b) of INKSNA and the six reports provided by State to the House Committee on Foreign Affairs and the Senate Committee on Foreign Relations covering the period between calendar year 2006—when transfers and acquisitions involving North Korea were first incorporated into the INKSNA reporting requirements—and calendar year 2011, when the latest report was provided by State to the two committees in December 2014. We reviewed the Federal Register entries announcing the sanctions on 82 of the foreign persons named in the six reports and the dates those sanctions became effective. We also interviewed officials from the office within State responsible for producing the reports—the Office of Missile, Biological, and Chemical Nonproliferation in the Bureau of International Security and Nonproliferation (ISN/MBC)—as well as officials from the Department of Defense (DOD) and the Department of Energy (DOE) to confirm the timing of these reports. To review State’s reporting process, we reviewed State documents and interviewed officials at State, DOD, and DOE to determine the extent to which each agency participated in the State-led interagency working groups that identify transfers potentially meeting INKSNA’s reporting and sanctions criteria, as well as their role in the sub-Interagency Policy Committee meetings that voted on which transfers to recommend for reporting and for sanctions. Using the information from these interviews and documents provided by State, we developed a graphic to depict State’s process. We requested data from State on the length of time it took to accomplish particular steps in the process for the last six reports and analyzed those data to determine where delays in the process were occurring.
We also identified the date that State provided each report and determined the number of months separating that date from the end of the calendar year each report addresses. On the basis of our review, we determined that the data received from the State Department were sufficiently reliable for our analysis of State’s process. In addition, we interviewed Department of Commerce (Commerce) and Department of the Treasury (Treasury) officials to identify their participation in the INKSNA reporting process. To identify the potential impact of the timeliness of the INKSNA reports on the imposition of sanctions, we reviewed the deadlines for the imposition of sanctions established in sections 2(b) and 3(c) of INKSNA, the 2006-2011 calendar year INKSNA reports, and the House report that accompanied the bill that became the Iran Nonproliferation Act of 2000. We also interviewed officials from State to discuss the timing and effectiveness of the sanctions.

Appendix II: State’s Iran, North Korea, and Syria Nonproliferation Act (INKSNA) Process

According to officials from the Department of State (State) Office of Missile, Biological, and Chemical Nonproliferation in the Bureau of International Security and Nonproliferation (ISN/MBC), State’s process for producing the Iran, North Korea, and Syria Nonproliferation Act (INKSNA) reports consists of the following steps.

1. Four State-led interagency working groups meet on a regular basis to evaluate reporting from a wide variety of sources on transfers of proliferation concern. The groups also identify activity relevant to INKSNA or other legal authorities.

2. ISN/MBC solicits lists of transfers deemed potentially reportable under INKSNA from the four working groups based on information received during the reporting year. ISN/MBC adds the diplomatic history describing efforts to address transfers with relevant foreign governments, creating a package of information on transfers.

3.
ISN/MBC sends the package of transfers to the Intelligence Community for its members to check the information for accuracy and determine whether foreign persons’ names are releasable to the Federal Register if State decides to impose sanctions on them.

4. ISN/MBC receives a corrected package from the Intelligence Community and sends it out to the federal departments involved in the interagency process (the Departments of Defense, Energy, and Commerce), and the National Security Council (NSC) calls for a sub-Interagency Policy Committee (IPC) meeting to be scheduled to discuss the transfers.

5. The sub-IPC discusses each transaction. Attendees provide advice on whether each transfer is reportable under INKSNA and whether it should result in sanctions.

6. ISN/MBC sends the package of transfers, along with the results of the sub-IPC meeting, to other relevant State regional and functional bureaus to obtain their views and approval.

7. ISN/MBC compiles a draft action memo that contains the recommended outcome for each transfer. The memo also contains the views of the attendees from the sub-IPC meeting. ISN and other relevant management levels clear the memo.

8. ISN sends the action memo to the Office of the Deputy Secretary (D) to review the transfers and the recommended actions and to conduct iterative rounds of questions and consultations on certain transfers with other State offices before the memo is ready for the Deputy Secretary of State.

9. The Deputy Secretary of State approves the action memo once he or she has made a decision on every transfer for the given calendar year, and D sends it back to ISN/MBC.

10. ISN/MBC prepares (1) the final INKSNA report for the committees and (2) the draft Federal Register notice. It then sends them to the State Bureau of Legislative Affairs (H).

11. H adds a cover letter and provides the report to the clerks/security officers of the recipient committees: the House Committee on Foreign Affairs and the Senate Committee on Foreign Relations.
12. Within days, the Federal Register publishes the notice announcing the names of the foreign persons who have been sanctioned.

Appendix III: Comments from the Department of State

GAO Comments

Comment 1: The scope of INKSNA, as currently written, has not changed since 2006, the starting point for GAO’s analysis. The report shows that the time State requires to produce the reports for Congress has increased since 2006, despite no changes to the scope of the law. While INKSNA’s 6-month reporting deadlines may be tight, the report demonstrates that the State Department should consider more efficient processes for meeting those deadlines. For example, State’s practice of reporting transfers in entire groups could allow a single problematic transfer to delay the reporting of other transfers that State may have already investigated and vetted.

Comment 2: We recognize State’s need to carefully prepare and thoroughly vet each INKSNA report. We also recognize that some transfers that are reportable under INKSNA may require several years to investigate and vet prior to being included in an INKSNA report. However, our review found that State’s process could allow a single such problematic transfer to delay State’s reporting to Congress of other transfers that State may have already investigated and vetted.

Comment 3: The report highlights the fact that State has opted to submit annual reports instead of the 6-month reports required by law. However, it does not assume that State’s decision to do so is the key driver of the current backlog. The report instead calls attention to State’s current process, which could allow a single problematic case in a group to delay its reporting on other transfers within that group. We also note that the report demonstrates that the backlog is growing and is not, as State suggests, being eliminated.
Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact:

Staff Acknowledgments: In addition to the contact named above, Pierre Toureille (Assistant Director), B. Patrick Hickey, Jennifer Young, Ashley Alley, Tina Cheng, Debbie Chung, Justin Fisher, and Judy McCloskey made key contributions to this report.
The United States uses sanctions to curb weapons of mass destruction proliferation. Under INKSNA, the President is required every 6 months to provide reports to two congressional committees that identify every foreign person for whom there is credible information that the person has transferred certain items to or from Iran, North Korea, or Syria. INKSNA authorizes the President to impose sanctions on the identified person and requires him to provide justification to the two committees if sanctions are not imposed. The President has delegated this authority to State. State's Deputy Secretary makes determinations about whether to impose sanctions. GAO was asked to review State's INKSNA implementation. This report (1) examines State's timeliness in providing INKSNA reports, (2) reviews State's reporting process, and (3) identifies the potential impact of its reporting timeliness on the imposition of sanctions. GAO analyzed data and met with officials from the Departments of State, Defense, and Energy, and met with officials from the Department of Commerce. The Department of State (State) is not providing reports to congressional committees in accordance with the 6-month reporting requirements of the 2006 Iran, North Korea, and Syria Nonproliferation Act (INKSNA). Since 2006, it has provided six reports covering a 6-year period (2006 through 2011), instead of 18 reports covering a 9-year period (2006 through 2014), as required by INKSNA. State provided these six reports at irregular intervals averaging 16 months. It provided its most recent report in December 2014, 22 months after it had provided the prior report. State has not established a process that would allow it to comply with the 6-month reporting cycle required by INKSNA. It uses a complex and lengthy process that involves multiple interagency and internal reviews. 
Because it processes cases in calendar-year groups, State delays providing a report to the committees until it has resolved all concerns and determined whether to impose sanctions for each transfer in the group. It begins preparing a new report every December, regardless of whether it has completed all previous reports, with the result that State officials sometimes work on several reports simultaneously and may delay work on one report to work on another. State required nearly 3 years to prepare its December 2014 report on transfers that first came to its attention in 2011. Officials told GAO that negotiations and relations with countries can delay the process and that assessing transfers in annual groups reduces prospects for confusion among the parties involved in the process (see figure). By not complying with INKSNA's 6-month reporting cycle, State may have limited its ability to minimize delays in choosing to impose INKSNA sanctions. INKSNA requires State to identify foreign persons in a report before opting to impose sanctions on them. As a result, State did not impose INKSNA sanctions on 23 persons for 2011 transfers until December 2014, when it provided its report addressing 2011 transfers. While officials told GAO that threats of possible sanctions can deter questionable transfers, prolonged delays in eventually imposing potential INKSNA sanctions could erode the credibility of such threats and INKSNA's utility as a tool in helping to curb weapons of mass destruction proliferation associated with Iran, Syria, and North Korea.
Background

The ACTD process is intended to be much more flexible and streamlined than DOD’s formal acquisition process and, in turn, to save time and money. Under the ACTD program, prototypes are developed that provide users with the opportunity to demonstrate and assess the prototypes’ capabilities in realistic operational scenarios. From these demonstrations, users can refine operational requirements, develop an initial concept of operations, and determine the military utility of the technology before deciding whether additional units should be purchased. Not all projects are selected for transition into the normal acquisition process. Specifically, potential users can conclude that the technology (1) does not have sufficient military utility and that acquisition is not warranted or (2) has sufficient utility but that only the residual assets of the demonstration are needed and no additional procurement is necessary. Separate technologies within one project may even have varied outcomes. DOD’s traditional approach to developing and buying weapons—which takes an average of 10 to 15 years—is marked by four phases: exploring various weapon concepts, defining what the specific weapon system will look like, refining plans through systems development and demonstration, and producing the equipment in larger-scale quantities and operating and supporting it in the field. Before a program can proceed to each phase, defense officials review its progress to evaluate its ability to meet performance goals and to determine whether risk is under control. The ACTD process is marked by three phases: selection of the projects, demonstration of the technologies, and residual use of prototypes and/or their transition to acquisition programs if the services or defense agencies decide to acquire more. The selection process begins via a data call to both the research and development and warfighting communities.
The “Breakfast Club,” a panel of technology experts from various organizations, reviews the potential candidates. Candidates selected by this panel are submitted to the Joint Requirements Oversight Council for prioritization and then to the Under Secretary of Defense for Acquisition, Technology and Logistics for a final selection. Decisions to move from stage to stage are less formal than in the traditional acquisition process, and the process is managed by a set of Office of the Secretary of Defense (OSD) guidelines, which contain advice and suggestions, as opposed to formal directives and regulations. While ACTD teams are to prepare management plans for the projects that spell out roles and responsibilities, objectives, and approaches, these plans are supposed to be flexible, short (less than 25 pages), and high level. Figure 1 illustrates the major phases of the ACTD process. The ACTD demonstration phase typically lasts an average of 2 to 4 years, with an added 2-year residual phase. According to OSD, this provides ample time to develop fieldable prototypes and to allow users to evaluate them. For less complex systems or systems that are available quickly (e.g., commercial off-the-shelf systems), the time line may be significantly shorter. Similarly, for very complex systems that require extensive integration and developmental testing, more time may be required. A key to keeping the time frame short, according to DOD, is beginning the demonstration with mature technology. This prevents delays associated with additional development and rework. The ACTD process places the highest priority on addressing joint military needs, although some ACTDs focus on service-specific capabilities. For example, DOD has found that combat identification systems across the services needed to be enhanced to reduce fratricide so that systems belonging to individual services and components, and even allies, could work together more effectively.
As a result, it undertook an ACTD project that tested new technology designed to improve the capability of combat forces to positively identify hostile, friendly, and neutral platforms during air-to-surface and surface-to-surface operations. Another ACTD project was designed to demonstrate the capability to conduct joint amphibious mine countermeasure operations. Recently, some ACTD programs have focused on enhancing homeland security with domestic agencies. For example, DOD is now testing a command and control system that will allow emergency personnel first responding to the scene of an attack to talk to each other and have better situational awareness. ACTDs are funded by a variety of sources, including the office within OSD with oversight responsibility for the ACTD program and the military services or defense agencies responsible for conducting the demonstrations and/or the transitions. In fiscal year 2001, a total of $546 million was budgeted for ACTDs—$120 million from OSD and $426 million from the services and agency partners. Participating combatant commands provide additional resources through their support of training, military exercises, and other resources. Funding to acquire and maintain additional units comes from service and agency budgets.

Twenty-one of 24 Projects Transitioned at Least Some Technologies to Users

Of the 24 projects we reviewed, 21 transitioned at least some technologies to users, meaning that users found the technologies had some level of military utility and that a military service or a defense agency chose to accept and fund their transition in the form of residual assets or as an acquisition. For 13 of these projects, the services or agencies decided to acquire more of the items tested and, as a result, transitioned the items into formal acquisition programs. Two of the 13 had no residual assets in use. For 8 projects, the services/agencies decided not to acquire additional items but to continue using the residual assets.
Three projects had no residual assets and no acquisition planned. However, some of these projects experienced mixed outcomes—e.g., some technologies may have ended up in residual use while others were acquired or rejected altogether, or the lead military service may have rejected the technology while other components decided to acquire it. For example:

The Counterproliferation I project consisted of a variety of technologies, including sensors, targeting systems, and advanced weapons, designed to find and destroy nuclear, biological, and chemical facilities. The technologies were used in military operations in Kosovo. For example, an improved infrared sensor that can assess bomb damage to facilities was accepted by the Air Force as an upgrade to its standard targeting pod. Two other technologies—a hard-target-penetrating bomb and a fuzing system—have transitioned to production and are expected to achieve initial operational capability in fiscal year 2003. However, the project’s weapon-borne sensor technology did not prove to be mature enough and was dropped from the ACTD prior to any demonstrations.

The Link-16 project demonstrated interoperability between the Link-16 communications link and other variable message format systems to improve situational awareness, interdiction, surveillance, and close air support. No service has adopted it for formal acquisition, but some regional combatant commanders and lower-level commands have purchased additional systems. Since the system was not adopted across DOD, its utility could not be optimized.

The Military Operations in Urban Terrain project field-tested 128 items designed to enhance operations in urban environments—such as attacking and clearing buildings of enemy troops. Of these, 32 technologies were determined to have merit and were kept as residual items to be further evaluated.
Some of these have already transitioned or are planned for transition to acquisition programs, including a door-breaching round, a man-portable unmanned aerial vehicle, elbow and kneepads, explosive cutting tape, ladders, body armor, and flexible restraining devices. Some Factors Can Hamper the ACTD Process Though the majority of the projects we examined transitioned technologies to users, we identified a range of factors that hampered this process. Specifically: The technology has been too immature to be tested in a realistic setting, leading to possible cancellation of the demonstration. The military services and defense agencies have been reluctant to fund acquisition of ACTD-proven technologies, especially those focusing on joint requirements, because of competing priorities. Appropriate expertise has not been employed for demonstrations and transitions. Transition for software projects has not been adequately planned. DOD lacks specific criteria to evaluate demonstration results, which may cause acquisition decisions to be based on too little knowledge. At times, top-level support can overcome these barriers. But more systemic improvements focused on transition planning and funding commitment could reduce the need for high-level intervention. Figure 3 highlights the specific factors we identified. Technology Maturity Because ACTDs are often conducted during large-scale, force-on-force military exercises, any new systems being tested must be dependable, able to perform as intended, and available on schedule in order not to negatively affect the exercises. As such, DOD has stressed that new technologies proposed for ACTDs should be "mature," that is, they should have already been demonstrated to perform successfully at the subsystem or component level. The technology of the ACTDs in our sample was not always mature.
In some cases, problems were fairly basic, such as a technology having an inadequate power supply or being too heavy and bulky to carry out its intended operation. In other cases, technologies had not reached a point where they could be tested in a realistic setting, forcing users to forego certain parts of a test. For example: The Joint Countermine project tested 15 technologies, including detection systems and clearance/breaching systems. During demonstration, users found that detection technologies had unacceptably high false alarm rates and that a mine and heavy obstacle clearing device was simply too heavy, bulky, slow, and difficult to operate remotely. Moreover, several systems could not be demonstrated on their intended platforms, or even associated with a suitable substitute platform. Further, a number of critical operational sequences, such as launch/recovery, ordnance handling, and system reconfiguration, had not been demonstrated. As a result, only some technologies in this project have transitioned. The Consequence Management project examined 15 technologies designed to identify and respond to a biological warfare threat. During demonstration, users found that some of the items used to collect samples failed to operate, lacked sufficient battery capability, or had switches that broke. None of the other technologies performed flawlessly, and limitations such as size and weight made it apparent that they were not field ready. None of the technologies from this project entered into the acquisition process, nor did DOD continue to use any of the residual assets. Technologies supporting the Joint Modular Lighter System, a project testing a modular causeway system, failed during the demonstration because they had not been properly designed to withstand real-world sea conditions. Consequently, the ACTD was concluded without a demonstration.
The Navigation Warfare project, which focused on validating technologies for electronic warfare countermeasures, was terminated after DOD found that some of the technologies for the project could not be demonstrated. Some of the jamming technologies associated with this project are still being evaluated. The technical maturity of software is also vital to successful demonstrations. If software is not able to work as intended, a project’s demonstration may be limited as a consequence. For this reason, one ACTD operations manager stressed that software technologies should be as mature as possible at the start of the ACTD. One ACTD included in our review experienced problems with software immaturity going into demonstration. Because software technologies in the Battlefield Awareness and Data Dissemination ACTD were not mature, certain planned exercises could not be concluded. Before fiscal year 2002, OSD’s guidance only generally described the expectations for technology maturity and OSD did not use a consistent, knowledge-based method for measuring technology maturity of either hardware or software technologies. Specifically, OSD officials selecting the ACTDs used simple ranking schemes to capture the degree of technical risk after consulting with subject area experts. The results of these efforts were not usually documented. Studies conducted by the Congressional Budget Office in 1998 and DOD’s Inspector General in 1997 also found that without guidelines on how to assess maturity, DOD officials defined mature technology in widely contrasting ways. In the last year, OSD has changed its guidance to address this problem. Specifically, it now requires technology maturity to be assessed using the same criteria—technology readiness levels (TRLs)—that DOD uses to assess technical risk in its formal acquisition programs. This change is discussed in more detail later in this report. 
Sustaining Commitment Although OSD provides start-up funding for ACTDs, the military services and defense agencies are ultimately responsible for financing the acquisition and support of equipment or other items that may result from an ACTD. At times, however, the military services did not want to fund the transition process. This reluctance either slowed the acquisition process or resulted in no additional procurements. Projects particularly affected included those that tested unmanned aerial vehicles and software applications for enhancing the performance of a system to defeat enemy artillery. In other cases, DOD leaders stepped in to support the projects because there was a strong need for the technology and/or an extremely successful demonstration. The Predator is a medium-altitude unmanned aerial vehicle used for reconnaissance that progressed from a concept to a three-system operational capability in less than 30 months. The Predator ACTD was initiated in 1995. Since then, the Predator has been deployed in a range of military operations, most recently in the war in Afghanistan. Twelve systems, each containing four air vehicles, are being procured. The Air Force was designated as the lead service for the ACTD, even though it had shown no interest in this or other unmanned aerial vehicle programs. A transition manager was never assigned to this project. The Defense Airborne Reconnaissance Office was also reluctant to field and support the system beyond the test-bed phase. Further, at one point, the project almost ran out of funds before its end. Nevertheless, the Joint Staff directed the Air Force to accept the system from the Army and the Navy, which had acted as co-lead services throughout the demonstration phase. The Global Hawk is a high-altitude unmanned aerial vehicle designed for broad-area and long-endurance reconnaissance and intelligence missions. It has also been successfully used in recent military missions.
The Air Force was also reluctant to fund this program. Nevertheless, the Air Force eventually had to accept the system because it answered a critical need identified during the Gulf War, was considered a success in demonstration, and received support from the President, the Secretary of Defense, and the Congress. In at least one case, the Precision/Rapid Counter Multiple Rocket Launcher ACTD, DOD did not overcome this reluctance and, in turn, missed an opportunity to acquire important warfighting capabilities with joint applications. This project successfully demonstrated improved capability in rocket launch detection, command and control, and counterfire necessary for countering the threat from North Korean multiple rocket artillery with a system called the Automated Deep Operations Coordination System (ADOCS). Following the demonstration, the Army—the lead service for the project—decided not to formally acquire the technologies since it was pursuing a similar development program. However, the Navy, the Air Force, and the United States Forces, Korea, have acquired and deployed their own unique versions of the software. The military services may not want to fund technologies focusing on meeting joint requirements either because they do not directly affect their individual missions and/or because there are other service-specific projects that the services would prefer to fund. At the same time, OSD officials told us that they lack a mechanism for ensuring that decisions on whether to acquire items with proven military utility are made at the joint level, and not merely by the gaining organizations, and that these acquisitions receive the proper priority.
DOD’s Joint Requirements Oversight Council, which is responsible for validating and prioritizing joint requirements, plays a role in deciding which ACTD nominees are selected for demonstration, but it does not have a role in the transition decision process and is not currently concerned with transition outcomes. Moreover, no other DOD organization appears to have been given authority and responsibility for decisions regarding joint acquisition, integration, and support issues. Another factor hindering transition funding has been the lack of alignment of the ACTD transition process with the DOD planning process. The planning process requires the services/agencies to program funds for technology transition long before the services/agencies assuming transition responsibilities know whether a candidate technology is useful to them. Consequently, at times, the services/agencies had to find funds within their own budgets to fund the transition. ACTD Management The problem of not involving staff with the appropriate expertise to carry out demonstrations and transition planning—in all phases of the ACTD process—may also affect ACTD outcomes. OSD’s guidance recommends that ACTDs use Integrated Product Teams to organize and conduct ACTDs. Integrated Product Teams bring together different skill areas (such as engineering, purchasing, and finance). By combining these areas of expertise into one team, there is no need to have separate groups of experts work on a product sequentially. We have reported in the past that this practice improved both the speed and quality of the decision-making process in developing weapon systems. Conversely, not involving the acquisition, test, and sustainment communities precludes the opportunity for OSD to understand during the demonstrations the significant issues that will arise after transition. In some cases, ACTD projects did not employ a “transition manager” as called for by OSD’s guidance.
This manager, working for the service or the agency leading the demonstration, is to prepare the transition plan and coordinate its execution. When a manager was not designated, these duties often fell to a technical manager, who was primarily responsible for planning, coordinating, and directing all development activities through the demonstration. One ACTD—the Human Intelligence and Counterintelligence Support Tools—experienced high turnover in the “operational manager” position. Specifically, it had five different operational managers over its life. The operational manager, who represents the ACTD sponsoring command, is responsible for planning and organizing demonstration scenarios and exercises, defining a concept of operations for the ACTD, assessing whether the project has military utility, and making recommendations based on that assessment. In addition to not involving the right people, at times ACTDs simply did not anticipate issues important to a successful transition early in the process. OSD’s guidance calls on teams to prepare a transition strategy that includes a contracting strategy and addresses issues such as interoperability, supportability, test and evaluation, affordability, funding, requirements, and acquisition program documentation. The guidance also suggests that the transition strategy anticipate where in the formal acquisition process the item would enter (e.g., low rate initial production or system development and demonstration) or even whether the item could be acquired informally, for example, through small purchases of commercially available products. Specifically, the lead service has the responsibility to determine the transition timing, nature, and funding methodology. In two ACTDs, a transition strategy was never developed. Both of these projects ended up transitioning only as residual assets. The 1998 Congressional Budget Office study identified similar problems with transition planning. 
The study specifically noted that while DOD calls for each management plan to include some discussion of possible acquisition costs, few plans did so. The Congressional Budget Office asserted that this was probably because so little was known about a project’s future at its start. Even when more was known later in the demonstration, however, plans remained sketchy. Software Challenges Software technologies present special planning challenges for transition. Because of the fast-paced nature of advanced technology, it is critical to move software ACTD projects through the demonstration and transition phases quickly so that they are not outdated by the time they are acquired or integrated into existing software programs and databases. At the same time, transition can be slowed by incompatibilities between the operating systems and/or languages of the ACTD technologies and those of the intended host. Integration can be difficult because newer applications, particularly commercial-off-the-shelf systems, may be built to different technical standards or use different languages or supporting programs. Several ACTDs encountered technical difficulties in integrating the new technologies into their intended platforms. For example, the Adaptive Course of Action project tested software tools intended to enhance DOD’s Global Command and Control System (GCCS), specifically by facilitating near real-time collaborative joint planning by multiple participants during crisis action planning. In this case, transition has been slowed and may not occur because the software module cannot be easily integrated into GCCS (partially due to its use of a different database program) and DOD has not analyzed other functionality and security issues associated with adding the new module.
In another project, Battlefield Awareness and Data Dissemination, which focused on providing a synchronized, consistent battlespace description to warfighters, the transition had a mixed outcome. One collection of software applications was successfully transitioned to GCCS, but the transition of others was not as successful. The software application that was successfully integrated was an update of existing GCCS applications and the developers of the software had good working relationships with GCCS managers. The software that experienced problems was not as compatible. Military Utility Assessments Another factor potentially affecting the outcomes of ACTDs is the lack of specific criteria for making assessments of military utility. These assessments evaluate the technologies of ACTD projects after the demonstrations. It is important that OSD have some assurance that the assessments are fact-based, thorough, and consistent, because they provide the basis upon which the military users can base their transition recommendations. OSD’s guidance calls for measures of effectiveness and performance to help gauge whether an item has military utility. It defines measures of effectiveness as high-level indicators of operational effectiveness or suitability and measures of performance as technical characteristics that determine a particular aspect of effectiveness or suitability. But the guidance does not suggest how detailed the measures should be, what their scope should be, or what format they should take. Consequently, we found that the scope, content, and quality of military utility assessments varied widely. For some of the ACTDs we reviewed, no documentation on military utility could be found. Without more specific criteria, customized for each ACTD, there is a risk that decisions on whether to acquire an item will be based on unsound data. 
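The distinction the guidance draws between measures of effectiveness and measures of performance can be illustrated with a simple record structure. The sketch below is purely hypothetical, not an OSD format: every class, field, and name is an illustrative assumption. It shows how each measure could carry an agreed criterion and a documented result before a transition recommendation is made, addressing the documentation gap described above.

```python
# Hypothetical sketch of a military utility assessment record.
# This is NOT an OSD or DOD data format; all names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str       # what is being measured
    kind: str       # "effectiveness" (high-level operational indicator)
                    # or "performance" (technical characteristic)
    criterion: str  # the standard agreed on before the demonstration
    result: str = ""  # observed outcome, recorded for the transition decision

@dataclass
class MilitaryUtilityAssessment:
    actd_name: str
    measures: list = field(default_factory=list)

    def documented(self) -> bool:
        # A transition decision should rest on measures that all have
        # recorded results; an empty or partially filled assessment fails.
        return bool(self.measures) and all(m.result for m in self.measures)
```

Under this sketch, an assessment with no measures, or with measures lacking recorded results, would be flagged as an unsound basis for an acquisition decision.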
Initiatives Are Underway to Improve ACTD Outcomes DOD has undertaken several initiatives to improve the ACTD process, including adopting criteria to ensure technology is sufficiently mature; evaluating how the ACTD process can be improved; and placing more attention on transition planning and management (rather than on simply the selection and demonstration phases) through additional guidance, training, and staffing. These initiatives target many of the problems that can hinder success; however, DOD has not addressed the need to establish specific criteria for assessing the military utility of each of the candidate technologies and to establish a mechanism to ensure funding is made available for the transition. Specifically, DOD headquarters, commands, military services, and a defense agency have undertaken the following efforts. OSD has adopted the same TRL criteria for fiscal year 2003 ACTD projects that DOD uses for assessing technical risks in its formal acquisition programs. These criteria apply to hardware as well as software. Adhering to this standard should help DOD to determine whether a gap exists between a technology’s maturity and the maturity demanded for the ACTD. TRLs measure readiness on a scale of one to nine, starting with paper studies of the basic concept, proceeding with laboratory demonstrations, and ending with a technology that has proven itself on the intended item. According to a senior OSD official, projects must be rated at least at TRL 5 when they enter the demonstration phase. This means that the basic technological components of the item being demonstrated have been integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. An example would be when initial hand-built versions of a new radio’s basic elements are connected and tested together. 
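The TRL 5 entry gate described above can be sketched as a simple readiness check. This is an illustrative sketch only, not DOD code; the function and variable names are our own assumptions, and the one-line level summaries paraphrase the standard TRL scale defined in appendix I.

```python
# Illustrative sketch (not DOD code) of the TRL 1-9 scale and the
# demonstration-entry gate described in the text. Summaries paraphrase
# the standard scale; names and structure are assumptions.

TRL_SUMMARIES = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component/breadboard validation in a laboratory environment",
    5: "Component/breadboard validation in a relevant (simulated) environment",
    6: "System/subsystem prototype demonstration in a relevant environment",
    7: "System prototype demonstration in an operational environment",
    8: "Actual system completed and qualified through test and demonstration",
    9: "Actual system proven through successful mission operations",
}

# Per OSD guidance, projects should enter the demonstration phase at TRL 5 or above.
DEMONSTRATION_GATE = 5

def ready_for_demonstration(trl: int) -> bool:
    """Return True if a technology meets the demonstration-entry threshold."""
    if trl not in TRL_SUMMARIES:
        raise ValueError(f"TRL must be 1-9, got {trl}")
    return trl >= DEMONSTRATION_GATE
```

As the next paragraph notes, the gate was not absolute in practice: some fiscal year 2003 candidates at TRL 4 were accepted because the need for them was compelling.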
We reviewed submissions for the final 16 fiscal year 2003 ACTD candidates and found that actual and projected TRLs of each technology ranged from 4 to 9. According to a senior OSD official, during the review of fiscal year 2003 candidates, some technologies with a TRL rating of 4 were accepted for demonstration because the need for them was compelling. In early 2002, OSD reviewed the ACTD process to examine current ACTDs for relevancy in a changing military environment and identify ways to make sure projects are value-added as well as to enhance transition. The results of this review included recommendations for additional discipline and informational requirements in the ACTD candidate selection phase, increased program management focus on the execution phase, and more emphasis on management oversight. OSD has also designated a staff member to manage transition issues and initiated a training program for future ACTD managers. This training will emphasize technology transition planning and execution. To enhance future technology transitions, OSD has taken action to better align ACTD selection with the DOD planning and programming process. Moreover, OSD has issued new guidance for the fiscal year 2004 ACTD candidates that calls on the gaining military services or defense agencies to identify funds specifically for the demonstration and the transition, appoint a dedicated transition manager, and develop a transition plan before OSD will approve future ACTD candidates. The combatant commanders, military services, and a defense agency are also strengthening their guidance for conducting ACTDs. For example, the U.S. European Command has updated its guidance and the U.S. Joint Forces Command has developed detailed guidance for selecting and managing ACTDs. Additionally, the U.S. Pacific Command has developed definitive policies, procedures, and responsibilities for sponsoring and co-sponsoring ACTD programs. The U.S.
Special Operations Command issued a policy memorandum for ACTD participation. The Army has begun development of an ACTD tracking system. It is also requiring ACTD candidate submissions to include TRL and other quantitative information. The Air Force has drafted both a policy directive and an instruction regarding ACTDs. The four services have begun meeting among themselves to discuss and review their future ACTD candidates. The Defense Information Systems Agency is also engaged in an effort to improve the transition of software technologies to users of systems such as GCCS. Collectively, these efforts target many of the factors that can impede the ACTD process. However, OSD has not yet taken steps to develop specific criteria for assessing whether each of the ACTD candidates meets military needs. More guidance in this regard, particularly with respect to the scope and depth of these assessments and the need to document their results, can help make sure that (1) decisions are based on sound information and (2) items that could substantially enhance military operations are acquired. Moreover, while OSD is requiring services and agencies to identify funds for demonstration and acquisition early in the process, it does not have a mechanism for ensuring that this funding will be provided. As a result, it may continue to experience difficulty in getting the services to fund projects that meet joint needs but do not necessarily fit in with their own unique plans. Conclusions The ACTD process has achieved some important, positive results in terms of developing and fielding new technologies to meet critical military needs quickly and more cost-effectively. DOD recognizes that further improvements are needed to increase opportunities for success. Its efforts to strengthen assessments of technology readiness and management controls—combined with more consistent, fact-based assessments of military utility—should help ensure that the ACTD program will produce better candidates.
However, DOD’s initiatives will be challenging to implement since they require decision makers to balance the need to preserve creativity and flexibility within the ACTD process against the need for structure and management control. Moreover, to fully capitalize on the improvements being made, DOD needs to ensure that the services sustain their commitment to projects, especially those shown to meet critical joint military needs. This will also be a challenge because it will require DOD to overcome the services’ and agencies’ cultural resistance to joint initiatives and its lack of a programming and funding process for joint acquisitions. A good starting point may be to require the services and agencies to designate funding for ACTD transition activities and to have the Secretary of Defense weigh in on decisions on whether to continue to acquire technologies that are tested and proven under the ACTD program. Recommendations for Executive Action To ensure that transition decisions are based on sufficient knowledge, we recommend that the Secretary of Defense develop and require the use of specific criteria for assessing the military utility of each of the technologies and concepts that are to be demonstrated within each ACTD. The criteria should, at a minimum, identify measurement standards for performance effectiveness and address how results should be reported in terms of scope, format, and desired level of detail. To ensure funding of the transition and its aftermath, we recommend that the Secretary of Defense explore the option of requiring the services or defense agencies to develop a category within their budgets specifically for ACTD transition activities, including procurement and follow-on support.
To ensure that transition decisions reflect DOD’s priorities, we recommend that the Secretary of Defense require that the lead service or defense agency obtain the concurrence of the Secretary’s designated representative on any decision not to transition an ACTD that is based on joint requirements and determined to be militarily useful. Agency Comments and Our Evaluation In commenting on a draft of this report, DOD generally concurred with the first two recommendations and outlined the actions to be taken to (1) define ACTD measurement standards and reporting formats for military utility assessments and (2) work with the services to enhance their ability to enable follow-on transition and support of ACTD products. DOD partially concurred with our recommendation on the transition of militarily useful technology intended to address joint requirements. DOD stated that it would work to provide more information to the Joint Staff on specific ACTD results and evaluate quarterly meetings between the service acquisition executives and the Under Secretary of Defense for Acquisition, Technology and Logistics as a possible forum to raise issues on specific ACTDs. These actions may not address the intent of the recommendation, which is to give the joint warfighter the opportunity to influence DOD’s investment decisions. The ACTD program offers a good opportunity in the DOD acquisition system to evaluate equipment and concepts in the joint warfighting environment. However, while ACTDs often start based on a joint requirement, that perspective and priority may change when it comes to transition issues. For the DOD actions to effectively address this condition, the joint perspective should be more effectively represented in ACTD transition issues. DOD’s comments are reprinted in appendix II. Scope and Methodology Between fiscal year 1995 and 2002, DOD initiated 99 ACTDs. As we began our review, 46 of these had completed their demonstration phase or had been canceled.
We reviewed 24 of these in detail. We could not review the remainder to the same level of detail because their military utility assessments were incomplete or unavailable and because we chose not to present information on highly classified projects. To assess the results of the completed ACTDs, we examined each project’s military utility assessment documents, final program reports, lessons learned reports, and other pertinent ACTD documents, such as the program acquisition strategies. We interviewed operational and technical managers and other knowledgeable program officials at the unified combatant commands, defense agencies, and the services to discuss the phases of each ACTD project and its transition status. Specifically, we interviewed officials at the Science and Technology Office of the United States Pacific Command, Camp Smith, Hawaii; the European Command, Stuttgart, Germany; the Central Command, Tampa, Florida; the Special Operations Command, Tampa, Florida; the Joint Forces Command, Norfolk, Virginia; the Air Combat Command, Hampton, Virginia; the Army Training and Doctrine Command, Hampton, Virginia; and the Marine Corps Warfighting Lab, Quantico, Virginia. We also contacted ACTD officials at the Program Executive Office of the Air Base and Port Biological Program Office, Falls Church, Virginia; the Defense Information Systems Agency, Falls Church, Virginia; the Defense Advanced Research Projects Agency, Arlington, Virginia; the Defense Threat Reduction Agency, Fort Belvoir, Virginia; and the Defense Intelligence Agency, Arlington, Virginia. To determine the factors that affected the transition outcomes of completed ACTD projects, we met with the operational and technical managers for each ACTD as well as other knowledgeable program officials and the designated ACTD representatives from each of the services.
We compared information gathered on the individual ACTDs to discern those factors that were salient in a majority of the cases. In order to better understand ACTD program guidance, funding, and management that can affect transition outcomes, we spoke with relevant officials within the office of the Deputy Undersecretary of Defense, Advanced Systems and Concepts (DUSD (AS&C)), including staff responsible for funding and transition issues, and the Executive Oversight Manager for each ACTD. We also discussed ACTD management and transition issues with representatives of the DUSD (AS&C), Comptroller; the Joint Staff; and the Director, Defense Research and Engineering; the Defense Advanced Research Projects Agency; and the Defense Information Systems Agency. We did not conduct a detailed review of the users’ acceptance of or satisfaction with the items produced by the ACTD process. We conducted our review between October 2001 and October 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on Armed Services; and the Subcommittee on Defense, House Committee on Appropriations; and the Secretaries of Defense, the Army, the Navy, and the Air Force. We are also sending copies to the Director, Office of Management and Budget. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please contact me at (202) 512-4841. Others who made key contributions to this report include William Graveline, Tony Blieberger, Cristina Chaplain, Martha Dey, Leon Gill, and Nancy Rothlisberger.

Appendix I: Technology Readiness Levels and Their Definitions

1. Basic principles observed and reported. Lowest level of technology readiness. Scientific research begins to be translated into applied research and development. Examples might include paper studies of a technology’s basic properties.

2. Technology concept and/or application formulated. Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies.

3. Analytical and experimental critical function and/or characteristic proof of concept. Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.

4. Component and/or breadboard validation in a laboratory environment. Basic technological components are integrated to establish that the pieces will work together. This is relatively “low fidelity” compared to the eventual system. Examples include integration of “ad hoc” hardware in a laboratory.

5. Component and/or breadboard validation in a relevant environment. Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include “high fidelity” laboratory integration of components.

6. System/subsystem model or prototype demonstration in a relevant environment. Representative model or prototype system, which is well beyond the breadboard tested for technology readiness level (TRL) 5, is tested in a relevant environment. Represents a major step up in a technology’s demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in a simulated operational environment.

7. System prototype demonstration in an operational environment. Prototype near or at planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in an operational environment, such as in an aircraft, vehicle, or space. Examples include testing the prototype in a test bed aircraft.

8. Actual system completed and qualified through test and demonstration. Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.

9. Actual system proven through successful mission operations. Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. In almost all cases, this is the end of the last “bug fixing” aspects of true system development. Examples include using the system under operational mission conditions.

Appendix II: Comments from the Department of Defense
The Advanced Concept Technology Demonstration (ACTD) program was started by the Department of Defense (DOD) as a way to get new technologies that meet critical military needs into the hands of users faster and at less cost. GAO was asked to examine DOD's process for structuring and executing ACTDs. Since the ACTD program was started in 1994, a wide range of products has been tested by technology experts and military operators in realistic settings--from unmanned aerial vehicles, to friend-or-foe detection systems, to biological agent detection systems, to advanced simulation technology designed to enhance joint training. Many of these projects have successfully delivered new technologies to users. In fact, 21 of 24 projects we examined that were found to have military utility delivered at least some technologies to users that meet military needs. Though the majority of the projects we examined transitioned technologies to users, several factors hamper the ACTD process. For example, (1) technology has been too immature to be tested in a realistic setting, leading to cancellation of the demonstration; (2) military services and defense agencies have been reluctant to fund acquisition of ACTD-proven technologies, especially those focusing on joint requirements, because of competing priorities; and (3) ACTDs' military utility may not have been assessed consistently. Some of the barriers we identified can be addressed through efforts DOD now has underway, including an evaluation of how the ACTD process can be improved, adoption of criteria to ensure technology is sufficiently mature, and more attention on the end phases of the ACTD process. Other barriers, however, will be much more difficult to address in view of cultural resistance to joint initiatives and the requirements of DOD's planning and funding process.
Background LB&I is responsible for the tax compliance of partnerships and S and C corporations with assets of $10 million or more, as well as individuals with high wealth or international tax implications. LB&I reports that its taxpayers typically employ large numbers of workers, deal with complicated issues involving tax law and accounting principles, and conduct their operations in an expanding global environment. According to IRS, these LB&I taxpayers filed 352,264 corporate and partnership tax returns in fiscal year 2015. LB&I’s stated mission is to provide taxpayers “quality service by helping them understand and meet their tax responsibilities and by applying the tax law with integrity and fairness to all.” In supporting that mission, LB&I audits tax returns to determine whether taxpayers correctly report information such as income, expenses, and credits. During an audit, LB&I staff review a taxpayer’s books and records. The objective of audits, in turn, is to promote the highest degree of voluntary taxpayer compliance. In 2015, IRS reported that LB&I completed audits on more than 11 percent of large corporations—those with assets in excess of $10 million. By comparison, the rate was 0.9 percent for all other corporations and 0.8 percent for individual returns. LB&I has nine audit components focusing on five practice areas and four geographical areas. Figure 1 shows LB&I’s organizational structure for audit activities. Each practice area has Planning and Special Programs (PSP) staff, who are responsible for controlling, monitoring, and assigning audit inventory to field groups. The term “selection methods” refers to all of the programs that LB&I uses to identify and review tax returns to include in the pool of possible audits, as well as the decisions made to audit tax returns by auditors and audit managers in LB&I field offices. LB&I uses a variety of methods to select returns for audit. 
Appendix 2 contains more detail on the selection methods that LB&I officials provided to us, including the methods that we focus on in this report. Figure 2 provides a conceptual overview, based on our analysis of LB&I documentation and interviews with relevant officials, of how LB&I narrows the pool of tax returns for audit consideration, including how LB&I uses its audit selection methods. After a tax return is filed with IRS, selection methods that involve computerized scoring models and filters identify tax returns that are likely to have compliance issues. According to LB&I officials, the higher the score, the greater the likelihood that a tax change will result from an audit. Computerized selection methods also may identify tax returns with specific compliance concerns, such as a particular value or combination of values reported on certain tax return lines. According to IRS documentation, LB&I picks returns with specific issues for compliance initiative projects (CIP) or returns that are mandated for audit, such as refund returns that are subject to Joint Committee on Taxation review. IRS also has a program, in which LB&I participates, to identify tax returns with known abusive tax schemes. LB&I officials said that they give additional scrutiny to individual tax returns with certain international tax issues. After returns are scored by computers or pulled for special projects and mandatory work, LB&I conducts another review called classification, in which LB&I staff identify whether the return merits an audit as well as specific issues for audit consideration. This portion of the process is focused on identifying potential audit issues on returns that are already considered at risk for noncompliance. After the identified returns have been classified or otherwise reviewed for specific tax issues, they are listed in a queue for audit managers to assign to auditors, as shown in the bottom of the funnel in figure 2. 
Auditors in the field assess whether the queued returns have large, unusual, or questionable (LUQ) features. According to LB&I officials, other factors that guide which returns from the queue are selected include targets set in LB&I’s annual audit plan, which prioritizes tax return types and tax issues, as well as resources and auditors’ skills and experience. Even if a return is ultimately selected for audit, auditors or their managers may decide upon closer examination not to proceed with the audit, a process called surveying. In addition to the process shown in figure 2, LB&I may initiate an audit based on taxpayers’ requests to amend their own returns, a special type of audit LB&I calls a claim. LB&I also may begin an audit based on facts from an ongoing audit, called a related pick-up. Once selected, LB&I audits fall into two categories: Coordinated Industry Cases (CIC) or Industry Cases (IC). According to LB&I officials, the CIC program puts large enterprises under continual audit. LB&I categorizes tax returns as CIC based on factors that include assets, gross receipts, and operating entities. CIC taxpayers are audited by a team of LB&I staff, while IC returns usually are audited by a single auditor. Generally, LB&I officials said they use their most experienced and highest graded auditors to review the tax returns of LB&I taxpayers to address issues that are often complex, involving multiple years and potentially ambiguous laws, regulations, or related guidance in determining the correct tax treatment. Auditors may be assisted by specialists to help review technical issues, such as transactions that are international in scope and raise valuation issues.
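The narrowing process described above — computerized scoring, classification, queuing, and field assignment — can be sketched in Python. Everything here (the threshold, the scoring rule, and all field names) is a hypothetical illustration of the funnel's shape, not IRS's actual models or data:

```python
from dataclasses import dataclass, field

@dataclass
class Return:
    """A tax return moving through a hypothetical selection funnel."""
    taxpayer_id: str
    risk_score: float                       # higher = greater modeled likelihood of a tax change
    issues: list = field(default_factory=list)

def score(pool, threshold=0.7):
    """Stage 1: computerized models keep returns scoring above a risk threshold."""
    return [r for r in pool if r.risk_score >= threshold]

def classify(scored):
    """Stage 2: classifiers flag specific issues; returns with none drop out."""
    return [r for r in scored if r.issues]

def assign(queue, capacity):
    """Stage 3: field managers work the queue down to auditor capacity,
    simplified here to 'highest score first' in place of LUQ judgment."""
    return sorted(queue, key=lambda r: r.risk_score, reverse=True)[:capacity]
```

For example, in a pool where only return “A” both scores above the threshold and has a flagged issue, `assign(classify(score(pool)), capacity=10)` would yield only “A”; a surveyed return would simply be dropped after this point.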
LB&I is in the process of changing the way it addresses compliance, including how it identifies tax returns for audit, and is moving toward implementing issue-based projects it calls “campaigns.” According to LB&I, a campaign is a compliance project focused on a specific compliance issue, such as partnerships underreporting income, rather than on using characteristics of the whole tax return for audit consideration. According to LB&I officials, campaigns could consist of an audit, or a less burdensome treatment, such as letters asking taxpayers to consider changing how they report the issue or additional guidance to help taxpayers accurately report the issue on their returns. LB&I first released its plan for adopting campaigns in late 2014 and announced the initial 13 issues for campaigns in January 2017. According to its plan, LB&I developed the campaign approach because of an increasingly difficult tax environment in which its budget and resources are shrinking and tax laws are growing more complex. While LB&I implements campaigns, officials said the existing selection methods it uses will continue to operate until LB&I decides whether to replace them. LB&I officials also said that existing selection methods may be repurposed to operate within campaigns as well. For example, they said that a computer filtering effort previously conducted as a standalone project could be used to identify tax returns for audit under a specific campaign. LB&I officials said they have no set date for terminating the selection methods and starting the campaign process because developing the campaign process is iterative.
LB&I Documentation on Audit Selection Methods We Reviewed Generally Reflected Some, but Not All, Internal Control Principles In reviewing LB&I’s methods for identifying and selecting tax returns for audit, we determined that LB&I’s related documentation generally reflected 4 of the 10 internal control principles we reviewed but was incomplete for the remaining 6 principles. Without complete documentation, LB&I lacks reasonable assurance that selection methods are being implemented as designed and therefore cannot be sure that its return selection processes and procedures are supporting its objectives. Documentation Generally Reflected Internal Control Principles on Ethics, Organizational Structure, Commitment to Competence, and Implementation of Control Activities for Selection Methods Reviewed LB&I has documented a commitment to promoting ethical behavior among staff, which provides some high-level assurance that the way it selects returns for audit may contribute to its strategic goal of treating taxpayers with integrity and fairness. For example, classifiers who identify whether a tax return should be considered for audit and which items on the return merit audit attention are prohibited from auditing those returns and from assigning them to specific auditors. Also, IRS’s ethics training and an annual certification of that training help to assure that IRS staff members are aware of the need to act ethically and impartially. All LB&I staff were certified as successfully completing the training in 2015, the latest available data. In addition, LB&I provided documentation to indicate that all of the selection methods we reviewed have a defined structure, and designated persons have the necessary responsibility and delegated authority to do their jobs in meeting the selection objectives.
The documentation for all of the reviewed selection methods showed which LB&I staff members have been assigned responsibility for selecting returns for audit and have been delegated authority by management to oversee the process, including identifying and reviewing the potential returns and then selecting returns for audit. For example, once the Global High Wealth (GHW) unit identifies an individual taxpayer for possible audit, the related returns, such as partnership and S corporation returns, are linked together, and a classifier is responsible for assessing the compliance risk on the return. A manager is tasked with overseeing this work before it is sent to the field for audit. LB&I documented its commitment to competence for staff members involved with audit selection. Congress enacted and the President signed a statute in 2004 that gives federal agencies additional flexibility to help recruit new staff and retain employees with needed skills by providing enhanced recruitment and retention bonus authority. With many LB&I employees close to retirement age or considering leaving, and with hiring limited by budget constraints, LB&I officials said these provisions provide them with additional tools to help meet the division’s human capital needs and assure that the necessary skills are retained in its selection workforce. In terms of training, LB&I’s procedures and manuals generally documented its training to help assure the competence of staff involved in audit selection. IRS has courses to teach key staff about needed basic skills. For example, revenue agents—among the highest graded IRS auditors—are taught to look for returns with LUQ items that may merit an audit. In addition, the documentation showed training to instruct these auditors and other staff about specific knowledge to consider when reviewing returns for potential audit.
Finally, LB&I generally has documented the goals of and responsibilities related to the selection methods we reviewed to assure that the objectives and related risks are addressed. The documentation across the selection methods generally identified who is responsible for reviewing procedures to assure that the goals of the selection method are met. For example, the filtering selection method has detailed documentation describing the use of filters to identify the returns with the highest compliance risk, and the role of managers in reviewing the returns that have been selected. LB&I Documentation for Selection Methods We Reviewed Did Not Generally Reflect Six Other Internal Control Principles Table 1 shows gaps in the documentation related to 6 of the internal control principles for all of the audit selection methods we reviewed. For all six principles we reviewed, the documentation showed some support for adherence for most of the selection methods. In summary, LB&I provided documentation showing that the reviewed selection methods generally reflected the six specified internal control principles to some extent. The evidence provided, however, did not completely document adherence to all parts of each principle. The gaps in documentation on these six principles leave LB&I vulnerable to inconsistently selecting tax returns for audit, or the perception of it. Throughout our work, LB&I officials sought clarification on what kind of documentation would generally reflect the internal control principles and acknowledged that they would look to add additional documentation. Without complete documentation, LB&I cannot be assured that its existing audit selection methods are being used consistently. LB&I Does Not Have a Standard Process for Monitoring Audit Selection Decisions LB&I does not have a process to monitor the final decisions about which tax returns will be audited.
In general, field managers and their auditors make the ultimate audit selection decisions about the tax returns, which generally have been reviewed by other IRS staff for audit consideration. Although our discussions with LB&I staff indicated that some of these audit selection decisions may be reviewed at the discretion of the managers, LB&I’s procedures do not document a systematic, standard process to regularly monitor field audit decisions. In addition, LB&I does not have standardized criteria to explain the reasons for selecting a return for audit, which would be necessary to regularly monitor audit selection decisions. Lacking a standard monitoring process for audit selection decisions is not consistent with internal control standards for monitoring. Under internal control principle 16, management should establish and operate activities to monitor the internal control system and evaluate the results. Such monitoring may be built into operations and activities and done continually, which could help LB&I respond to change and ensure that the controls align with changing objectives, laws, and risks. It assesses the quality of performance and points to corrective actions necessary to achieve the objectives. LB&I uses an audit monitoring system, but its review procedures and steps do not cover audit selection decisions. The LB&I Quality Measurement System (LQMS) is used to routinely monitor examinations and adherence to technical audit standards. LB&I reviewers analyze a sample of tax returns that were audited and rate the quality of those audits. The technical standards include actions taken after a return has been assigned to an auditor, including planning the audit steps, implementing those steps to collect evidence, and developing audit findings. However, these monitoring activities and standards do not assess the quality of audit selection decisions or whether the decision processes have deficiencies that need to be addressed.
According to officials responsible for the LQMS program, LB&I previously included a sample of surveyed returns in its annual review; however, this was discontinued several years ago, and officials could not recollect why this decision was made. LQMS does not cover the selection decisions because the system was designed to measure the quality of the actual audit activities. The results of LQMS reviews are a part of the Balanced Measurement System, which measures customer satisfaction, employee engagement, and business results, including performance goals related to audits such as their quality. LB&I also has reviews that cover field operations, but these reviews do not require monitoring audit selection decisions. For example, managers in each IRS territory are required to conduct one operational review each year of field audit managers to facilitate discussion and feedback on routine group operations. Each territory review is developed at the discretion of the manager and is not standardized. One LB&I executive we interviewed indicated that when he did these territory reviews, he sometimes asked about how selection decisions were made. However, he acknowledged that he chose to ask those questions and that other territory managers may do their reviews differently. LB&I staff members also conduct Process and Issue Assessments that focus on providing management a better understanding of the processes and procedures being used and the issues being developed in audit, but these reviews do not cover audit selection decisions. During our focus groups, we discussed who reviews and approves the audit selection decisions as well as gives feedback on the quality of the decisions. A number of managers in our focus groups concurred that they have the authority to make most final selection decisions.
Our focus groups with auditors did not indicate that auditors regularly receive feedback on their recommendations to managers on selecting or surveying specific tax returns, although auditors who participated in the groups commented that they sometimes received such feedback. By not monitoring the processes used in the field offices to select specific tax returns for audit, LB&I management risks relying on processes that may lead to inconsistent selection decisions. The lack of routine monitoring of selection decisions also can hinder LB&I management from identifying deficiencies in these processes and evaluating them for remediation. Furthermore, there is a risk that selection decisions may be perceived as not supporting the mission to apply the tax law with integrity and fairness to all taxpayers. Audit Starts and Closures Declined Overall from Fiscal Years 2011 through 2015, but Data on Audit Results in Terms of Dollars Cannot Be Clearly Aligned with All Selection Methods Audit Starts and Closures Declined Overall and Referrals Accounted for Small Portion of Starts and Closures Based on our analysis of IRS data, LB&I audit starts and closures across all selection methods generally declined between fiscal years 2011 and 2015, from 37,443 audit starts in 2011 to 34,180 in 2015 and from 65,794 closures in 2011 to 34,763 in 2015 (see figure 3). IRS officials told us that reductions in staffing over the 5-year period contributed to the overall downward trend in starts and closures. Despite the overall downward trend, audit starts increased between fiscal years 2012 and 2013. IRS officials explained that the increase in audit starts in fiscal year 2013 was caused by an influx of returns in the 2011 and 2012 offshore voluntary disclosure programs. As previously discussed, LB&I uses various selection methods to identify returns for audit.
Figure 4 shows that the seven methods on which our analysis focused accounted for more than half of all LB&I audit starts and closures between fiscal years 2011 and 2015, while the remaining audit starts and closures came from many other selection methods. Compared to the seven methods, these other methods do not require the same level of professional judgment by LB&I staff when selecting returns for audit. For example, the audits can arise from taxpayer claims (such as when a taxpayer requests to be audited for a refund); mandatory work (such as audits to be reviewed by the Joint Committee on Taxation); or related pick-ups (when an auditor begins auditing another tax return based on what is observed in a different audit). Specifically, two selection methods in our analysis collectively accounted for the bulk of LB&I starts and closures. The Other Miscellaneous International Individual Compliance (IIC) method accounted for 30 percent of audit starts and 35 percent of audit closures, while the Offshore IIC method accounted for 21 percent of both starts and closures. The remaining five selection methods in our analysis together accounted for only about 1 percent of closures and 5 percent of audit starts. For a statistical summary of LB&I audit starts and closures by selection method, see appendix III. As illustrated in table 2, the number of audit starts and closures associated with internal and external referrals generally increased over the 5-year period from fiscal year 2011 to 2015, but referrals accounted for a very small portion of LB&I’s starts and closures overall. For example, in fiscal year 2015, LB&I had just under 35,000 audit closures, of which 40 closures—or less than one percent of LB&I audit closures—resulted from referrals.
LB&I Tracks Results for Some of Its Selection Methods but Does Not Clearly Identify All Audit Selection Methods and Align Selection Methods with Audit Results Federal Internal Control Standards state that quality information is vital to achieving agency objectives. These standards further define quality information as being appropriate, current, complete, accurate, and accessible. Management should use quality information to make informed decisions and evaluate performance in achieving key objectives. Unlike our analyses of the data on audit starts and closures, when we tried to analyze LB&I’s audit results data by each selection method, we encountered difficulties that prevented us from easily analyzing the results. An initial difficulty was that LB&I’s selection methods were not clearly defined or documented. LB&I officials originally identified 14 methods that they used to select returns for audit; however, we found a large portion of the division’s full audit inventory could not be categorized within those 14 methods. We worked closely with IRS data managers to identify the correct project and tracking codes and categorize the data by selection method; these steps enabled us to generate reliable data on audit starts and closures. However, other difficulties unrelated to our ability to sort the data by selection method prevented us from analyzing and comparing data on audit results, such as additional tax dollars recommended, for the selection methods. For example: LB&I does not have project or tracking codes for DAS. DAS can only be identified in the data by a source code, while the selection methods can only be identified with project and tracking codes, causing overlap between DAS and other selection methods in the data and preventing direct comparison between DAS and the other LB&I selection methods in terms of the related audit results.
The data cannot link a selection method used for entities such as partnerships and S corporations that pass through their tax liabilities to their partners and shareholders. As a result, the effect of a selection method on changes to tax liability cannot easily be analyzed. The data do not account for net operating losses (NOL), which makes the results of the audit difficult to assess. Linkages between the cost of an audit—such as auditors’ compensation and contractor fees—and audit results were not readily available. This information would be needed to calculate cost-benefit information to compare selection methods. LB&I does periodically track data on results for its selection methods, but we could not rely on that data to support our analysis. For example, LB&I produces a monthly report to track data and performance measures for 9 of the 14 selection methods it originally identified as using. According to LB&I officials, this report is used to analyze the performance of certain selection methods against a baseline of audit results for Industry Case (IC) returns and determine whether a particular method is achieving its objectives. Examples of the measures that LB&I tracks in its monthly report include: number of audits closed; additional tax dollars recommended overall and per audit hour; agreed recommended dollars (dollar amounts in additional tax recommended that taxpayers agreed with); no-change rate (percentage of audits closed without changing the amount of taxes currently owed); and audit cycle time (the time that returns are under audit). While an IRS official told us that the monthly report helps inform decisions on which selection methods to use, several limitations prevent it from being used to compare how LB&I’s selection methods perform. Such comparisons would help inform decisions on whether one method, if given more resources, would achieve better audit results than another in terms of adjustments made and hours invested.
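The measures tracked in the monthly report are straightforward ratios over closed audits. A minimal sketch of how such measures might be computed from closed-audit records (the record layout and field names are hypothetical illustrations, not LB&I's actual data schema):

```python
def no_change_rate(audits):
    """Percentage of closed audits that recommended no change to tax owed."""
    if not audits:
        return 0.0
    no_change = sum(1 for a in audits if a["recommended_dollars"] == 0)
    return 100.0 * no_change / len(audits)

def dollars_per_audit_hour(audits):
    """Additional tax dollars recommended per hour of audit work."""
    hours = sum(a["hours"] for a in audits)
    dollars = sum(a["recommended_dollars"] for a in audits)
    return dollars / hours if hours else 0.0

def average_cycle_time(audits):
    """Average number of days the returns spent under audit."""
    return sum(a["cycle_days"] for a in audits) / len(audits) if audits else 0.0
```

Comparing selection methods would then amount to running these functions over each method's closed audits — which is exactly what the overlapping codes described above make unreliable, since the same audit can fall into more than one method's slice.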
Such limitations with the report include: The data for certain selection methods are not mutually exclusive, meaning comparisons between methods could be complicated by duplication. For example, IRS officials told us that data in the report for the Form 1065 and 1120-S modeling programs overlap, as well as data on GHW selection results. The report excludes key selection methods. Specifically, the two methods that account for the bulk of LB&I’s audit work—Offshore IIC and Other Miscellaneous IIC—for fiscal years 2011 through 2015 are excluded. It also excluded the Compliance and Workload Delivery method, which was included in the initial list of methods that LB&I said it uses. While LB&I staff review the report informally, LB&I has no documented guidance or criteria for how to assess the performance of its workload selection methods in order to make decisions or take actions. The audit results are not arrayed to easily review other potentially important considerations, such as ratios of direct revenue yield per dollar of cost across LB&I selection methods. Without data that align the selection methods with the audit results, LB&I has less assurance that it is allocating its limited resources most effectively as it selects more returns to audit. LB&I officials acknowledged that being able to more easily identify selection methods within the audit inventory would enhance IRS's ability to assess audit results by selection method, an assessment that could be used to inform decision making, as discussed later in the report. LB&I Efforts to Plan and Implement New Compliance Approach to Audits Remain Incomplete LB&I plans to conduct issue-based projects it calls “campaigns” to address taxpayer compliance. Campaigns may involve audits of tax returns or other types of compliance efforts, such as taxpayer outreach or tax form changes. According to LB&I, ideas for campaigns come from staff members who submit proposals, a process which started during our work.
As part of this submission, staff must identify campaign goals, metrics, training, and resource needs. A governing board called the Compliance Integration Council (the Council) is to decide which campaigns are initiated and monitor campaign results. If the campaign includes audits, LB&I may use the same or similar audit selection methods, as discussed previously in this report. The concept for campaigns was established in a plan released in 2014; however, as discussed below, LB&I had not fully implemented that plan as of March 2017 and had not started work on any of the 13 campaigns announced in January 2017. Using our prior work, we identified five key principles for effectively planning new projects and initiatives like LB&I’s new compliance approach involving campaigns, as shown in the first column of figure 5. Although LB&I has made progress in meeting all five principles, figure 5 shows which parts of each principle LB&I’s plans did not meet as of December 2016. Generally, LB&I officials said that planning and implementing campaigns was an iterative process without a baseline for how long the process would take, and, consequently, they adapted as they worked toward fulfilling the five principles. However, by not fully meeting all of the principles, LB&I lacks reasonable assurance that its new compliance approach will succeed in accomplishing LB&I’s overall audit objective of encouraging voluntary compliance and fair treatment of taxpayers. LB&I Did Not Document a Clear Timetable with Deadlines for Carrying Out Its Plan or Establish Metrics for Measuring Progress toward Its Overall Goals LB&I’s plan contains a conceptual roadmap for standing up the operation of its new compliance approach involving campaigns and includes elements such as compliance risk identification and resource allocation.
An LB&I executive said that some of LB&I’s plans have been completed already, such as restructuring, implementing the campaign submission process, and revising audit position descriptions. As noted earlier, LB&I announced an initial list of 13 campaigns in January 2017, and LB&I officials said they will continue to consider new campaigns in the future. The plan does not, however, contain specific dates for implementing the new compliance approach. The absence of specific dates is not consistent with the project planning principles that call for having a plan with a schedule. Elements of the plan that have no specific timetable include establishing criteria for choosing upcoming campaigns or eliminating existing selection methods that campaigns are meant to replace. The plan also has no specific timetable for approving proposed campaigns. LB&I officials said this was because they are transitioning from the traditional selection methods to the campaign approach. The officials said implementing the new campaign approach is iterative in order to make adjustments as they gain experience with campaigns. In March 2017, LB&I officials said they were working on a timeline they believe will be consistent with the project planning principles. Without specific timetables, however, LB&I is less assured that it will stay on track in executing its plan. IRS officials told us that the overall goal for campaigns is preventing noncompliance. LB&I’s guiding principles also specifically say that LB&I will maintain a flexible, well-trained workforce, select better work, use an effective mix of compliance techniques (such as audits), and employ a robust feedback loop. However, LB&I’s plan has not established metrics for measuring progress toward those overall goals, although individual campaign projects are to include measurable goals. Without metrics to track progress across campaigns, LB&I is limited in its ability to determine whether its new approach is meeting its stated goals.
LB&I Did Not Initially Evaluate Human Resource Needs for Implementing Its New Compliance Approach, and Lacks a Documented Plan for Such Analysis Moving Forward

LB&I's plan discusses the need to assess human resources in three ways: skill assessment (ensuring that staff have the proper skills); workforce visibility (ensuring that management understands staff capability and capacity); and issue finalization (the process of deciding which issues will be audited by staff). For example, the plan is intended to "provide a comprehensive, real-time understanding of workforce capability and capacity." However, inconsistent with the planning principles, LB&I officials did not evaluate in the plan the human resource needs for implementing the campaign approach overall, in part because LB&I did not have the ability to measure the resource investment. Based on our discussion with LB&I officials on ways to track resource investments, LB&I approved in January 2017 repurposing an old database code to allow them to analyze staff time charged to preparing campaigns. According to LB&I officials, the data were not available as of March 2017, but they said they plan to use the data to conduct return-on-investment analysis, although no such plan has been documented. Developing and documenting a plan for analyzing how staff time is being used on campaign activities can better position LB&I to determine how it is using resources as it implements its new approach to compliance.

LB&I Did Not Document How Stakeholder Input Was Used or the Lessons Learned through Evaluating Past Performance

LB&I's plan cites a variety of IRS stakeholders involved in developing the campaign process, consistent with planning principles. For example, stakeholders included two commissioners, directors from the field, the General Counsel's office, and IRS's finance and technology offices, as well as executives for topical areas, such as the financial services industry, natural resources construction, and GHW.
LB&I officials also said that these discussions covered past audit performance based on the available data that LB&I had been generating, including Business Performance Reviews (BPR), which list results by several measures for the division overall, and monthly reports that compare certain selection method results with results from audits selected primarily through DAS. However, LB&I officials said the discussions with stakeholders to formulate the campaign approach were not meant to be formal, and the stakeholder input and any lessons learned from evaluating past performance were not specifically documented, as called for in internal control standards. Without documentation of those discussions and evaluations, LB&I cannot demonstrate for future reference that it has leveraged the lessons learned and the contributions made by stakeholders.

LB&I Plans to Monitor Individual Campaign Performance but Its Plan Does Not Cover Monitoring Across Campaigns or Identify Criteria for Choosing Selection Methods for Particular Projects

According to its plans, LB&I intends to monitor how individual campaigns progress. If implemented, these division-level monitoring efforts would help align campaigns with LB&I audit goals. In particular, LB&I plans to make evaluating issue selection part of its performance feedback loop to refine key models and decision points to improve issue selection, as shown in the "adapt" portion of figure 6. The system of analysis that LB&I plans partially satisfies the fourth project planning principle by setting up a monitoring process on the performance of individual campaigns. However, these plans do not address evaluating the performance of selection methods used across campaigns. Furthermore, the data analyses that LB&I has used to monitor the performance of its selection methods are not sufficient to compare results from campaigns using audits because of data problems discussed earlier.
For example, the reports used to monitor selection methods had overlapping categories, and the selection methods themselves were not always clearly identified in the data. LB&I officials said that the way LB&I captures audit data makes it challenging to compare audit results. Without analyzing and monitoring results by selection methods across campaigns, LB&I faces a greater risk of not using the most effective selection method within its campaigns. LB&I's plan also did not include measuring costs, such as auditor pay, travel expenses, and the use of specialists, that could be compared to the effectiveness of a selection method used in a specific campaign. Nor does the plan include an estimate for how much LB&I would spend on campaigns overall. According to LB&I officials, this level of detail was not deemed necessary when the plan was written. In 2012, we found that IRS could more effectively target audit resources by measuring the marginal benefit and costs of auditing certain tax returns. Research by IRS and other experts has found that although it may be complex, a marginal cost-benefit analysis could help IRS allocate resources to increase net revenues. LB&I faces several challenges in its efforts to monitor progress. First, the planned database for monitoring individual campaigns is the Issue Based Management Information System (IBMIS), which is populated with data from the Issue Management System (IMS), the Audit Information Management System, and the Specialist Referral System. The Treasury Inspector General for Tax Administration (TIGTA) found reliability issues with IMS in 2016. In response to TIGTA, an LB&I compliance executive said the division had assembled a team to fix the IMS issues TIGTA identified and plans to improve the issue codes needed for evaluation, though those efforts were not complete as of January 2017.
LB&I cannot be assured that it will draw appropriate conclusions about improving compliance through campaigns until the underlying data are in better order. Furthermore, as previously discussed, LB&I officials said existing selection methods will continue to operate until LB&I decides whether to replace them or repurpose them to operate within campaigns. If LB&I chooses to discontinue any selection methods once campaigns are fully implemented, it would not make sense to compare them to other methods. LB&I officials also said some selection methods should not be compared. For example, they said it may not be appropriate to compare selection methods used to choose an audit of a large corporation, which may take years and require multiple staff members, with those used for the audit of a high wealth individual, which takes less time and fewer resources. The range of taxpayers covered by the methods also differs. For example, the tax shelter methods rely on disclosures from the public, making the possible universe of coverage small compared to DAS, which is applied to all Form 1120 submissions. LB&I officials said they had received more than 700 campaign proposals by December 2016, but IRS has not developed criteria for choosing the most effective audit selection methods for campaigns with audits beyond the discretion of the Council, which IRS deemed sufficient. As of March 2017, LB&I officials said they were developing such criteria. Without criteria to choose audit selection methods for campaigns using audits, LB&I lacks reasonable assurance that the campaigns will meet LB&I's audit objectives.

LB&I Intends to Address Potential Risks but Plans Lack a Specific Timetable and Metrics for Risk Mitigation

LB&I officials said they held internal discussions about potential risks and have a plan stating LB&I intends to analyze risks as the campaigns are implemented.
Areas of risk that LB&I has identified include an increasingly difficult environment in which its budget and resources are shrinking, tax laws are growing more complex, and taxpayers are continuing to evolve. In addition, part of the Council's mission in overseeing and analyzing campaigns is to discuss and make decisions on risks. While LB&I's plans to assess risk associated with campaigns show progress toward meeting the fifth project planning principle, LB&I officials did not provide us with documentation to support how the planned risks will be assessed and mitigated. Given the data limitations LB&I faced when it was developing the plan, officials would have been challenged to analyze the risks identified. To mitigate specific risks, LB&I plans to identify and develop staff with needed skills, create a function to conduct environmental scans, and develop the ability to gather, manage, and analyze data. These plans for analyzing and mitigating risks, however, lack a set timetable and do not include specific metrics for assessing whether progress is being made toward goals. Without these metrics and a timetable for developing them, LB&I will be less assured that it is addressing risks faced by audit-focused campaigns.

Conclusions

LB&I has documented policies and procedures that generally reflect 4 of the 10 internal control principles that we reviewed. However, gaps in documentation related to 6 of the principles leave LB&I without reasonable assurance that its selection methods are being implemented as designed and that tax return selection supports the division's audit objectives. Ensuring that the policies and procedures of audit selection methods are fully documented will continue to be important for LB&I as it implements its new campaign approach for selecting audits.
Similarly, LB&I’s lack of a standard process for monitoring field-level selection decisions, the most direct step in audit selection, may hinder management’s ability to identify any inconsistencies across decisions and remediate any deficiencies in its audit processes. LB&I’s efforts to plan and implement its new compliance approach have partially met five key principles for effectively planning projects. However, opportunities exist to make improvements. In particular, LB&I has not fully established a specific timetable for implementing its new approach overall or completed plans to monitor those projects overall—only individual projects. LB&I also faces challenges in ensuring that data to conduct the monitoring is sufficient to assess any selection methods used in the new compliance approach moving forward. Without taking the steps to fully meet all five planning principles in implementing its new approach, LB&I management will lack reasonable assurance that its new compliance approach will succeed in accomplishing LB&I’s overall audit objective of encouraging voluntary compliance and fair treatment of taxpayers. Recommendations for Executive Action As LB&I finishes implementing its new approach and decides which selection methods will be used with the campaigns, we recommend that the Commissioner of Internal Revenue ensure that the documentation gaps in policies and procedures are addressed for the following six internal control principles for the selection methods that will be used: define objectives to identify risk and define risk tolerances; identify, analyze, and respond to risks to achieving the objectives; design control activities to achieve objectives and respond to risks; use quality information to achieve objectives; communicate internally the necessary quality information about the objectives; and evaluate issues and remediate identified internal control deficiencies on a timely basis. 
Also in accordance with federal internal control standards, we recommend that the Commissioner direct LB&I to adopt a standard process for monitoring audit selection decisions in the field, such as by modifying the existing quality control system. To further ensure that the new campaigns under LB&I's new approach for addressing tax compliance are implemented successfully, we recommend that the Commissioner take these actions: create a timetable with specific dates for implementing its new compliance approach; establish metrics to help determine whether the campaign effort overall meets LB&I's goals; finalize and document plans to evaluate the human resources expended on campaign activities; document lessons learned from stakeholder input and past performance evaluations; monitor overall performance across future campaigns, not just individual compliance projects, and in doing so ensure that the data used for monitoring account for costs beyond the auditor's time and can clearly be linked with specific selection methods, including the Discriminant Analysis System (DAS) method, to the extent that the selection methods continue to operate; develop and document criteria to use in choosing selection methods for campaigns using audits; and set a timetable to analyze and mitigate risks and document specific metrics for assessing mitigation of identified risks.

Agency Comments and Our Evaluation

We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. On March 9, 2017, the Deputy Commissioner for Services and Enforcement provided written comments stating that IRS agreed with all of GAO's recommendations and is identifying the specific actions to be taken to effectively implement them.
In the letter, which is reprinted in appendix V, the Deputy Commissioner said that the GAO report properly highlights the importance of addressing documentation, both in the traditional selection processes and in the new campaign process; developing a standard process to monitor audit selection systems; and fully addressing project planning principles in implementing the campaign approach. The Deputy Commissioner also said that LB&I has begun taking steps to improve its documentation and monitoring processes and that GAO's findings, along with the implementation of its recommendations, will improve this process. Lastly, the Deputy Commissioner said that IRS will provide a more detailed description of its actions, responsible officials, and implementation timelines in its response to the final report. At that time, we will review these details in determining IRS's progress in implementing our recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies of the report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology

This report (1) assesses the extent to which the Large Business and International (LB&I) division's documented procedures and policies on audit selection methods generally reflect relevant internal control principles; (2) assesses the extent to which LB&I has a standard process to monitor audit selection decisions; (3) describes statistical information on audit starts and closures for LB&I's selection methods, including LB&I's use of audit referrals, and assesses how the Internal Revenue Service (IRS) evaluates its audit results from its selection methods; and (4) assesses to what extent LB&I has planned and implemented its new approach to address compliance. To assess the extent to which LB&I's documented procedures and policies generally reflect relevant internal control principles, we reviewed LB&I's various selection methods and the related internal controls that are intended to help LB&I achieve its stated goal for audits to promote voluntary compliance. We also reviewed IRS's Strategic Plan FY2014-2017, Internal Revenue Manual (IRM) sections related to LB&I's mission statement and statement for audits, LB&I's fiscal year 2016 Business Performance Review, and other IRS documentation related to LB&I's audit selection process. At the start of our work, LB&I provided a list of its 14 audit selection methods, which are listed in appendix II. Based on LB&I officials' input and our review of relevant documentation, we decided to focus our analysis on whether eight of these selection methods meet the relevant internal control standards. Our decisions, to which LB&I officials agreed, include the following. Our analysis excluded three selection methods that were shut down during our work. LB&I has ceased operating the 1065 and 1120-S modeling program selection methods and merged the international risk assessment program (IRAP) with filtering.
LB&I officials said they made these decisions based on experience with the programs and on the expected nature of new selection methods that LB&I is developing. We also excluded the compliance and workload delivery (CWD) and foreign payment program (FPP) selection methods because neither method involves decisions about whether to select tax returns for audit. Rather, their workload consists of referrals from other parts of IRS, or returns that are mandatory to audit. We combined the Form 8886 disclosures and Form 8918 material advisor disclosures into one return selection method because they are handled by the same office—the Office of Tax Shelter Analysis (OTSA). For the remaining 8 selection methods, we compared relevant LB&I procedures with 10 internal control principles from Standards for Internal Control in the Federal Government (Standards). We assessed whether documentation on LB&I's selection methods generally reflected the 10 internal control principles by reviewing documentation and interviewing LB&I officials familiar with the return selection methods. The internal control principles we used for our evaluation are noted below.

Principle 1: Demonstrate commitment to integrity and ethical values.
Principle 3: Establish structure, assign responsibility, and delegate authority to achieve objectives.
Principle 4: Demonstrate commitment to competence through recruiting, training and development, and retention.
Principle 6: Define objectives to identify risk and define risk tolerances.
Principle 7: Identify, analyze, and respond to risks to achieving the objectives.
Principle 10: Design control activities to achieve objectives and respond to risks.
Principle 12: Implement control activities through policies and reviews.
Principle 13: Use quality information to achieve objectives.
Principle 14: Communicate internally the necessary quality information about objectives.
Principle 17: Evaluate issues and remediate deficiencies.
We selected these 10 principles based on our previous work on IRS audit selection and our review of Green Book internal controls. We consulted with GAO stakeholders with knowledge about the principles and evaluation methodology. We shared our identification of the relevant principles with LB&I officials, who agreed with the criteria. We also discussed with LB&I the type of documentation we were seeking to support the internal controls. We had three GAO analysts independently review LB&I's documentation and reach consensus on whether the documented policies and procedures generally reflect the principles. To assess the extent to which LB&I had a standard process to monitor audit selection decisions, we reviewed documentation on the standards for audit selection from the IRM, as well as documentation on the LB&I Quality Measurement System, which LB&I officials told us was the primary method to assess how well auditors follow IRS standards. We also interviewed relevant LB&I officials on practices in selecting audits in the field. Additionally, we held seven focus groups—three with field audit managers and four with field auditors—to collect examples of field staff's experiences following audit standards. We acquired complete lists of field auditors and managers and randomly selected focus group participants from those lists. The selected participants are a nonprobability sample, and their views cannot be generalized to their respective populations. The focus groups were conducted by telephone and were facilitated by a GAO methodologist. We compiled the comments made during the focus groups and identified common themes. For our assessment, we compared the documentation we reviewed and the information we collected from interviews and focus groups with the Green Book internal control on monitoring.
To describe statistical information on audit starts and closures from LB&I's audit selection methods, we acquired the complete dataset from A-CIS, an IRS system used to track LB&I audit activity, for fiscal years 2011 through 2015, the most recent complete data available during our analysis. We identified the codes corresponding to the 14 selection methods that LB&I officials told us they use to help select tax returns for audit, with the exception of IRAP, for which LB&I did not have any codes. We worked with IRS data managers and identified 35 other selection methods within LB&I's audit inventory, which we determined to be outside the scope of this review and refer to as "Other Methods" for comparative purposes in this report (see appendix II for examples of other selection methods). We made other scoping decisions to ultimately arrive at the seven selection methods on which we focused our data analysis. We removed the Discriminant Analysis System (DAS) because it is identified in the data using source codes, which are not comparable with the project and tracking codes used for the other selection methods. For consistency with our internal controls analysis, we combined the codes for Form 8886 disclosures and Form 8918 material advisor disclosures and presented those results as one return selection method, Tax Shelters. We did not include the CWD and FPP programs because we determined they do not require the professional judgment of LB&I staff in making audit selection decisions. Lastly, the results we present do not include the Compliance Initiative Projects (CIP) and Coordinated Industry Cases (CIC) as selection methods because the majority of the audit results were too small to report without revealing taxpayer information. We used the codes to measure the annual number of starts and closures for these seven remaining selection methods: 1065 Modeling, 1120-S Modeling, Tax Shelters, LB&I Filtering, Global High Wealth (GHW), Offshore IIC, and Other Miscellaneous IIC.
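The tallying described above—counting annual audit starts and closures by selection-method code—can be sketched as follows. Note that the record fields, method labels, and event values in this example are illustrative stand-ins, not actual A-CIS fields or IRS project and tracking codes.

```python
from collections import Counter

# Illustrative records loosely resembling rows of audit activity.
# Field names and method codes are hypothetical, not actual IRS codes.
audits = [
    {"fy": 2014, "method": "GHW", "event": "start"},
    {"fy": 2014, "method": "GHW", "event": "closure"},
    {"fy": 2015, "method": "TAX_SHELTERS", "event": "start"},
    {"fy": 2015, "method": "GHW", "event": "start"},
]

def tally_by_method(records):
    """Count audit starts and closures per fiscal year and selection method."""
    counts = Counter()
    for r in records:
        counts[(r["fy"], r["method"], r["event"])] += 1
    return counts

counts = tally_by_method(audits)
print(counts[(2014, "GHW", "start")])  # 1
print(counts[(2015, "GHW", "start")])  # 1
```

Summing such counts per method over fiscal years 2011 through 2015 would yield the kind of start and closure statistics the analysis describes; combining two codes into one method (as with Tax Shelters) amounts to mapping both codes to a single label before tallying.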
In addition, to describe LB&I's use of audit referrals, IRS identified the coding for the two groups of audit referrals that LB&I receives—internal and external—and we used these codes to count the number of audits that listed these referrals by source codes. We assessed the reliability of the data by reviewing existing information, including the A-CIS data dictionary and related documentation, and by conducting interviews with LB&I officials knowledgeable about the data. In addition, we compared our results to selected system control totals provided by IRS and had our code and results confirmed by relevant IRS data experts. We also ran summary statistics for each selection method. Based on these steps, we determined that the data we generated were sufficiently reliable for calculating statistics on audit starts and closures. To assess how IRS evaluates its audit results from its selection methods, we reviewed a monthly tracking report on LB&I's individual selection methods that LB&I officials told us they used to monitor selection methods. We also interviewed relevant LB&I officials. We compared LB&I's report with the Green Book internal control Principle 13: Use Quality Information. To assess the extent to which LB&I planned and implemented its new approach for addressing compliance, we reviewed our prior work on projects similar to what LB&I had designed with campaigns. Based on our review of those reports, we determined that the project planning principles listed in table 3 below were appropriate for our analysis because of their applicability to planning new approaches or projects. IRS agreed with these principles in April 2016. We reviewed LB&I's plans to stand up campaigns, interviewed relevant officials, and then compared the information to the principles outlined in table 3. Given the status of LB&I's plans, we did not assess LB&I's decision to create the approach.
To determine whether LB&I met the principles, two analysts independently compared the evidence with the criteria and recorded their assessments. A third analyst also reviewed the evidence and acted as a tie-breaker, if needed. The statements that analysts could make based on the evidence are "meets" or "did not meet." To keep our conclusions as clear as possible, our definitions of the two assessments are as follows. Meets: The documented evidence supports all aspects of the criterion. Did not meet: The evidence did not support all aspects of the criterion, including cases in which some aspect of the criterion is met, but we did not have enough evidence to conclude that all aspects were met. We conducted this performance audit from January 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Description of Large Business and International (LB&I) Selection Methods and Other Methods

Selection Methods

LB&I officials initially identified 14 selection methods that LB&I uses to help select tax returns for audit. The list of those selection methods, including definitions, follows: Discriminant Analysis System (DAS): DAS is a mathematical system that LB&I uses to identify corporate (Form 1120) returns that may merit selection for audit. DAS prioritizes returns based on their probability of being profitable to audit. DAS computes a score that allows LB&I to rank the returns. 1065 Modeling: The partnership (Form 1065) computer model combines results from mathematical formulas and business rules to identify partnership returns for audit consideration.
1120-S Modeling: As with the 1065 model, the S corporation (Form 1120-S) computer model combines results from mathematical formulas and business rules to identify S corporation returns for audit consideration. Form 8886 Disclosures: The Office of Tax Shelter Analysis (OTSA) in LB&I reviews this disclosure form for potentially abusive tax avoidance; taxpayers are to disclose particular transactions that may indicate such abuse. Form 8918 Material Advisor Disclosures: OTSA in LB&I reviews this disclosure form, which is to be filed by those who promote tax shelters, for inappropriate shelter schemes to lower tax liability. LB&I Filtering: Computer programs that LB&I staff develop to identify particular issues that have a tendency toward noncompliance on tax returns for audit consideration. International Risk Assessment Program (IRAP): IRAP is a program to identify particular international tax planning strategies that may pose a compliance risk. Coordinated Industry Case classification process: LB&I staff assign points to certain characteristics on a tax return to identify whether certain large corporate taxpayers should be under continuous audit. Global High Wealth (GHW): LB&I teams use computerized models to identify high wealth individuals with audit potential because of various ownership and investment interests. Compliance and Workload Delivery: A process that identifies and classifies returns for potential audit based on issues that LB&I lists in its annual letter on audit priorities due to a compliance risk. Offshore International Individual Compliance: LB&I identifies individual tax returns with potential international compliance issues based on information received from third parties, such as banks complying with Internal Revenue Service (IRS) issued summonses for customer records. Other Miscellaneous: Tax returns that are referred by the Whistleblower program for possible audit on the basis of information received from a whistleblower.
Compliance Initiative Projects: LB&I staff identify returns filed by specific types of taxpayers, such as those engaging in certain activities, to collect data about potential areas of noncompliance. Foreign Payment Program: A program responsible for coordinating all foreign payment functions, such as income tax withholding and information reporting by third parties on payments made to taxpayers. Since the start of our engagement, LB&I has ceased operating the 1065 and 1120-S modeling programs and merged IRAP with filtering. According to LB&I officials, they made these decisions based on their experience with the programs and on the expected nature of new selection methods that LB&I is developing. We have excluded these three methods from our analyses of documented policies and procedures and calculations of audit starts and closures. We also excluded the compliance and workload delivery and foreign payment programs because we confirmed with IRS officials that neither of these programs includes returns that require the discretion of LB&I officials in making an audit selection decision. Rather, their workload consists of referrals from other IRS divisions, or returns that are mandatory to audit. We also combined the Form 8886 disclosures and Form 8918 material advisor disclosures into one return selection method because they are handled by the same office—OTSA.

Other Methods

Appendix III: Statistical Summary of Large Business and International (LB&I) Audit Selection Method Performance Indicators

[Summary table of performance-indicator totals for the selection methods in GAO's analysis, other methods, and all methods omitted.]

Appendix IV: List of Announced Campaign Compliance Projects

In January 2017, the Large Business and International (LB&I) division announced the following 13 compliance projects or "campaigns" that it will conduct. 1.
Internal Revenue Code (IRC) 48C Energy Credit: LB&I said that this campaign will help ensure that the credit is claimed only by those taxpayers whose advanced energy projects were approved by the Department of Energy (DOE) and who have been allocated a credit by IRS. These credits must be pre-approved through application to DOE. LB&I said that the treatment stream for this campaign will be soft letters and issue-focused audits. 2. Offshore Voluntary Disclosure Program (OVDP) Declines-Withdrawals: OVDP allows U.S. taxpayers to voluntarily resolve past noncompliance related to unreported offshore income and failure to file foreign information returns, according to LB&I. In the campaign, LB&I said it will address OVDP applicants who applied for pre-clearance into the program but were either denied access to OVDP or withdrew from the program. LB&I said that IRS will address continued noncompliance through a variety of treatment streams, including audits. 3. Domestic Production Activities Deduction, Multi-Channel Video Program Distributors (MVPD) and TV Broadcasters: MVPDs and TV broadcasters have claimed that "groups" of channels or programs are a qualified film eligible for the IRC Section 199 deduction for income attributable to domestic production activities, according to LB&I. They are asserting that they are the producers of a qualified film when distributing channels and subscription packages that include third-party produced content. Additionally, LB&I said that MVPD taxpayers maintain that they provide online access to computer software for the customers' direct use. LB&I said that it has developed a strategy to identify taxpayers affected by these issues, that it will develop training to aid auditors, and that the campaign will include potential published guidance and issue-based audits. 4.
Micro-Captive Insurance: LB&I said that this campaign addresses transactions in which a taxpayer attempts to reduce aggregate taxable income by using a contract with a related company that the parties treat as an insurance contract with a captive insurance company (i.e., an insurance company organized primarily to provide insurance protection to its owners or persons related to its owners). LB&I said that it has developed a training strategy for this campaign and the treatment stream will be issue-based audits. 5. Related Party Transactions: LB&I said that this campaign focuses on transactions between commonly-controlled entities that provide a means to transfer funds from a corporation to related pass through entities or shareholders. LB&I said it is seeking to determine the level of compliance in related party transactions and that the treatment stream for this campaign is expected to be issue-based audits. 6. Deferred Variable Annuity Reserves and Life Insurance Reserves Industry Issue Resolution (IIR): The IRS and Chief Counsel will develop guidance to address uncertainties about reserves for deferred variable annuities and for life insurance and related tax issues, according to LB&I. The campaign’s objective is to collaborate with industry stakeholders. Chief Counsel and the Department of the Treasury are to develop published guidance that provides certainty to taxpayers regarding these related issues. 7. Basket Transactions: This campaign addresses structured financial transactions in which a taxpayer attempts to defer and treat ordinary income and short-term capital gain as long-term capital gain, according to LB&I. LB&I said that it has developed a training strategy for this campaign. The treatment streams will be issue-based audits, soft letters, and outreach. 8. Land Developers - Completed Contract Method (CCM): Large land developers that construct in residential communities may be improperly using CCM accounting, according to LB&I. 
In some cases, developers are improperly deferring all gain until the entire development is completed. LB&I will provide training for auditors doing follow-up audits when warranted. The treatment stream also will include development of a practice unit and issuance of soft letters. 9. The Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA) Linkage Plan Strategy: As partnerships have become larger and more complex, LB&I has revised processes to assess tax on investors, according to LB&I. With recent legal changes, LB&I plans to focus on developing new procedures and technology to work collaboratively with auditors conducting TEFRA partnership audits to identify, link, and assess tax to the investors that pose the most significant compliance risk. 10. S Corporation Losses Claimed in Excess of Basis: S corporation shareholders report income, losses, and other items passed through from their corporation, according to LB&I. While the law limits losses and deductions to their cost basis in the corporation, LB&I said that it has found that shareholders claim losses and deductions in excess of their basis. LB&I also said that it has developed technical content for this campaign that will aid auditors. According to LB&I, the treatment streams for this campaign will be issue-based audits, soft letters encouraging voluntary self-correction, stakeholder outreach, and a new form for shareholders to assist in properly computing their basis. 11. Repatriation: LB&I said that it is aware of different structures being used by taxpayers for purposes of tax-free repatriation of funds into the United States. LB&I has determined that many of the taxpayers do not properly report repatriations as taxable events on their filed returns. LB&I said that it plans to improve issue selection filters for conducting audits on identified, high-risk repatriation issues, increasing taxpayer compliance. 12. Form 1120-F Non-Filer: Foreign companies doing business in the U.S.
are often required to file Form 1120-F, according to LB&I; however, data suggest that many of these companies are not meeting their filing obligations. In this campaign, LB&I said that it will use various external data sources to identify these foreign companies and encourage them to file their required returns. The treatment stream will involve soft letter outreach, according to LB&I. If the companies do not take appropriate action, LB&I will conduct audits to determine the correct tax liability. 13. Inbound Distributor: According to LB&I, U.S. distributors of goods from foreign-related parties have incurred losses or understated profits in their U.S. tax return reporting; these amounts are not commensurate with the functions performed and risks assumed. In many cases, the U.S. taxpayer would be entitled to higher returns in arm's-length transactions. LB&I said that it has developed a training strategy that will aid auditors as they examine this issue in issue-based audits.

Appendix V: Comments from the Internal Revenue Service

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact: James R. McTigue, Jr., (202) 512-9110 or [email protected].

Staff Acknowledgments: In addition to the contact named above, Tom Short (Assistant Director); Ann Czapiewski; Steven Flint; Robert Gebhart; Eric Gorman; George Guttman; John Hussey; Shirley Jones; Edward Nannenhorn; Ellen Rominger; Cynthia Saunders; Andrew J. Stephens; and Mackenzie Verniero made significant contributions to this review.
LB&I audits large partnerships and corporations with $10 million or more in assets and high-wealth individuals. These entities pose compliance challenges. For example, IRS reported that the gross underreported income tax of large corporations alone averaged an estimated $28 billion annually between 2008 and 2010, the most recent data available. It is important for LB&I to have adequate controls for its audit procedures and to properly plan and implement its new approach to address noncompliance. GAO was asked to evaluate how IRS selects returns and is implementing its new compliance approach. Among other objectives, this report (1) assesses the extent that LB&I's documented procedures and policies for its audit selection methods generally reflected relevant internal control principles, (2) assesses the extent that LB&I has a standard process to monitor audit selection decisions, and (3) assesses the extent that LB&I has planned and implemented its new approach to address compliance. GAO reviewed LB&I procedures and policies for eight selection methods that involved the use of discretion and its plans for implementing a new compliance approach. Given the status of LB&I's plans for and implementation of its new approach, GAO did not assess LB&I's decision to create the approach. GAO held focus groups with LB&I staff responsible for selecting audits, and interviewed IRS officials. The Internal Revenue Service's (IRS) Large Business and International division (LB&I) uses a variety of methods, such as computer models and staff reviews of returns, to identify tax returns for audit consideration. From the returns identified, managers and auditors in LB&I field offices select the returns to be audited. For the eight methods LB&I uses for identifying and selecting tax returns for audit (selection methods) that GAO analyzed, LB&I documentation on its procedures and policies generally reflected 4 of the 10 internal control principles GAO reviewed.
For example: Related to the internal control principle of demonstrating commitment to integrity and ethical values, LB&I auditors who identify tax returns for audit consideration are prohibited from auditing those returns themselves or assigning them to specific individuals for audit. In addition, all LB&I staff completed required training on ethics and impartiality in 2015, the most recent year for which data were available. Related to the internal control principle of demonstrating a commitment to competence, LB&I's procedures and manuals generally documented its training to help assure the competence of staff involved in audit selection. This training included courses on basic skills as well as instruction on more specific topics. However, for the other 6 internal control principles GAO reviewed, there are gaps in documentation that limit LB&I's assurance that its selection methods are being implemented as designed and are supporting its objectives. For example: Related to the internal control principle of identifying, analyzing, and responding to risk, LB&I documentation did not specify procedures or a process for how to respond to changing circumstances, such as a change in the law, in selecting returns for audit. Related to the internal control principle of reporting on issues and remediating related deficiencies, LB&I documentation indicated that problems identified with selection methods were discussed in meetings, but not that corrective action was taken to address them. GAO also found that LB&I has monitoring directives, but it does not have a standard process for monitoring field staff's audit selection decisions. Without such a process, LB&I lacks reasonable assurance that decisions are made consistently. LB&I is in the process of implementing a new approach for addressing taxpayer compliance, including how it identifies tax returns for audit.
LB&I plans to implement what officials call “campaigns,” which are projects focused on a specific compliance-related issue, such as partnerships underreporting certain income, rather than projects focused on the characteristics of whole tax returns. According to LB&I officials, campaigns could include conducting audits as well as other efforts, such as reaching out to taxpayers and tax professionals, issuing guidance, and participating in industry events. LB&I officials said certain audit selection methods that existed prior to the development of campaigns will operate while LB&I implements its campaign approach, and campaigns may subsume some of those methods. GAO found that LB&I made some progress in implementing its new compliance approach, such as by involving stakeholders in plans and implementing the process for submitting proposals for campaigns. However, LB&I has not fully met five project planning principles set forth in prior GAO work (see table below). Until it fully meets these principles, LB&I management lacks reasonable assurance that its new compliance approach will succeed in accomplishing its overall objectives of encouraging voluntary compliance and fair treatment of taxpayers.
Background An enterprise architecture is a blueprint that describes the current and desired state of an organization or functional area in both logical and technical terms, as well as a plan for transitioning between the two states. Enterprise architectures are a recognized tenet of organizational transformation and IT management in public and private organizations. Without an enterprise architecture, it is unlikely that an organization will be able to transform business processes and modernize supporting systems to minimize overlap and maximize interoperability. The concept of enterprise architectures originated in the mid-1980s; various frameworks for defining the content of these architectures have been published by government agencies and OMB. Moreover, legislation and federal guidance require agencies to develop and use architectures. For more than a decade, we have conducted work to improve agency architecture efforts. To this end, we developed an enterprise architecture management maturity framework that provides federal agencies with a common benchmarking tool for assessing the management of their enterprise architecture efforts and developing improvement plans. Enterprise Architecture Description and Importance An enterprise can be viewed as either a single organization or a functional area that transcends more than one organization (e.g., financial management, homeland security). An architecture can be viewed as the structure (or structural description) of any activity. Thus, enterprise architectures are, in essence, systematically derived and captured descriptions of an enterprise in useful models, diagrams, and narrative. More specifically, an architecture describes the enterprise in logical terms (such as interrelated business processes and business rules, information needs and flows, and work locations and users) as well as in technical terms (such as hardware, software, data, communications, and security attributes and performance standards).
It provides these perspectives both for the enterprise’s current or “as-is” environment and for its target or “to-be” environment, as well as a transition plan for moving from the “as-is” to the “to-be” environment. Enterprise architectures are a basic tenet of both organizational transformation and IT management, and their effective use is a recognized hallmark of successful public and private organizations. For over a decade, we have promoted the use of architectures, recognizing them as a crucial means to a challenging end: optimized agency operations and performance. The alternative, as our work has shown, is the perpetuation of the kinds of operational environments that burden most agencies today, where a lack of integration among business operations and the IT resources supporting them leads to systems that are duplicative, poorly integrated, and unnecessarily costly to maintain and interface. Employed in concert with other important IT management controls (such as portfolio-based capital planning and investment control practices), architectures can greatly increase the chances that the organizations’ operational and IT environments will be configured so as to optimize mission performance. Brief History of Architecture Frameworks and Management Guidance During the mid-1980s, John Zachman, widely recognized as a leader in the field of enterprise architecture, identified the need to use a logical construction blueprint (i.e., an architecture) for defining and controlling the integration of systems and their components. Accordingly, Zachman developed a structure or framework for defining and capturing an architecture, which provides for six perspectives or “windows” from which to view the enterprise. Zachman also proposed six abstractions or models associated with each of these perspectives.
Zachman’s framework provides a way to identify and describe an entity’s existing and planned component parts and the parts’ relationships before the entity begins the costly and time-consuming efforts associated with developing or transforming itself. Since Zachman introduced his framework, a number of frameworks have emerged within the federal government, beginning with the publication of the National Institute of Standards and Technology (NIST) framework in 1989. Since that time, other federal entities have issued frameworks, including the Department of Defense (DOD) and the Department of the Treasury. In September 1999, the federal Chief Information Officers (CIO) Council published the Federal Enterprise Architecture Framework (FEAF), which was intended to provide federal agencies with a common construct for their architectures, thereby facilitating the coordination of common business processes, technology insertion, information flows, and system investments among federal agencies. The FEAF described an approach, including models and definitions, for developing and documenting architecture descriptions for multi-organizational functional segments of the federal government. More recently, OMB established the Federal Enterprise Architecture Program Management Office (FEAPMO) to develop a federal enterprise architecture according to a collection of five reference models (see table 1). These models are intended to facilitate governmentwide improvement through cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration, interoperability, and integration within and across government agencies. 
OMB has identified multiple purposes for the Federal Enterprise Architecture, such as the following: informing agency enterprise architectures and facilitating their development by providing a common classification structure and vocabulary; providing a governmentwide framework that can increase agency awareness of IT capabilities that other agencies have or plan to acquire, so that they can explore opportunities for reuse; helping OMB decision makers identify opportunities for collaboration among agencies through the implementation of common, reusable, and interoperable solutions; and providing the Congress with information that it can use as it considers the authorization and appropriation of funding for federal programs. Although these post-Zachman frameworks differ in their nomenclatures and modeling approaches, each consistently provides for defining an enterprise’s operations in both logical and technical terms, provides for defining these perspectives for the enterprise’s current and target environments, and calls for a transition plan between the two. Several laws and regulations address enterprise architecture. For example, the Clinger-Cohen Act of 1996 directs the CIOs of major departments and agencies to develop, maintain, and facilitate the implementation of information technology architectures as a means of integrating agency goals and business processes with information technology. Also, OMB Circular A-130, which implements the Clinger-Cohen Act, requires that agencies document and submit their initial enterprise architectures to OMB and that agencies submit updates when significant changes to their enterprise architectures occur. The circular also directs OMB to use various reviews to evaluate the adequacy and efficiency of each agency’s compliance with the circular. 
A Decade of GAO Work Has Focused on Improving Agency Enterprise Architecture Efforts We began reviewing federal agencies’ use of enterprise architectures in 1994, initially focusing on those agencies that were pursuing major systems modernization programs that were high risk. These included the National Weather Service systems modernization, the Federal Aviation Administration (FAA) air traffic control modernization, and the Internal Revenue Service tax systems modernization. Generally, we reported that these agencies’ enterprise architectures were incomplete, and we made recommendations that they develop and implement complete enterprise architectures to guide their modernization efforts. Since then, we have reviewed enterprise architecture management at other federal agencies, including the Department of Education (Education), the Customs Service, the Immigration and Naturalization Service, the Centers for Medicare and Medicaid Services, FAA, and the Federal Bureau of Investigation (FBI). We have also reviewed the use of enterprise architectures for critical agency functional areas, such as the integration and sharing of terrorist watch lists across key federal departments and DOD financial management, logistics management, combat identification, and business systems modernization. These reviews continued to identify the absence of complete and enforced enterprise architectures, which in turn has led to agency business operations, systems, and data that are duplicative, incompatible, and not integrated; these conditions have either prevented agencies from sharing data or forced them to depend on expensive, custom-developed system interfaces to do so. Accordingly, we made recommendations to improve the respective architecture efforts. In some cases progress has been made, such as at DOD and FBI. 
As a practical matter, however, considerable time is needed to completely address the kind of substantive issues that we have raised and to make progress in establishing more mature architecture programs. In 2002 and 2003, we also published reports on the status of enterprise architectures governmentwide. The first report (February 2002) showed that about 52 percent of federal agencies self-reported having at least the management foundation that is needed to successfully develop, implement, and maintain an enterprise architecture, and that about 48 percent of agencies had not yet advanced to that basic stage of maturity. We attributed this state of architecture management to four management challenges: (1) overcoming limited executive understanding, (2) inadequate funding, (3) insufficient number of skilled staff, and (4) organizational parochialism. Additionally, we recognized OMB’s efforts to promote and oversee agencies’ enterprise architecture efforts. Nevertheless, we determined that OMB’s leadership and oversight could be improved by, for example, using a more structured means of measuring agencies’ progress and by addressing the above management challenges. The second report (November 2003) showed the percentage of agencies that had established at least a foundation for enterprise architecture management was virtually unchanged. We attributed this to long-standing enterprise architecture challenges that had yet to be addressed. In particular, more agencies reported lack of agency executive understanding of enterprise architecture and the scarcity of skilled architecture staff as significant challenges. OMB generally agreed with our findings and the need for additional agency assessments. 
Further, it stated that fully implementing our recommendations would require sustained management attention, and that it had begun by working with the CIO Council to establish the Chief Architect Forum and to increase the information OMB reports on enterprise architecture to Congress. Since then, OMB has developed and implemented an enterprise architecture assessment tool. According to OMB, the tool helps OMB better understand the current state of an agency’s architecture and assists agencies in integrating architectures into their decision-making processes. The latest version of the assessment tool (2.0) was released in December 2005 and includes three capability areas: (1) completion, (2) use, and (3) results. Table 2 describes each of these areas. The tool also includes criteria for scoring an agency’s architecture program on a scale of 0 to 5. In early 2006, the major departments and agencies were required by OMB to self-assess their architecture programs using the tool. OMB then used the self-assessment to develop its own assessment. These assessment results are to be used in determining the agency’s e-Government score within the President’s Management Agenda. GAO’s Enterprise Architecture Management Maturity Framework (EAMMF) In 2002, we developed version 1.0 of our Enterprise Architecture Management Maturity Framework (EAMMF) to provide federal agencies with a common benchmarking tool for planning and measuring their efforts to improve enterprise architecture management, as well as to provide OMB with a means for doing the same governmentwide. We issued an update of the framework (version 1.1) in 2003. This framework is an extension of A Practical Guide to Federal Enterprise Architecture, Version 1.0, published by the CIO Council.
Version 1.1 of the framework arranges 31 core elements (practices or conditions that are needed for effective enterprise architecture management) into a matrix of five hierarchical maturity stages and four critical success attributes that apply to each stage. Within a given stage, each critical success attribute includes between one and four core elements. Based on the implicit dependencies among the core elements, the EAMMF associates each element with one of five maturity stages (see fig. 1). The core elements can be further categorized by four groups: architecture governance, content, use, and measurement. EAMMF Stages Stage 1: Creating EA awareness. At stage 1, either an organization does not have plans to develop and use an architecture, or it has plans that do not demonstrate an awareness of the value of having and using an architecture. While stage 1 agencies may have initiated some enterprise architecture activity, these agencies’ efforts are ad hoc and unstructured, lack institutional leadership and direction, and do not provide the management foundation necessary for successful enterprise architecture development as defined in stage 2. Stage 2: Building the EA management foundation. An organization at stage 2 recognizes that the enterprise architecture is a corporate asset by vesting accountability for it in an executive body that represents the entire enterprise. At this stage, an organization assigns enterprise architecture management roles and responsibilities and establishes plans for developing enterprise architecture products and for measuring program progress and product quality; it also commits the resources necessary for developing an architecture—people, processes, and tools. Specifically, a stage 2 organization has designated a chief architect and established and staffed a program office responsible for enterprise architecture development and maintenance. 
Further, it has established a committee or group that has responsibility for enterprise architecture governance (i.e., directing, overseeing, and approving architecture development and maintenance). This committee or group membership has enterprisewide representation. At stage 2, the organization either has plans for developing or has started developing at least some enterprise architecture products, and it has developed an enterprisewide awareness of the value of enterprise architecture and its intended use in managing its IT investments. The organization has also selected a framework and a methodology that will be the basis for developing the enterprise architecture products and has selected a tool for automating these activities. Stage 3: Developing the EA. An organization at stage 3 focuses on developing architecture products according to the selected framework, methodology, tool, and established management plans. Roles and responsibilities assigned in the previous stage are in place, and resources are being applied to develop actual enterprise architecture products. At this stage, the scope of the architecture has been defined to encompass the entire enterprise, whether organization-based or function-based. Although the products may not be complete, they are intended to describe the organization in terms of business, performance, information/data, service/application, and technology (including security explicitly in each) as provided for in the framework, methodology, tool, and management plans. Further, the products are to describe the current (as-is) and future (to-be) states and the plan for transitioning from the current to the future state (the sequencing plan). As the products are developed and evolve, they are subject to configuration management. 
Further, through the established enterprise architecture management foundation, the organization is tracking and measuring its progress against plans, identifying and addressing variances, as appropriate, and then reporting on its progress. Stage 4: Completing the EA. An organization at stage 4 has completed its enterprise architecture products, meaning that the products have been approved by the enterprise architecture steering committee (established in stage 2) or an investment review board, and by the CIO. The completed products collectively describe the enterprise in terms of business, performance, information/data, service/application, and technology for both its current and future operating states, and the products include a plan for transitioning from the current to the future state. Further, an independent agent has assessed the quality (i.e., completeness and accuracy) of the enterprise architecture products. Additionally, evolution of the approved products is governed by a written enterprise architecture maintenance policy approved by the head of the organization. Stage 5: Leveraging the EA to manage change. An organization at stage 5 has secured senior leadership approval of the enterprise architecture products and a written institutional policy stating that IT investments must comply with the architecture, unless granted an explicit compliance waiver. Further, decision makers are using the architecture to identify and address ongoing and proposed IT investments that are conflicting, overlapping, not strategically linked, or redundant. As a result, stage 5 entities avoid unwarranted overlap across investments and ensure maximum systems interoperability, which in turn ensures the selection and funding of IT investments with manageable risks and returns. 
Also, at stage 5, the organization tracks and measures enterprise architecture benefits or return on investment, and adjustments are continuously made to both the enterprise architecture management process and the enterprise architecture products. EAMMF Attributes Attribute 1: Demonstrates commitment. Because the enterprise architecture is a corporate asset for systematically managing institutional change, the support and sponsorship of the head of the enterprise are essential to the success of the architecture effort. An approved enterprise policy statement provides such support and sponsorship, promoting institutional buy-in and encouraging resource commitment from participating components. Equally important in demonstrating commitment is vesting ownership of the architecture with an executive body that collectively owns the enterprise. Attribute 2: Provides capability to meet commitment. The success of the enterprise architecture effort depends largely on the organization’s capacity to develop, maintain, and implement the enterprise architecture. Consistent with any large IT project, these capabilities include providing adequate resources (i.e., people, processes, and technology), defining clear roles and responsibilities, and defining and implementing organizational structures and process management controls that promote accountability and effective project execution. Attribute 3: Demonstrates satisfaction of commitment. Satisfaction of the organization’s commitment to develop, maintain, and implement an enterprise architecture is demonstrated by the production of artifacts (e.g., the plans and products). Such artifacts demonstrate follow through—that is, actual enterprise architecture production. Satisfaction of commitment is further demonstrated by senior leadership approval of enterprise architecture documents and artifacts; such approval communicates institutional endorsement and ownership of the architecture and the change that it is intended to drive. 
Attribute 4: Verifies satisfaction of commitment. This attribute focuses on measuring and disclosing the extent to which efforts to develop, maintain, and implement the enterprise architecture have fulfilled stated goals or commitments of the enterprise architecture. Measuring such performance allows for tracking progress that has been made toward stated goals, allows appropriate actions to be taken when performance deviates significantly from goals, and creates incentives to influence both institutional and individual behaviors. EAMMF Groups The framework’s 31 core elements can also be placed in one of four groups of architecture-related activities, processes, products, events, and structures. The groups are architecture governance, content, use, and measurement. These groups are generally consistent with the capability area descriptions in the previously discussed OMB enterprise architecture assessment tool. For example, OMB’s completion capability area addresses ensuring that architecture products describe the agency in terms of processes, services, data, technology, and performance and that the agency has developed a transition strategy. Similarly, our content group includes developing and completing these same enterprise architecture products. In addition, OMB’s results capability area addresses performance measurement as does our measurement group, and OMB’s use capability area addresses many of the same elements in our governance and use groups. Table 3 lists the core elements according to EAMMF group. Overall State of Enterprise Architecture Management Is a Work-in-Progress, Although a Few Agencies Have Largely Satisfied Our Framework Most of the 27 major departments and agencies have not fully satisfied all the core elements associated with stage 2 of our maturity framework. At the same time, however, most have satisfied a number of core elements at stages 3, 4, and 5.
Specifically, although only seven have fully satisfied all the stage 2 elements, the 27 have on average fully satisfied 80, 78, 61, and 52 percent of the stage 2, 3, 4, and 5 elements, respectively. Across the framework’s groups, 77 percent of the core elements related to architecture governance have been fully satisfied, while 68, 52, and 47 percent of those related to architecture content, use, and measurement, respectively, have been fully satisfied. Most of the 27 have also at least partially satisfied a number of additional core elements across all the stages. For example, all but 7 have at least partially satisfied all the elements required to achieve stage 3 or higher. Collectively, this means that efforts are underway to mature the management of most agency enterprise architecture programs, but these efforts are uneven, still a work-in-progress, and face the numerous challenges that departments and agencies identified. It also means that some architecture programs provide examples from which less mature programs could learn and improve. Without mature enterprise architecture programs, some departments and agencies will not realize the many benefits that they attributed to architectures, and they are at risk of investing in IT assets that are duplicative, not well-integrated, and do not optimally support mission operations.

The Degree to Which Major Departments and Agencies Have Fully Satisfied Our Framework’s Core Elements Is Uneven, and Their Collective Efforts Can Be Viewed as a Work-in-Progress

To qualify for a given stage of maturity under our architecture management framework, a department or agency had to fully satisfy all of the core elements at that stage. Using this criterion, three departments and agencies are at stage 2, meaning that they demonstrated to us through verifiable documentation that they have established the foundational commitments and capabilities needed to manage the development of an architecture.
In addition, four are at stage 3, meaning that they similarly demonstrated that their architecture development efforts reflect employment of the basic control measures in our framework. Table 4 summarizes the maturity stage of each architecture program that we assessed. Appendix IV provides the detailed results of our assessment of each department and agency architecture program against our maturity framework.

While using this criterion provides an important perspective on the state of department and agency architecture programs, it can mask the fact that the programs have met a number of core elements across higher stages of maturity. When the percentage of core elements that have been fully satisfied at each stage is considered, the state of the architecture efforts generally shows both a larger number of more robust architecture programs and more variability across the departments and agencies. Specifically, 16 departments and agencies have fully satisfied more than 70 percent of the core elements. Examples include Commerce, which has satisfied 87 percent of the core elements, including 75 percent of the stage 5 elements, even though it is at stage 1 because its enterprise architecture approval board does not have enterprisewide representation (a stage 2 core element). Similarly, SSA, which is also at stage 1 because the agency’s enterprise architecture methodology does not describe the steps for developing, maintaining, and validating the agency’s enterprise architecture (a stage 2 core element), has at the same time satisfied 87 percent of all the elements, including 63 percent of the stage 5 elements. In contrast, the Army, which is also at stage 1, has satisfied only 3 percent of all framework elements. Overall, 10 agency architecture programs fully satisfied more than 75 percent of the core elements, 14 between 50 and 75 percent, and 4 fewer than 50 percent. These four included the three military departments.
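The staging criterion just described (a program is at the highest stage for which every core element, at that stage and at all lower stages, is fully satisfied, with stage 1 as the default) and the companion percentage view can be expressed as a short decision rule. The sketch below is purely illustrative and is not part of the framework itself: the function names are ours, and any per-stage element counts used with it are hypothetical rather than the framework's actual 31 core elements.

```python
# Illustrative sketch only; element counts per stage are hypothetical,
# not the EAMMF's actual 31 core elements.

FULL, PARTIAL, NOT_SATISFIED = "full", "partial", "not satisfied"

def maturity_stage(scores, allow_partial=False):
    """Return a program's maturity stage.

    scores maps stage number (2-5) to the list of statuses of that
    stage's core elements. A program qualifies for a stage only if it
    satisfies every element at that stage and at all lower stages;
    otherwise it remains at stage 1, the default. allow_partial=True
    applies the relaxed criterion, under which a partially satisfied
    element also counts toward a stage.
    """
    counts_as_satisfied = {FULL, PARTIAL} if allow_partial else {FULL}
    stage = 1
    for s in sorted(scores):
        if all(status in counts_as_satisfied for status in scores[s]):
            stage = s
        else:
            break  # hierarchical: a gap blocks all higher stages
    return stage

def percent_full(scores):
    """Percentage of all core elements fully satisfied, across stages."""
    statuses = [st for elems in scores.values() for st in elems]
    return round(100 * statuses.count(FULL) / len(statuses))
```

Under this sketch, a program whose stage 2 and 3 elements are all fully satisfied but one of whose stage 4 elements is only partially satisfied would be at stage 3; the allow_partial flag implements the relaxed criterion, discussed later in this report, under which that same program would be at stage 4.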
Table 5 summarizes for each department and agency the percentage of core elements fully satisfied in total and by maturity stage. Notwithstanding the additional perspective that the percentage of core elements fully satisfied across all stages provides, it is important to note that the staged core elements in our framework represent a hierarchical or systematic progression to establishing a well-managed architecture program, meaning that core elements associated with lower framework stages generally support the effective execution of higher maturity stage core elements. For instance, if a program has developed its full suite of “as-is” and “to-be” architecture products, including a sequencing plan (stage 4 core elements), but the products are not under configuration management (stage 3 core element), then the integrity and consistency of the products will not be assured. Our analysis showed that this was the case for a number of architecture programs. For example, State has developed certain “as-is” and “to-be” products for the Joint Enterprise Architecture, which is being developed in collaboration with USAID, but an enterprise architecture configuration management plan has not yet been finalized. Further, not satisfying even a single core element can have a significant impact on the effectiveness of an architecture program. For example, not having adequate human capital with the requisite knowledge and skills (stage 2 core element), not using a defined framework or methodology (stage 2 core element), or not using an independent verification and validation agent (stage 4 core element) could significantly limit the quality and utility of an architecture. DOD’s experience between 2001 and 2005 in developing its BEA is a case in point.
During this time, we identified the need for the department to have an enterprise architecture for its business operations, and we made a series of recommendations grounded in, among other things, our architecture management framework to ensure that it was successful in doing so. In 2005, we reported that the department had not implemented most of our recommendations. We further reported that despite developing multiple versions of a wide range of architecture products, and having invested hundreds of millions of dollars and 4 years in doing so, the department did not have a well-defined architecture and that what it had developed had limited utility. Among other things, we attributed the poor state of its architecture products to ineffective program governance, communications, program planning, human capital, and configuration management, most of which are stage 2 and 3 foundational core elements. To the department’s credit, we recently reported that it has since taken a number of actions to address these fundamental weaknesses and our related recommendations and that it is now producing architecture products that provide a basis upon which to build. The significance of not satisfying a single core element is also readily apparent for elements associated with the framework’s content group. In particular, the framework emphasizes the importance of planning for, developing, and completing an architecture that includes the “as-is” and the “to-be” environments as well as a plan for transitioning between the two. It also recognizes that the “as-is” and “to-be” should address the business, performance, information/data, application/service, technology, and security aspects of the enterprise. To the extent these aspects are not addressed in this way, the quality of the architecture and thus its utility will suffer. In this regard, we found examples of departments and agencies that were addressing some but not all of these aspects. 
For example, HUD has yet to adequately incorporate security into its architecture. This is significant because security is relevant to all the other aspects of its architecture, such as information/data and applications/services. As another example, NASA’s architecture does not include a plan for transitioning from the “as-is” to the “to-be” environments. According to the administration’s Chief Enterprise Architect, a transition plan has not yet been developed because of insufficient time and staff. Looking across all the departments and agencies at core elements that are fully satisfied, not by stage of maturity, but by related groupings of core elements, provides an additional perspective on the state of the federal government’s architecture efforts. As noted earlier, these groupings of core elements are architecture governance, content, use, and measurement. Overall, departments and agencies on average have fully satisfied 77 percent of the governance-related elements. In particular, 93 and 96 percent of the agencies have established an architecture program office and appointed a chief architect, respectively. In addition, 93 percent have plans that call for their respective architectures to describe the “as-is” and the “to-be” environments, and for having a plan for transitioning between the two (see fig. 2). In contrast, however, the core element associated with having a committee or group with representation from across the enterprise directing, overseeing, and approving the architecture was fully satisfied by only 57 percent of the agencies. This core element is important because the architecture is a corporate asset that needs to be enterprisewide in scope and accepted by senior leadership if it is to be leveraged for organizational change. In contrast to governance, the extent of full satisfaction of those core elements that are associated with what an architecture should contain varies widely (see fig. 3). 
For example, the three content elements that address prospectively what the architecture will contain, either in relation to plans or some provision for including needed content, were fully satisfied about 90 percent of the time. However, the core elements addressing whether the products now contain such content were fully satisfied much less frequently (between 54 and 68 percent of the time, depending on the core element), and the core elements associated with ensuring the quality of included content, such as employing configuration management and undergoing independent verification and validation, were also fully satisfied much less frequently (54 and 21 percent of the time, respectively). The state of these core elements raises important questions about the quality and utility of the department and agency architectures. The degree of full satisfaction of those core elements associated with the remaining two groups—use and measurement—is even lower (see figs. 4 and 5, respectively). For example, the architecture use-related core elements were fully satisfied between 39 and 64 percent of the time, while the measurement-related elements were satisfied between 14 and 71 percent of the time. Of particular note is that only 39 percent of the departments and agencies could demonstrate that IT investments comply with their enterprise architectures, only 43 percent could demonstrate that compliance with the enterprise architecture is measured and reported, and only 14 percent were measuring and reporting on their respective architecture program’s return on investment. As our work and related best practices show, the value in having an architecture is using it to effect change and produce results. Such results, as reported by the departments and agencies, include improved information sharing, increased consolidation, enhanced productivity, and lower costs, all of which contribute to improved agency performance.
To realize these benefits, however, IT investments need to comply with the architecture, and measurement of architecture activities, including the accrual of expected benefits, needs to occur.

Most Agencies Have at Least Partially Satisfied Most Framework Elements

In those instances where departments and agencies have not fully satisfied certain core elements in our framework, most have at least partially satisfied these elements. To illustrate, 4 agencies would improve to at least stage 4 if the criterion for being at a given stage were relaxed to only partially satisfying a core element. Moreover, 11 of the remaining agencies would advance by two stages under such a less demanding criterion, and only 6 would not improve their stage of maturity under these circumstances. A case in point is Commerce, which could move from stage 1 to stage 5 under these circumstances because it has fully satisfied all but four core elements, and these remaining four (one each at stages 2 and 4 and two at stage 5) are partially satisfied. Another case in point is SSA, which has fully satisfied all but four core elements (one at stage 2 and three at stage 5) and has partially satisfied three of these remaining four. If the criterion used allowed advancement to the next stage by only partially satisfying core elements, the administration would be at stage 4. (See fig. 6 for a comparison of department and agency program maturity stages under the two criteria.) As mentioned earlier, departments and agencies can require considerable time to completely address issues related to their respective enterprise architecture programs. It is thus important to note that even though certain core elements are partially satisfied, fully satisfying some of them may not be accomplished quickly and easily. Also important is fully, rather than partially, satisfying certain elements, such as those that fall within the architecture content group.
In this regard, 18, 18, and 21 percent of the departments and agencies, respectively, partially satisfied the following stage 4 content-related core elements: “EA products describe ‘as-is’ environment, ‘to-be’ environment and sequencing plan”; “Both ‘as-is’ and ‘to-be’ environments are described in terms of business, performance, information/data, application/service, and technology”; and “These descriptions fully address security.” Not fully satisfying these elements can have important implications for the quality of an architecture, and thus its usability and results.

Seven Departments or Agencies Need to Satisfy Five or Fewer Core Elements to Be at Stage 5

Seven departments or agencies would meet our criterion for stage 5 if each were to fully satisfy one to five additional core elements (see table 6). For example, Interior could achieve stage 5 by satisfying one additional element: “EA products and management processes undergo independent verification and validation.” In this regard, Interior officials have drafted a statement of work intended to ensure that independent verification and validation of enterprise architecture products and management processes is performed. The other six departments and agencies are HUD and OPM, which could achieve stage 5 by satisfying two additional elements; Commerce, Labor, and SSA, which could achieve the same by satisfying four additional elements; and Education, which could be at stage 5 by satisfying five additional elements. Of these seven, five have not fully satisfied the independent verification and validation core element. Notwithstanding the fact that five or fewer core elements need to be satisfied by these agencies to be at stage 5, it is important to note that in some cases the core elements not being satisfied are not only very important but also neither quickly nor easily satisfied. For example, one of the two elements that HUD needs to satisfy is having its architecture products address security.
This is extremely important, as security is an integral aspect of the architecture’s performance, business, information/data, application/service, and technical models and needs to be reflected thoroughly and consistently across each of them.

Departments and Agencies Report Numerous Challenges Facing Them in Developing and Using Enterprise Architectures

The challenges facing departments and agencies in developing and using enterprise architectures are formidable. The challenge that most departments and agencies cited as being experienced to the greatest extent is the one that having and using an architecture is intended to overcome—organizational parochialism and cultural resistance to adopting an enterprisewide mode of operation in which organizational parts are sub-optimized in order to optimize the performance and results of the enterprise as a whole. Specifically, 93 percent of the departments and agencies reported that they encountered this challenge to a significant (very great or great) or moderate extent. Other challenges reported to this same extent were ensuring that the architecture program had adequate funding (89 percent), obtaining staff skilled in the architecture discipline (86 percent), and having the department or agency senior leaders understand the importance and role of the enterprise architecture (82 percent). As we have previously reported, sustained top management leadership is the key to overcoming each of these challenges. In this regard, our enterprise architecture management maturity framework provides for such leadership and for addressing these and other challenges through a number of core elements.
These elements contain mechanisms aimed at, for example, establishing responsibility and accountability for the architecture with senior leaders and ensuring that the necessary institutional commitments are made to the architecture program, such as through issuance of architecture policy and provision of adequate resources (both funding and people). See table 7 for a listing of the reported challenges and the extent to which they are being experienced.

Many Departments and Agencies Reported That They Have Already Realized Significant Architecture Benefits, While Most Expect to Do So in the Future

A large percentage of the departments and agencies reported that they have already accrued numerous benefits from their respective architecture programs (see table 8). For example, 70 percent said that they have already improved the alignment between their business operations and the IT that supports these operations to a significant extent. Such alignment is extremely important. According to our IT investment management maturity framework, alignment between business needs and IT investments is a critical process in building the foundation for an effective approach to IT investment management. In addition, 64 percent responded that they have also improved information/knowledge sharing to a significant or moderate extent. Such sharing is also very important. In 2005, for example, we added homeland security information sharing to our list of high-risk areas because, despite the importance of information to fighting terrorism and maintaining the security of our nation, many aspects of homeland security information sharing remain ineffective and fragmented. Other examples of mission-effectiveness-related benefits reported as already being achieved to a significant or moderate extent by roughly one-half of the departments and agencies included improved agency management and change management and improved system and application interoperability.
Beyond these benefits, departments and agencies also reported already accruing, to a significant or moderate extent, a number of efficiency and productivity benefits. For example, 56 percent reported that they have increased the use of enterprise software licenses, which can permit cost savings through economies-of-scale purchases; 56 percent reported that they have been able to consolidate their IT infrastructure environments, which can reduce the costs of operating and maintaining duplicative capabilities; 41 percent reported that they have been able to reduce the number of applications, which is a key to reducing expensive maintenance costs; and 37 percent reported productivity improvements, which can free resources to focus on other high-priority matters. Notwithstanding the number and extent of benefits that department and agency responses show have already been realized, these same responses also show even more benefits that have yet to be realized (see table 8). For example, 30 percent reported that they have thus far achieved, to little or no extent, better business and IT alignment. They similarly reported that many other effectiveness and efficiency benefits remain largely untapped, with between 36 and 70 percent saying these benefits have been achieved to little or no extent, depending on the benefit. Moreover, for all the cited benefits, a far greater percentage of the departments and agencies (74 to 93 percent) reported that they expect to realize each of the benefits to a significant or moderate extent sometime in the future. What this suggests is that the real value to the federal government of developing and using enterprise architecture remains largely unrealized potential. Our architecture maturity framework recognizes that a key to realizing this potential is effectively managing department and agency enterprise architecture programs.
However, knowing whether benefits and results are in fact being achieved requires having associated measures and metrics. In this regard, very few (21 percent) of the departments and agencies fully satisfied our stage 5 core element, “Return on EA investment is measured and reported.” Without satisfying this element, it is unlikely that the degree to which expected benefits are accrued will be known.

Conclusions

If managed effectively, enterprise architectures can be a useful change management and organizational transformation tool. The conditions for effectively managing enterprise architecture programs are contained in our architecture management maturity framework. While a few of the federal government’s 27 major departments and agencies have fully satisfied all the conditions needed to be at stage 2 or above in our framework, many have fully satisfied a large percentage of the core elements across most of the stages, particularly those elements related to architecture governance. Nevertheless, most departments and agencies are not yet where they need to be relative to architecture content, use, and measurement, and thus the federal government is not as well positioned as it should be to realize the significant benefits that a well-managed architecture program can provide. Moving beyond this status will require most departments and agencies to overcome some significant obstacles and challenges. The key to doing so continues to be sustained organizational leadership. Without such leadership, the benefits of enterprise architecture will not be fully realized.
Recommendations for Executive Action

To assist the 27 major departments and agencies in addressing enterprise architecture challenges, managing their architecture programs, and realizing architecture benefits, we recommend that the Administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration, and U.S. Agency for International Development; the Attorney General; the Commissioners of the Nuclear Regulatory Commission and Social Security Administration; the Directors of the National Science Foundation and the Office of Personnel Management; and the Secretaries of the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Interior, Labor, State, Transportation, Treasury, and Veterans Affairs ensure that their respective enterprise architecture programs develop and implement plans for fully satisfying each of the conditions in our enterprise architecture management maturity framework.

Agency Comments and Our Evaluation

We received written or oral comments on a draft of this report from 25 of the departments and agencies in our review. Of these 25, all but one fully agreed with our recommendation. Nineteen departments and agencies agreed and six partially agreed with our findings. Areas of disagreement for these six centered on (1) the adequacy of the documentation that they provided to demonstrate satisfaction of certain core elements and (2) recognition of steps that they reported taking to satisfy certain core elements after we concluded our review. For the most part, these isolated areas of disagreement did not result in any changes to our findings, for two primary reasons.
First, our findings across the departments and agencies were based on consistently applied evaluation criteria governing the adequacy of documentation and were not adjusted to accommodate any one particular department or agency. Second, our findings represent the state of each architecture program as of March 2006 and thus, to be consistent, do not reflect activities that may have occurred after this time. Beyond these comments, several agencies offered suggestions for improving our framework, which we will consider prior to issuing the next version of the framework. The departments’ and agencies’ respective comments, and our responses as warranted, are as follows:

Agriculture’s Associate CIO provided e-mail comments stating that the department will incorporate our recommendation into its enterprise architecture program plan.

Commerce’s CIO stated in written comments that the department concurred with our findings and will consider actions to address our recommendation. Commerce’s written comments are reproduced in appendix V.

DOD’s Director, Architecture and Interoperability, stated in written comments that the department generally concurred with our recommendation to the five DOD architecture programs included in our review. However, the department stated that it did not concur with the one aspect of the recommendation directed at the GIG architecture concerning independent verification and validation (IV&V) because it believes that its current internal verification and validation activities are sufficient. We do not agree, for two reasons. First, these internal processes are not independently performed. As we have previously reported, IV&V is a recognized hallmark of well-managed programs, including architecture programs, and to be effective, it must be performed by an entity that is independent of the processes and products that are being reviewed.
Second, the scope of the internal verification and validation activities extends to only a subset of the architecture products and management processes.

The department also stated that it did not concur with one aspect of our finding on whether the BEA addresses security. According to DOD, because the GIG addresses security and states that it extends to all defense mission areas, including the business mission area, the BEA in effect addresses security. We do not fully agree. While we acknowledge that the GIG addresses security and states that it is to extend to all DOD mission areas, including the business mission area, it does not describe how this will be accomplished for the BEA. Moreover, nowhere in the BEA is security addressed, either through statement or reference, relative to the architecture’s performance, business, information/data, application/service, and technology products. DOD’s written comments, along with our responses, are reproduced in appendix VI.

Education’s Assistant Secretary for Management and Acting CIO stated in written comments that the department plans to address our findings. Education’s written comments are reproduced in appendix VII.

Energy’s Acting Associate CIO for Information Technology Reform stated in written comments that the department concurs with our report. Energy’s written comments are reproduced in appendix VIII.

DHS’s Director, Departmental GAO/OIG Liaison Office, stated in written comments that the department has taken, and plans to take, steps to address our recommendation. DHS’s written comments, along with our responses to its suggestions for improving our framework, are reproduced in appendix IX. DHS also provided technical comments via e-mail, which we have incorporated, as appropriate, in the report.

HUD’s CIO stated in written comments that the department generally concurs with our findings and is developing a plan to address our recommendation.
The CIO also provided updated information about activities that the department is taking to address security in its architecture. HUD’s written comments are reproduced in appendix X.

Interior’s Assistant Secretary, Policy, Management and Budget, stated in written comments that the department agrees with our findings and recommendation and that it has recently taken action to address them. Interior’s written comments are reproduced in appendix XI.

DOJ’s CIO stated in written comments that our findings accurately reflect the state of the department’s enterprise architecture program and the areas that it needs to address. The CIO added that our report will help guide the department’s architecture program and provided suggestions for improving our framework and its application. DOJ’s written comments, along with our responses to its suggestions, are reproduced in appendix XII.

Labor’s Deputy CIO provided e-mail comments stating that the department concurs with our findings. The Deputy CIO also provided technical comments that we have incorporated, as appropriate, in the report.

State’s Assistant Secretary for Resource Management and Chief Financial Officer provided written comments that summarize actions that the department will take to fully satisfy certain core elements and that suggest some degree of disagreement with our findings relative to three other core elements. First, the department stated that its architecture configuration management plan has been approved by both the State and USAID CIOs. However, it provided no evidence to demonstrate that this was the case as of March 2006 when we concluded our review, and thus we did not change our finding relative to architecture products being under configuration management. Second, the department stated that its enterprise architecture has been approved by State and USAID executive offices. However, it did not provide any documentation showing such approval.
Moreover, it did not identify which executive offices it was referring to so as to allow a determination of whether they were collectively representative of the enterprise. As a result, we did not change our finding relative to whether a committee or group representing the enterprise or an investment review board has approved the current version of the architecture. Third, the department stated that it provided us with IT investment score sheets during our review that demonstrate that investment compliance with the architecture is measured and reported. However, no such score sheets were provided to us. Therefore, we did not change our finding. The department’s written comments, along with more detailed responses, are reproduced in appendix XIII.

Treasury’s Associate CIO for E-Government stated in written comments that the department concurs with our findings and discussed steps being taken to mature its enterprise architecture program. The Associate CIO also stated that our findings confirm the department’s need to provide executive leadership in developing its architecture program and to codify the program into department policy. Treasury’s written comments are reproduced in appendix XIV.

VA’s Deputy Secretary stated in written comments that the department concurred with our recommendation and that it will provide a detailed plan to implement our recommendation. VA’s written comments are reproduced in appendix XV.

EPA’s Acting Assistant Administrator and CIO stated in written comments that the agency generally agreed with our findings and that our assessment is a valuable benchmarking exercise that will help improve agency performance. The agency also provided comments on our findings relative to five core elements. For one of these core elements, the comments directed us to information previously provided about the agency’s architecture committee that corrected our understanding and resulted in our changing our finding about this core element.
With respect to the other four core elements concerning use of an architecture methodology, measurement of progress against program plans, integration of the architecture into investment decision making, and management of architecture change, the comments also directed us to information previously provided but this did not result in any changes to our findings because evidence demonstrating full satisfaction of each core element was not apparent. EPA’s written comments, along with more detailed responses to each, are reproduced in appendix XVI. GSA’s Administrator stated in written comments that the agency concurs with our recommendation. The Administrator added that our findings will be critical as the agency works towards further implementing our framework’s core elements. GSA’s written comments are reproduced in appendix XVII. NASA’s Deputy Administrator stated in written comments that the agency concurs with our recommendation. NASA’s written comments are reproduced in appendix XVIII. NASA’s GAO Liaison also provided technical comments via e-mail, which we have incorporated, as appropriate, in the report. NSF’s CIO provided e-mail comments stating that the agency will use the information in our report, where applicable, for future planning and investment in its architecture program. The CIO also provided technical comments that we have incorporated, as appropriate, in the report. NRC’s GAO liaison provided e-mail comments stating that the agency substantially agrees with our findings and describing activities it has recently taken to address them. OPM’s CIO provided e-mail comments stating that the agency agrees with our findings and describing actions it is taking to address them. SBA’s GAO liaison provided e-mail comments in which the agency disagreed with our findings on two core elements. 
First, and notwithstanding agency officials' statements that its architecture program did not have adequate resources, the liaison did not agree with our "partially satisfied" assessment for this core element because, according to the liaison, the agency has limited discretionary funds and must comply with competing, but unfunded, federal mandates that further constrain discretionary funding for an agency of its size. While we acknowledge SBA's challenges, we note that they are not unlike the resource constraints and competing priorities that face most agencies. Moreover, even when the reasons that an architecture program is not adequately resourced are justified, any assessment of the program's maturity, and thus its likelihood of success, needs to recognize whether adequate resources exist. Therefore, we did not change our finding on this core element. Second, the liaison did not agree with our finding that the agency did not have plans for developing metrics for measuring architecture progress, quality, compliance, and return on investment. However, our review of documentation provided by SBA and cited by the liaison showed that while such plans address metric development for architecture progress, quality, and compliance, they do not address architecture return on investment. Therefore, we did not change our finding that this core element was partially satisfied. SSA's Commissioner stated in written comments that the report is both informative and useful, and that the agency agrees with our recommendation and generally agrees with our findings. Nevertheless, the agency disagreed with our findings on two core elements. First, the agency stated that documentation provided to us showed that it has a methodology for developing, maintaining, and validating its architecture. We do not agree.
In particular, our review of SSA-provided documentation showed that it did not adequately describe the steps to be followed relative to development, maintenance, or validation. Second, the agency stated that having the head of the agency approve the current version of the architecture is satisfied in SSA's case because the Clinger-Cohen Act of 1996 vests its CIO with enterprise architecture approval authority and the CIO has approved the architecture. We do not agree. The core element in our framework concerning enterprise architecture approval by the agency head is derived from federal guidance and best practices upon which our framework is based. This guidance and related practices, and thus our framework, recognize that an enterprise architecture is a corporate asset that is to be owned and implemented by senior management across the enterprise, and that a key characteristic of a mature architecture program is having the architecture approved by the department or agency head. Because the Clinger-Cohen Act does not address approval of an enterprise architecture, our framework's core element for agency head approval of an enterprise architecture is not inconsistent with, and is not superseded by, that act. SSA's written comments, along with more detailed responses, are reproduced in appendix XIX. USAID's Acting Chief Financial Officer stated in written comments that the agency will work with State to implement our recommendation. USAID's written comments are reproduced in appendix XX. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration, and U.S.
Agency for International Development; the Attorney General; the Commissioners of the Nuclear Regulatory Commission and Social Security Administration; the Directors of the National Science Foundation and the Office of Personnel Management; and the Secretaries of the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Interior, Labor, State, Transportation, Treasury, and Veterans Affairs. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this information, please contact me at (202) 512-3439 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XXI. Reported Enterprise Architecture Costs Vary, with Contractors and Personnel Accounting for Most Costs Department- and agency-reported data show wide variability in their costs to develop and maintain their enterprise architectures. Generally, the costs could be allocated to several categories with the majority of costs attributable to contractor support and agency personnel. Architecture Development and Maintenance Costs Vary As we have previously reported, the depth and detail of the architecture to be developed and maintained is dictated by the scope and nature of the enterprise and the extent of enterprise transformation and modernization envisioned. Therefore, the architecture should be tailored to the individual enterprise and that enterprise’s intended use of the architecture. Accordingly, the level of resources that a given department or agency invests in its architecture is likely to vary. 
Departments and agencies reported that they have collectively invested a total of $836 million to date on enterprise architecture development. Across the 27 departments and agencies, these development costs ranged from a low of $2 million by the Department of the Navy to a high of $433 million by the Department of Defense (DOD) on its Business Enterprise Architecture (BEA). Department and agency estimates of the costs to complete their planned architecture development efforts collectively total about $328 million. The departments' and agencies' combined estimate of annual architecture maintenance costs is about $146 million. These development and maintenance estimates, however, do not include the Departments of the Army and Justice because neither provided these cost estimates. Figures 7 through 9 depict the variability of cost data reported by the departments and agencies. Contractor Support Accounts for the Majority of Architecture Development Costs All of the departments and agencies reported developing their architecture in-house with contractor support. All but two of the departments and agencies allocated their respective architecture development costs to the following cost categories: contractor support, agency personnel, tools, methodologies, training, and other. These 26 departments and agencies accounted for about $741 million of the $836 million total development costs cited above. The vast majority (84 percent) of the $741 million was allocated to contractor services ($621 million), followed by agency personnel (13 percent, or $94 million). The remaining $26 million was allocated as follows: $12 million (2 percent) to architecture tools; $9 million (1 percent) to "other" costs; $4 million (1 percent) to architecture methodologies; and $2 million (less than 1 percent) to training. (See fig. 10.)
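The allocation arithmetic above can be sketched as follows. The dollar figures are the rounded amounts reported in the text (with these rounded inputs they sum to $742 million rather than the reported $741 million total), and the script simply recomputes each category's share:

```python
# Recompute the cost-allocation percentages from the reported figures
# (in millions of dollars); small discrepancies with the text reflect
# rounding in the reported amounts.
costs = {
    "contractor support": 621,
    "agency personnel": 94,
    "tools": 12,
    "other": 9,
    "methodologies": 4,
    "training": 2,
}

total = sum(costs.values())
for category, amount in sorted(costs.items(), key=lambda kv: -kv[1]):
    share = 100 * amount / total
    print(f"{category}: ${amount}M ({share:.0f}%)")
```

Running this reproduces the reported breakdown: contractor support at 84 percent, agency personnel at 13 percent, and the remaining categories at 2 percent or less.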
Architecture Development Activities Were Reported as Largest Component of Contractor-Related Costs The departments and agencies allocated the reported $621 million in contractor-related costs to the following five contractor cost categories: architecture development, independent verification and validation, methodology, support services, and other. Of these categories, architecture development activities accounted for the majority of costs— about $594 million (87 percent). The remaining $85 million was allocated as follows: $51 million (7 percent) to support services, $13 million (2 percent) to “other” costs, $11 million (2 percent) to independent verification and validation, and $10 million (1 percent) to methodologies. (See fig. 11.) Departments and Agencies Reported Experiences with Their Architecture Tools and Frameworks Departments and agencies reported additional information related to the implementation of their enterprise architectures. This information includes architecture tools and frameworks. Departments and Agencies Reported Using a Variety of Enterprise Architecture Tools with Varying Degrees of Satisfaction As stated in our enterprise architecture management maturity framework, an automated architecture tool serves as the repository of architecture artifacts, which are the work products that are produced and used to capture and convey architectural information. An agency’s choice of tool should be based on a number of considerations, including agency needs and the size and complexity of the architecture. The departments and agencies reported that they use various automated tools to develop and maintain their enterprise architectures, with 12 reporting that they use more than one tool. In descending order of frequency, the architecture tools identified were System Architect (18 instances), Microsoft Visio (17), Metis (12), Rational Rose (8), and Enterprise Architecture Management System (EAMS) (4). 
In addition, 21 departments and agencies reported using one or more other architecture tools. Figure 12 shows the number of departments and agencies using each architecture tool, including the other tools. The departments and agencies also reported various levels of satisfaction with the different enterprise architecture tools. Specifically, about 75 percent of those using Microsoft Visio were either very or somewhat satisfied with the tool, as compared to about 67 percent of those using Metis, about 63 percent of those using Rational Rose, about 59 percent of those using System Architect, and 25 percent of those using EAMS. This means that the percentage of departments and agencies that were dissatisfied, either somewhat or very, with their respective tools ranged from a high of 75 percent of those using EAMS, to a low of about 6 percent of those using System Architect. No departments or agencies that used Metis, Rational Rose, or Microsoft Visio reported any dissatisfaction. See table 9 for a summary of department and agency reported satisfaction with their respective tools. Departments and Agencies Reported Using a Variety of Enterprise Architecture Frameworks with Varying Levels of Satisfaction As we have previously stated, an enterprise architecture framework provides a formal structure for representing the architecture’s content and serves as the basis for the specific architecture products and artifacts that the department or agency develops and maintains. As such, a framework helps ensure the consistent representation of information from across the organization and supports orderly capture and maintenance of architecture content. The departments and agencies reported using various frameworks to develop and maintain their enterprise architectures. 
The most frequently cited frameworks were the Federal Enterprise Architecture Program Management Office (FEAPMO) Reference Models (25 departments and agencies), the Federal Enterprise Architecture Framework (FEAF) (19 departments and agencies), and the Zachman Framework (17 departments and agencies), with 24 reporting using more than one framework. Other, less frequently reported frameworks were the Department of Defense Architecture Framework (DODAF), the National Institute of Standards and Technology (NIST) framework, and The Open Group Architecture Framework (TOGAF). See figure 13 for a summary of the number of departments and agencies that reported using each framework. Departments and agencies also reported varying levels of satisfaction with their respective architecture frameworks. Specifically, about 72 percent of those using the FEAF indicated that they were either very or somewhat satisfied, and about 67 and 61 percent of those using the Zachman framework and the FEAPMO reference models, respectively, reported that they were similarly satisfied. As table 10 shows, few of the agencies that responded to our survey reported being dissatisfied with any of the frameworks. Objective, Scope, and Methodology Our objective was to determine the current status of federal department and agency enterprise architecture efforts. To accomplish this objective, we focused on 28 enterprise architecture programs relating to 27 major departments and agencies. These 27 included the 24 departments and agencies covered by the Chief Financial Officers Act. In addition, we included the three military services (the Departments of the Army, Air Force, and Navy) at the request of Department of Defense (DOD) officials. For DOD, we also included both of its departmentwide enterprise architecture programs—the Global Information Grid and the Business Enterprise Architecture. The U.S.
Agency for International Development (USAID), which is developing a USAID enterprise architecture and working with the Department of State (State) to develop a Joint Enterprise Architecture, asked that we evaluate its efforts to develop the USAID enterprise architecture. State officials asked that we evaluate their agency's enterprise architecture effort based on the Joint Enterprise Architecture being developed with USAID. We honored both of these requests. Table 11 lists the 28 department and agency enterprise architecture programs that formed the scope of our review. To determine the status of each of these architecture programs, we developed a data collection instrument based on our Enterprise Architecture Management Maturity Framework (EAMMF) and related guidance, such as OMB Circular A-130, guidance published by the federal Chief Information Officers (CIO) Council, and our past reports and guidance on the management and content of enterprise architectures. We pretested this instrument at one department and one agency. Based on the results of the pretest, we modified our instrument as appropriate to ensure that our areas of inquiry were complete and clear. Next, we identified the Chief Architect or comparable official at each of the 27 departments and agencies and met with them to discuss our scope and methodology, share our data collection instrument, and discuss the type and nature of supporting documentation needed to verify responses to our instrument questions. On the basis of department- and agency-provided documentation supporting their respective responses to our data collection instrument, we analyzed the extent to which each satisfied the 31 core elements in our architecture maturity framework. To guide our analysis, we defined detailed evaluation criteria for determining whether a given core element was fully satisfied, partially satisfied, or not satisfied.
The criteria for the stage 2, 3, 4, and 5 core elements are contained in tables 12, 13, 14, and 15, respectively. To fully satisfy a core element, sufficient documentation had to be provided to permit us to verify that all aspects of the core element were met. To partially satisfy a core element, sufficient documentation had to be provided to permit us to verify that at least some aspects of the core element were met. Core elements that were neither fully nor partially satisfied were judged to be not satisfied. Our evaluation included first analyzing the extent to which each department and agency satisfied the core elements in our framework, and then meeting with department and agency representatives to discuss core elements that were not fully satisfied and why. As part of this interaction, we sought, and in some cases were provided, additional supporting documentation. We then considered this documentation in arriving at our final determinations about the degree to which each department and agency satisfied each core element in our framework. In applying our evaluation criteria, we compared results across core elements to identify patterns and issues. Our analysis made use of computer programs that were developed by experienced staff; these programs were independently verified. Through our data collection instrument, we also solicited from each department and agency information on enterprise architecture challenges and benefits, including the extent to which they had been or were expected to be experienced. In addition, we solicited information on architecture costs, including costs to date and estimated costs to complete and maintain each architecture. We also solicited other information, such as use of and satisfaction with architecture tools and frameworks. We analyzed these additional data to determine relevant patterns. We did not independently verify these data.
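The three-level satisfaction determination described above can be expressed as a simple rule. The following is an illustrative sketch, not GAO's actual evaluation tooling; the function name and aspect counts are hypothetical:

```python
# Illustrative sketch of the three-level rule: a core element is
# "fully satisfied" when every verifiable aspect is met, "partially
# satisfied" when at least one (but not all) is met, and
# "not satisfied" otherwise.
def rate_core_element(aspects_verified: int, aspects_total: int) -> str:
    if aspects_total <= 0 or aspects_verified > aspects_total:
        raise ValueError("invalid aspect counts")
    if aspects_verified == aspects_total:
        return "fully satisfied"
    if aspects_verified > 0:
        return "partially satisfied"
    return "not satisfied"
```

For example, under this rule a core element with four aspects of which two were documented would be rated "partially satisfied."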
The results presented in this report reflect the state of department and agency architecture programs as of March 8, 2006. We conducted our work in the Washington, D.C., metropolitan area, from May 2005 to June 2006, in accordance with generally accepted government auditing standards. Detailed Assessments of Individual Departments and Agencies against Our EA Management Maturity Framework Department of Agriculture Table 16 shows USDA's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of the Air Force Table 17 shows the Air Force's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of the Army Table 18 shows Army's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Commerce Table 19 shows Commerce's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Defense – Business Enterprise Architecture Table 20 shows the BEA's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Defense – Global Information Grid Table 21 shows the GIG's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Education Table 22 shows Education's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Energy Table 23 shows Energy's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Health and Human Services Table 24 shows HHS's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Homeland Security Table 25 shows DHS's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Housing and Urban Development Table 26 shows HUD's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of the Interior Table 27 shows DOI's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Justice Table 28 shows DOJ's satisfaction of framework elements in version 1.1 of GAO's EAMMF. Department of Labor Table 29 shows Labor's satisfaction of framework elements in version 1.1 of GAO's EAMMF.
Department of the Navy Table 30 shows Navy’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Department of State Table 31 shows State’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Department of Transportation Table 32 shows Transportation’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Department of the Treasury Table 33 shows the Treasury’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Department of Veterans Affairs Table 34 shows VA’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Environmental Protection Agency Table 35 shows EPA’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. General Services Administration Table 36 shows GSA’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. National Aeronautics and Space Administration Table 37 shows NASA’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. National Science Foundation Table 38 shows NSF’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Nuclear Regulatory Commission Table 39 shows NRC’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Office of Personnel Management Table 40 shows OPM’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Small Business Administration Table 41 shows SBA’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Social Security Administration Table 42 shows SSA’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. U.S. Agency for International Development Table 43 shows USAID’s satisfaction of framework elements in version 1.1 of GAO’s EAMMF. Comments from the Department of Commerce Comments from the Department of Defense GAO Comments 1. We do not agree for two reasons. First, DOD’s internal processes for reviewing and validating the Global Information Grid (GIG), while important and valuable to ensuring architecture quality, are not independently performed. 
As we have previously reported, independent verification and validation is a recognized hallmark of well-managed programs, including architecture programs. To be effective, it should be performed by an entity that is independent of the processes and products being reviewed, to help ensure that the review is unbiased and based on objective evidence. Second, the scope of these internal review and validation efforts extends only to a subset of GIG products and management processes. According to our framework, independent verification and validation should address both the architecture products and the processes used to develop them. 2. While we acknowledge that GIG program plans provide for addressing security, and our findings relative to the GIG reflect this, this is not the case for DOD's Business Enterprise Architecture (BEA). More specifically, how security will be addressed in the BEA performance, business, information/data, application/service, and technology products is not addressed in the BEA either by explicit statement or by reference. This finding relative to the BEA is consistent with our recent report on DOD's Business System Modernization. Comments from the Department of Education Comments from the Department of Energy Comments from the Department of Homeland Security GAO Comments 1. We acknowledge this recommendation and offer three comments in response. First, we have taken a number of steps over the last 5 years to coordinate our framework with OMB. For example, in 2002, we based version 1.0 of our framework on the OMB-sponsored CIO Council Practical Guide to Federal Enterprise Architecture, and we obtained concurrence on the framework from the practical guide's principal authors.
Further, we provided a draft of this version to OMB for comment, and in our 2002 report in which we assessed federal departments and agencies against this version, we recommended that OMB use the framework to guide and assess agency architecture efforts. In addition, in developing the second version of our framework in 2003, we solicited comments from OMB as well as federal departments and agencies. We also reiterated our recommendation to OMB to use the framework in our 2003 report in which we assessed federal departments and agencies against the second version of the framework. Second, we have discussed alignment of our framework and OMB’s architecture assessment tool with OMB officials. For example, after OMB developed the first version of its architecture assessment tool in 2004, we met with OMB officials to discuss our respective tools and periodic agency assessments. We also discussed OMB’s plans for issuing the next version of its assessment tool and how this next version would align with our framework. At that time, we advocated the development of comprehensive federal standards governing all aspects of architecture development, maintenance, and use. In our view, neither our framework nor OMB’s assessment tool provide such comprehensive standards, and in the case of our framework, it is not intended to provide such standards. Nevertheless, we plan to continue to evolve, refine, and improve our framework, and will be issuing an updated version that incorporates lessons learned from the results of this review. In doing so, we will continue to solicit comments from federal departments and agencies, including OMB. Third, we believe that while our framework and OMB’s assessment tool are not identical, they nevertheless consist of a common cadre of best practices and characteristics, as well as other relevant criteria that, taken together, are complementary and provide greater direction to, and visibility into, agency architecture programs than either does alone. 
Comments from the Department of Housing and Urban Development Comments from the Department of the Interior Comments from the Department of Justice GAO Comments 1. See DHS comment 1 in appendix IX. Also, while we do not have a basis for commenting on the content of the department’s OMB evaluation submission package because we did not receive it, we would note that the information that we solicit to evaluate a department or agency against our framework includes only information that should be readily available as part of any well-managed architecture program. 2. We understand the principles of federated and segmented architectures, but would emphasize that our framework is intentionally neutral with respect to these and other architecture approaches (e.g., service-oriented). That is, the scope of the framework, by design, does not extend to defining how various architecture approaches should specifically be pursued, although we recognize that supplemental guidance on this approach would be useful. Our framework was created to organize fundamental (core) architecture management practices and characteristics (elements) into a logical progression. As such, it was intended to fill an architecture management void that existed in 2001 and thereby provide the context for more detailed standards and guidance in a variety of areas. It was not intended to be the single source of all relevant architecture guidance. 3. We agree, and believe that this report, by clearly identifying those departments and agencies that have fully satisfied each core element, serves as the only readily available reference tool of which we are aware for gaining such best practice insights. Comments from the Department of State GAO Comments 1. We acknowledge the comment that both CIOs approved the configuration management plan. However, the department did not provide us with any documentation to support this statement. 2. 
We acknowledge the comment that the architecture has been approved by State and USAID executive offices. However, the department did not provide any documentation identifying the executive offices to which it refers, which would allow a determination of whether they were collectively representative of the enterprise. Moreover, as we state in the report, the chief architect told us that a body representative of the enterprise has not approved the current version of the architecture, and according to documentation provided, the Joint Management Council is to be responsible for approving the architecture. 3. We acknowledge that steps have been taken and are planned to treat the enterprise architecture as an integral part of the investment management process, as our report findings reflect. However, our point with respect to this core element is whether the department's investment portfolio compliance with the architecture is being measured and reported to senior leadership. In this regard, State did not provide the score sheets referred to in its comments, nor did it provide any other evidence that such reporting is occurring. Comments from the Department of the Treasury Comments from the Department of Veterans Affairs Comments from the Environmental Protection Agency GAO Comments 1. We agree and have modified our report to recognize evidence contained in the documents. 2. We do not agree. The 2002 documents do not contain steps for architecture maintenance. Further, evidence was not provided demonstrating that the recently prepared methodology documents were approved prior to the completion of our evaluation. 3. We do not agree. While we do not question whether EPA's EA Transition Strategy and Sequencing Plan illustrates how annual progress in achieving the target architectural environment is measured and reported, this is not the focus of this core element.
Rather, this core element addresses whether progress against the architecture program management plan is tracked and reported. While we acknowledge EPA’s comment that it tracks and reports such progress against plans on a monthly basis, neither a program plan nor reports of progress against this plan were provided as documentary evidence to support this statement. 4. We do not agree. First, while EPA’s IT investment management process provides for consideration of the enterprise architecture in investment selection and control activities, no evidence was provided demonstrating that the process has been implemented. Second, while EPA provided a description of its architecture change management process, no evidence was provided that this process has been approved and implemented. Comments from the General Services Administration Comments from the National Aeronautics and Space Administration Comments from the Social Security Administration GAO Comments 1. We do not agree. Neither the governance committee charter nor the configuration management plan explicitly describe a methodology that includes detailed steps to be followed for developing, maintaining, and validating the architecture. Rather, these documents describe, for example, the responsibilities of the architecture governance committee and architecture configuration management procedures. 2. We do not agree. The core element in our framework concerning enterprise architecture approval by the agency head is derived from federal guidance and best practices upon which our framework is based. This guidance and related practices, and thus our framework, recognize that an enterprise architecture is a corporate asset that is to be owned and implemented by senior management across the enterprise, and that a key characteristic of a mature architecture program is having the architecture approved by the department or agency head. 
Because the Clinger-Cohen Act does not address approval of an enterprise architecture, our framework's core element for agency head approval of an enterprise architecture is not inconsistent with, and is not superseded by, that act.

Comments from the U.S. Agency for International Development

GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the person named above, Edward Ballard, Naba Barkakati, Mark Bird, Jeremy Canfield, Jamey Collins, Ed Derocher, Neil Doherty, Mary J. Dorsey, Marianna J. Dunn, Joshua Eisenberg, Michael Holland, Valerie Hopkins, James Houtz, Ashfaq Huda, Cathy Hurley, Cynthia Jackson, Donna Wagner Jones, Ruby Jones, Stu Kaufman, Sandra Kerr, George Kovachick, Neela Lakhmani, Anh Le, Stephanie Lee, Jayne Litzinger, Teresa M. Neven, Freda Paintsil, Altony Rice, Keith Rhodes, Teresa Smith, Mark Stefan, Dr. Rona Stillman, Amos Tevelow, and Jennifer Vitalbo made key contributions to this report.
A well-defined enterprise architecture is an essential tool for leveraging information technology (IT) to transform business and mission operations. GAO's experience has shown that attempting to modernize and evolve IT environments without an architecture to guide and constrain investments results in operations and systems that are duplicative, not well integrated, costly to maintain, and ineffective in supporting mission goals. In light of the importance of enterprise architectures, GAO developed a five-stage architecture management maturity framework that defines what needs to be done to effectively manage an architecture program. Under GAO's framework, a fully mature architecture program is one that satisfies all elements of all stages of the framework. As agreed, GAO's objective was to determine the status of major federal department and agency enterprise architecture efforts.

The state of the enterprise architecture programs at the 27 major federal departments and agencies is mixed: several have very immature programs, several have more mature programs, and most fall somewhere in between. Collectively, the majority of these architecture efforts can be viewed as works in progress, with much remaining to be accomplished before the federal government as a whole fully realizes their transformational value. More specifically, seven architecture programs have advanced beyond the initial stage of the GAO framework, meaning that they have fully satisfied all core elements associated with the framework's second stage (establishing the management foundation for developing, using, and maintaining the architecture). Of these seven, three have also fully satisfied all the core elements associated with the third stage (developing the architecture). None have fully satisfied all of the core elements associated with the fourth (completing the architecture) and fifth (leveraging the architecture for organizational change) stages.
Nevertheless, most have fully satisfied a number of the core elements in stages beyond the one in which they have met all core elements; collectively, the 27 satisfy about 80, 78, 61, and 52 percent of the stage two through five core elements, respectively. Further, most have partially satisfied additional elements across all the stages, and seven need to fully satisfy five or fewer elements to achieve the fifth stage. The key to these departments and agencies building upon their current status, and ultimately realizing the benefits that they cited architectures as providing, is sustained executive leadership, as virtually all the challenges that they reported can be addressed by such leadership. Examples of the challenges are organizational parochialism and cultural resistance, obtaining adequate resources (human capital and funding), and gaining top management understanding; examples of benefits cited are better information sharing, consolidation, improved productivity, and reduced costs.
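The staging rule described above (a program's achieved stage is the highest stage whose core elements, and those of every lower stage, are all fully satisfied, even if some higher-stage elements are met early) can be sketched in a few lines. This is an illustrative sketch only; the stage names and element identifiers below are hypothetical placeholders, not the framework's actual core elements.

```python
# Illustrative sketch of the GAO maturity-stage rule described above.
# Element names per stage are hypothetical placeholders, not the
# framework's actual core elements.

def achieved_stage(satisfied: dict[int, set[str]], required: dict[int, set[str]]) -> int:
    """Return the highest stage s such that the core elements of every
    stage up through s are fully satisfied. Stage 1 is the initial stage."""
    stage = 1
    for s in sorted(required):
        if required[s] <= satisfied.get(s, set()):
            stage = s
        else:
            break  # a gap at this stage caps the achieved stage below it
    return stage

# Hypothetical framework: stages 2-5, each with a few named core elements.
required = {
    2: {"committee", "program_plan", "chief_architect"},
    3: {"products_under_development"},
    4: {"products_approved"},
    5: {"compliance_measured"},
}

# A program that fully meets stages 2 and 3, skips stage 4, and has
# satisfied one stage 5 element early.
satisfied = {
    2: {"committee", "program_plan", "chief_architect"},
    3: {"products_under_development"},
    4: set(),
    5: {"compliance_measured"},
}

print(achieved_stage(satisfied, required))  # 3
```

The early-satisfied stage 5 element does not raise the achieved stage, which mirrors how the report can count programs as "stage 3" while still crediting them with core elements satisfied in higher stages.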
Scope and Methodology

We based our review on an assessment of DOD's implementation of its own directives and instructions for new automated information systems or the selection and implementation of standard migratory systems under the CIM initiative, as these projects relate to the depot maintenance business area. These directives, referred to as Life-Cycle Management, contain the same steps and milestones as GAO's own methodology for reviewing large automated information systems and projects. Our audit was performed between April 1994 and March 1995 in accordance with generally accepted government auditing standards. We performed our work primarily at the offices of the Deputy Under Secretary of Defense for Logistics in Washington, D.C., and the Joint Logistics Systems Center, Wright-Patterson Air Force Base, Ohio. Appendix I details our scope and methodology. The Deputy Under Secretary of Defense for Logistics provided written comments on a draft of this report. These comments are discussed at the end of this report and presented, along with our evaluation, in appendix II.

Significant Problems in DOD Depot Maintenance

Each year DOD spends about $13 billion to manufacture, overhaul, and repair more than 2 million items at its more than 27 maintenance depots. The depots have primary responsibility for the maintenance, overhaul, and repair of both large items, such as tanks, ships, and airplanes, and small and intricate ones, such as communications and electronic components. Depot maintenance consists of three basic business processes: project management (maintenance of major end items, such as airplanes, ships, and tanks), reparables management (maintenance of items such as engines, transmissions, and radios), and specialized support (various individual functions, such as tracking hazardous materials, tools, and test samples).
For years, GAO and DOD have reported on major problems facing the depot maintenance area, principally that DOD's depot management structure has not resulted in substantial competition, interservicing, or reduction of excess capacity and duplication of effort. For example:

- In 1983, GAO testified that DOD had not moved quickly to eliminate duplicate capability and excess capacity within depot maintenance because of (1) parochial interests, (2) lack of central authority, and (3) absence of DOD-wide planning.
- In 1993, the Joint Chiefs of Staff reported that closing a significant number of depots was needed to reduce excess capacity and that significant savings could come from consolidating depot workload across service boundaries.
- In May 1993, we testified that the Joint Chiefs of Staff had identified 25 to 50 percent more depot capacity than would be needed in the future and that this problem had been exacerbated by (1) the end of the cold war, (2) reduction of defense systems and equipment, (3) retirement of less reliable and more maintenance-intensive systems, and (4) the private sector's push for a greater share of the depot maintenance workload.
- In 1993, we reported that internal controls at Army depots did not adequately safeguard millions of dollars of weapons and equipment during the maintenance process. Specifically, we reported that poor storage practices increased maintenance costs, depot inventory records were not accurate, and the Army's depot cost accounting system did not capture actual job costs.
- In 1995, DOD reported to the Congress that its financial systems and databases were inadequate to provide the type of information needed to determine the cost-effectiveness of greater public-private competition for providing depot maintenance services.

Over the last several years, DOD has taken a number of actions to correct these problems.
One of these actions is its Corporate Information Management initiative, which was established to prepare DOD for future budget reductions and post-cold war readiness requirements through (1) streamlining business processes, (2) integrating essential data systems, and (3) eliminating duplicate or redundant information systems across the Department. The DMSS project was undertaken as part of this effort.

Strategy for Addressing Depot Maintenance Problems

To improve its depot maintenance operations and manage its resources more efficiently, the Principal Staff Assistant (PSA) for logistics, in November 1991, established the Joint Logistics Systems Center (JLSC). JLSC is to facilitate the improvement of depot maintenance processes by identifying business process improvements and managing the development and deployment of a standard depot maintenance system to replace the service-unique systems currently used. In January 1994, JLSC prepared an economic analysis recommending development and deployment of a standard depot maintenance information system—called the Depot Maintenance Resource Planning (DMRP) system, which consisted of four system components. Currently, the standard information system consists of eight components and is called DMSS. The following table identifies the core depot maintenance business processes and the eight system components selected to support them. By implementing DMRP, DOD expected a return on its investment of $2.6 billion through business process improvements and savings derived from replacing more than 60 service-unique automated depot maintenance information systems. Specifically, these benefits are to be derived from (1) reduced direct and indirect labor costs, (2) reduced direct and indirect material costs, (3) reduced costs associated with shutting down old information technology (legacy) systems, (4) shorter cycle time for certain types of maintenance and inspections, and (5) automation of many currently paper-based work processes.
Our concerns with this strategy are twofold. First, DOD did not base its decision to develop and deploy DMSS on convincing analyses of expected system development and deployment costs or on detailed assessments of DMSS's economic and technical risks. Further, Defense did not obtain the independent reviews by the MAISRC and approvals by the MDA of the project's milestones, which are designed to ensure that decisions are consistent with sound business practice. Second, we believe that DOD needs to consider reengineering entire processes before implementing system changes if it is to achieve the dramatic reductions in operational support costs called for by CIM.

DUSD(L) Did Not Use Sufficient Analyses in Selecting DMSS

In selecting DMSS as DOD's initial step toward improving defense maintenance depot operations, the Deputy Under Secretary of Defense for Logistics (DUSD(L)) did not base its decision on sufficient analyses of expected system development and deployment costs or detailed assessments of DMSS's economic and technical risks. Further, DUSD(L) did not obtain independent milestone reviews and approvals, which are designed to ensure that (1) decisions are consistent with sound business principles and (2) risks inherent in large information system projects are adequately managed. Thus, even the marginal improvements Defense expects from DMSS may never be achieved. Defense directives require that decisions to develop and deploy information systems be based on convincing, well-supported estimates of project costs, benefits, and risks. These directives establish a disciplined process for selecting the best projects based on comparisons of competing alternatives. Defense's principal means for comparing alternatives is a functional economic analysis. For each alternative, this analysis identifies resource, schedule, and other critical project characteristics and presents estimates of the costs, benefits, and risks.
The Office of the Assistant Secretary of Defense for Program Analysis and Evaluation is required to validate these estimates to help ensure that the economic analysis presents compelling quantitative data for each of the alternatives being evaluated. Once an alternative is chosen, the analysis becomes the basis for project approval. Any significant change in the project’s expected costs, benefits, or risks requires that the project selection and direction be reevaluated. Also, DOD directives established the Major Automated Information System Review Council (MAISRC) to provide oversight of individual major information system projects. At each development milestone for proposed information system projects, MAISRC reviews these projects to determine if they are consistent with DOD policies and directives. MAISRC then recommends continuation, redirection, or termination of each project to the project’s Milestone Decision Authority (MDA). DOD’s current policy is to ensure that funds are not obligated for any automated information system until the MAISRC milestone review and MDA approval are complete. In January 1994, following the logistics CIM migration strategy, the JLSC evaluated three alternatives for improving the core Defense depot maintenance functions. The alternatives considered involved (1) maintaining status quo by allowing each service to continue to operate its own information system with some new development under JLSC’s purview, (2) choosing a corporate information system from among the services and establishing it as the DOD-wide standard system—deploying it either immediately and then enhancing it over a 3-year period or deploying it after enhancements, and (3) developing a new system. 
In selecting an alternative, DUSD(L) did not evaluate sufficiently accurate cost data or detailed risk assessments, nor did it obtain the milestone reviews and approvals designed to ensure that automated information systems are selected consistent with sound business practices.

DUSD(L) Selected DMSS Without Sufficient Cost Data

DUSD(L) selected DMSS without analyzing the system's full development and deployment costs. Instead, it relied on a functional economic analysis of a previously proposed project—the Depot Maintenance Resource Planning (DMRP) system. This analysis significantly understated DMSS costs by including costs for only some components, and it understated costs for the components it did include. In early 1994, the JLSC Commander recognized that the DMRP economic analysis did not reflect DMSS as defined. According to JLSC officials, DUSD(L) used the DMRP functional economic analysis as a basis for selecting DMSS because it was the best analysis available at the time. The DMRP analysis estimated project costs at $988 million—$582 million to develop and deploy the system and $406 million to operate and support it over a 10-year period. These officials stated that the DMRP analysis fairly represented the DMSS project. However, the Office of the Assistant Secretary of Defense for Program Analysis and Evaluation reviewed this analysis and found its level of detail insufficient to validate either cost or benefit estimates. Although we also found insufficient detail supporting cost and benefit estimates, we believe that DMSS will cost significantly more than the DMRP analysis estimated. As shown in table 2, the DMRP economic analysis included costs for only three of the eight DMSS system components. Therefore, the analysis understated DMSS costs by the amount necessary to develop and deploy the five additional system components. As of February 1995, JLSC had not completed a cost estimate for these five additional components.
In addition, the DMRP economic analysis underestimated costs for system components common to both the DMRP and DMSS projects. Specifically, it underestimated licensing costs for using commercially owned software, costs to exchange data with other information systems, and costs to install the system. One example of underestimated licensing costs involves a key DMSS component—the Air Force's Depot Maintenance Management Information System (DMMIS). Over the last 10 years, the Air Force spent over $200 million to develop DMMIS for use in its maintenance depots. Although DMMIS was originally designed around a core of commercially available application and database software, the Air Force chose to extensively modify this proprietary software to better meet its unique depot maintenance requirements. However, all software versions remain the sole property of the commercial developers. As a result, to use the DMMIS system, DOD will have to pay license fees to several commercial software developers. Although the DMRP economic analysis did not specify DMMIS license fee costs, JLSC officials stated that $1.6 million per site was included in the deployment cost totals. In February 1995, JLSC estimated that DMMIS license fees for just the development facility and two operational sites would exceed $13 million, including a one-time payment of over $5 million and nearly $850,000 each year over the system's life. As of April 1995, JLSC expected to run DMMIS at three additional sites. Licensing agreements had yet to be negotiated for these sites. The DMRP analysis also underestimated costs to develop the interfaces needed to allow system components to exchange data with the information systems currently used by the services to accomplish their missions. While the analysis recognized that system components must interface with other systems, it did not include the full cost of these interfaces.
According to JLSC officials, some costs to interface DMMIS and the Programmed Depot Maintenance Scheduling System were included in the $37.7 million estimate for developing the system's software applications. However, they did not specify these costs. Although JLSC has yet to identify them, DMSS will require numerous system interfaces if it is to be the corporate depot maintenance system. For example, prior work done by the Air Force to deploy DMMIS, before it was selected as a DMSS component, identified 73 interfaces required just to meet Air Force requirements. As a DMSS system component, DMMIS will need additional interfaces to meet Army, Navy, and Marine Corps requirements. Further, interfaces for the remaining seven DMSS system components must be identified and developed. In February 1995, JLSC's Deputy Director for Depot Maintenance estimated that $70 million not included in the DMRP economic analysis would be needed to develop the DMSS interfaces. Finally, the DMRP economic analysis underestimated costs for deploying the system. The analysis estimated $497 million for system deployment, including nonrecurring costs of $17 million to install the system at each operational site. Since DMSS was initiated, JLSC has identified an additional $60 million that would be needed to deploy the system. In May 1994, the JLSC Commander told the DOD Comptroller about the DMRP economic analysis. The Commander stated that the economic analysis briefed to DUSD(L) in December 1993 and submitted for the DOD Comptroller's review in early 1994 did not reflect DMSS as it was then defined. Further, he stated that, to accommodate changes requested by the Comptroller and the Office of Program Analysis and Evaluation and to reflect the current DMSS, JLSC was developing a new analysis. According to JLSC officials, the final economic analysis is expected to be completed in July 1995.
However, by that time Defense will have spent more than $200 million to develop and deploy DMSS.

DUSD(L) Selected DMSS Without Fully Assessing Risks

Although any large automated information system development project is inherently a high-risk venture, DUSD(L) decided to develop and deploy DMSS without first fully assessing the risks to the project's success. Without a detailed risk assessment, DOD has no assurance that DUSD(L) selected the best information system alternative for improving defense depot maintenance operations, nor can it plan actions designed to avoid or lessen the potential for project delay, overspending, or failure. DOD has long recognized that project success relies on its ability to manage risk. The Defense Systems Management College guide on risk management states that, as a minimum, a prudent manager should attempt to understand system-specific risks and quantify their potential impact for each alternative. While the earlier DMRP analysis identified several potential risks associated with each alternative being considered, it did not quantitatively or qualitatively compare these risks. Additionally, it did not contain any plans to mitigate potential project risks. After DUSD(L) selected DMSS, JLSC convened a customer advisory team in April 1994 to identify and generate ideas on how to mitigate DMSS risks. This team, with membership from all the military services, identified a number of risks facing DMSS, such as (1) incomplete design and testing of the two core DMSS systems—the Depot Maintenance Management Information System and the Baseline Advanced Industrial Management System, (2) not enough personnel to implement and maintain the system, (3) inability to obtain the service cooperation needed to successfully build and deploy the system, (4) numerous external and internal interface issues, and (5) depot maintenance workers' reluctance to work with an entirely new system.
JLSC requested another high-level risk analysis of the depot maintenance standard system strategy from the Defense Information Systems Agency's Center For Integration & Interoperability (CFI&I). In a July 1994 briefing to JLSC, CFI&I said that program management posed the greatest risks to DMSS success. CFI&I said the project lacked (1) integrated detailed planning specifying the activities and milestones to be achieved at each depot and (2) coordination of the events necessary to implement the system, and that, as a result, there was no assurance that DMSS could meet cost, schedule, and performance expectations. In addition, CFI&I identified a number of technical risks to DMSS implementation, including (1) no encompassing data migration strategy, (2) incomplete and inadequate understanding of the requirement to interface DMSS with other current service systems, (3) difficulties associated with maintaining modified commercially owned software, and (4) incomplete development and testing of two of the system components. In October 1994, JLSC began an iterative detailed assessment of DMSS to quantify risks, identify possible mitigation or avoidance steps, and develop a risk management plan. As of April 1995, JLSC was continuing this assessment.

Project Milestone Reviews and Approvals Not Obtained

Although Defense directives establish MAISRC review and MDA approval procedures to ensure that decisions to develop major automated information systems are based on sound business principles, as of February 1995, DUSD(L) had not scheduled a date for an initial milestone review of the entire DMSS project. Under MAISRC guidelines, a project should be reviewed and approved at each of five decision milestones before substantial funds are obligated. Despite this DOD policy, DUSD(L) spent nearly $180 million in fiscal years 1993 and 1994 on DMSS, and budgeted $111.2 million in fiscal year 1995 and $95.1 million in fiscal year 1996.
These budgeted amounts are for the development and deployment of DMSS and do not include amounts to maintain and operate the current systems. According to the director of logistics systems development within DUSD(L), DMSS will be submitted for MAISRC review and MDA approval during 1995. However, we found that, as of February 1995, DMSS was on the MAISRC review schedule for 1995 but no date for the review had been established. The director also indicated that continued implementation of DMSS at selected prototype sites is justified based on past MAISRC reviews and MDA approvals of the DMMIS component of the project. However, Defense directives require programs that consist of a number of component systems to be reviewed by MAISRC and approved by the MDA as a single project. Without these reviews and approvals, DOD has less assurance that the decision to select DMSS was consistent with sound business practices. Also, DOD did not have the opportunity afforded by MAISRC review and MDA approval to redirect or terminate DMSS before investing significant amounts of money.

DUSD(L) Did Not Consider Reengineering Depot Maintenance Processes Before Selecting DMSS

In evaluating alternatives to improve depot maintenance operations, DUSD(L) did not consider reengineering alternatives, which offer opportunities to dramatically improve depot maintenance business processes and greatly reduce the costs of operations. Even if successful, DOD's strategy to develop and deploy an information system designed to incrementally improve depot maintenance processes will provide only marginal cost reductions and productivity increases rather than the fundamental and dramatic changes needed to meet the challenges of maintaining military readiness in the 1990s.

Reengineering of Business Processes Can Offer Dramatic Improvement

The defense community must make fundamental changes in the way it performs its activities if it is to provide the nation with the defense it requires and demands. ...
Incremental improvements ... will not shift the Department to a higher plateau of performance. Breakthrough innovation and change—a new paradigm for defense activities—is needed to meet the challenges of the 1990s.

In January 1991, the Deputy Secretary of Defense endorsed a CIM implementation plan under which DOD would "reengineer," or thoroughly study and redesign, its business processes before standardizing its information systems. The Deputy Secretary understood that DOD would have to improve the way it does business to achieve dramatic cost reductions and productivity increases and that it could not merely standardize old, inefficient processes and systems. Simply stated, doing the same thing faster will not provide dramatic improvement. Though reengineering efforts in DOD have been limited in scope and represent a small portion of operations, significant improvements have been achieved by reengineering specific logistics business areas. For example, in 1980, the Defense Construction Supply Center established a contractor-operated parts depot program that reduced order and delivery time from 70 to 35 days—a 50-percent reduction. In addition, the private sector, which also has major industrial centers that use similar maintenance and repair supplies for regularly scheduled maintenance of equipment, has undergone successful reengineering efforts when faced with increasing costs associated with acquiring supplies, spare parts, and raw materials. For example, since 1986, through customized agreements with suppliers and the use of new inventory management practices, an Ohio steel firm, the Timken Company, has reduced maintenance and repair inventories by $4 million (32 percent). The company also eliminated six inventory storerooms, improved inventory availability, and increased the accuracy of physical inventories. We have also reported that, by adopting certain commercial practices, Defense could achieve similarly dramatic improvements in depot maintenance.
In 1993, for example, we found that a number of private firms provide third-party logistics transportation services, such as freight bill processing, pre-auditing, verifying, and generating management reports with freight payment. Two of these firms proposed to perform transportation services for DOD at a cost ranging from $0.75 to $1.25 per government bill of lading. DOD spends about $5.70 per freight bill to provide these same services. If DOD used these firms or changed its process to obtain similar performance, it could reduce its costs for these services by more than 75 percent.

Reengineering Not Considered for Improving Depot Maintenance

Instead of first considering opportunities to reengineer business processes, DUSD(L) chose a strategy that focuses on the development and deployment of a DOD standard depot maintenance information system. Under this strategy, business processes are to be incrementally improved as DMSS is deployed. Reengineering of these processes will be considered only after system deployment. Currently, DMSS deployment is expected to be completed by fiscal year 1999. Accordingly, fundamental and dramatic changes to the depot maintenance processes will be delayed for years. According to DOD officials, the vast number of different logistics processes and supporting information systems across the Department must be reduced before significant improvements can be made. These officials further stated that, once fully deployed, the Defense standard information systems will form the foundation upon which significant improvements to current depot maintenance practices can be made. This foundation will eliminate the need to implement major changes across the multitude of information systems and business processes that exist throughout the services. Additionally, JLSC officials emphasized that improvements are being made to depot maintenance processes as DMSS is being deployed.
According to these officials, benefits being achieved from these improvements include (1) cost reductions of $7 million in shop floor material recovered at the Air Logistics Center in Ogden, Utah, and an $8 million reduction in purchases of hazardous material at Hill Air Force Base and (2) performance increases from a 30-percent reduction in labor hours for overhauls of Los Angeles-class submarines and two additional B-1 bombers processed through the Oklahoma City Air Logistics Center. While these examples show that incremental improvements are being made, JLSC estimated that, overall, the DMSS project would reduce depot operational costs by $2.6 billion over a 10-year period ending in fiscal year 2003, from $112.9 billion to $110.3 billion, a net cost reduction of about 2.3 percent. We believe that standardizing existing information systems and incrementally improving business processes will neither position DOD to reengineer its processes nor dramatically improve their operations. Government and private industry have learned that an initial focus on information system deployment may make future reengineering efforts more difficult by entrenching inefficient and ineffective work processes. Accomplishing order-of-magnitude improvements in both government and private organizations requires reengineering—fundamental redesign—of critical work processes. Information system initiatives that do not first reengineer business processes typically fail or attain only a fraction of their potential. In addition, case studies of private organizations presented in Reengineering the Corporation: A Manifesto for Business Revolution revealed that companies often commit a fundamental error in viewing automation as the answer to enhancing or streamlining their business operations. They spend billions of dollars to automate existing processes so they can perform the same work faster.
Companies that initially focused on information technology managed only to entrench inefficient processes and made future changes to these processes more difficult.

... Defense has focused on trying to pick the best of its hundreds of existing automated systems and standardizing their use across the military components without thoroughly analyzing the technical, cost, and performance risks of this approach. As a result, Defense may lock itself into automated ways of doing business that do not serve its goals for the future and cannot provide promised benefits and cost savings.

Our review of DUSD(L)'s depot maintenance standard system strategy confirms this. The benefits it expects from implementing DMSS are relatively meager when compared with the results other organizations are achieving through reengineering.

Conclusion

We agree with DOD's concern over depot maintenance operations. Further, we agree that accurate information on depot operations and costs is critical to improving this important readiness-related support process. However, the decision to develop DMSS was based on insufficient cost data and little consideration of identified risks. Efficient, cost-effective depot maintenance operations are important to supporting the Department's military operations, and major investment decisions—such as DMSS—represent significant opportunities to make dramatic improvements in core business processes. DOD's proposed solution, however, was made without due consideration of reengineering alternatives, which offer dramatic improvements and greatly reduced costs of depot operations. DOD's failure to consider reengineering alternatives and to fully consider the costs and risks associated with DMSS will likely limit those opportunities.
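The percentage savings cited in this report follow directly from the dollar and day figures given; the arithmetic can be checked as follows (all inputs are the figures stated in the report):

```python
# Arithmetic checks of savings figures cited in this report.

# DMSS: $2.6 billion reduction on $112.9 billion of 10-year depot costs.
dmss_reduction_pct = (112.9 - 110.3) / 112.9 * 100
print(round(dmss_reduction_pct, 1))  # 2.3 (percent)

# Freight bills: DOD's $5.70 per bill vs. the higher third-party quote of $1.25.
freight_savings_pct = (5.70 - 1.25) / 5.70 * 100
print(round(freight_savings_pct, 1))  # 78.1, i.e., "more than 75 percent"

# Contractor-operated parts depot: order and delivery time cut from 70 to 35 days.
delivery_reduction_pct = (70 - 35) / 70 * 100
print(delivery_reduction_pct)  # 50.0 (percent)
```

The freight-bill check uses the top of the quoted $0.75 to $1.25 range, so the report's "more than 75 percent" is conservative; using $0.75 would show a reduction of about 87 percent.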
Recommendations To achieve the dramatic improvements in effectiveness and efficiency of its depot maintenance operations that Defense has stated are critical to meet the challenges of the 1990s and beyond, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense for Logistics to complete the following actions:

- Prepare a full set of project documentation that describes the project and validates that it is the best alternative for improving depot operations. At a minimum, this documentation should include (1) a final functional economic analysis containing a comprehensive evaluation of information system alternatives, which formulates and compares estimates of the total costs and benefits of each alternative; (2) identification of the economic and technical risks associated with the success of each project alternative and development of a plan to avoid or mitigate these risks; and (3) a comprehensive implementation plan that identifies the actions to be taken, schedules and milestones for these actions, and the performance measures to be used to manage the system deployment.

- Obtain Major Automated Information Systems Review Council (MAISRC) review and Milestone Decision Authority (MDA) approval of the project documentation prior to spending any fiscal year 1996 funds on DMSS development and deployment.

- Conduct a thorough study of opportunities to reengineer the depot maintenance business processes. Reengineering alternatives identified by this study should be analyzed as part of the final functional economic analysis and submitted for MAISRC review and MDA approval.

Agency Comments and Our Evaluation The Department of Defense provided written comments on a draft of this report. The Deputy Under Secretary of Defense for Logistics generally disagreed with our findings, but partially concurred with our recommendations. Defense’s specific comments are summarized below and presented, along with our rebuttals, in appendix II. 
In its comments, Defense took the following positions: (1) the DMSS functional economic analyses of March 1993 and January 1994 provided sufficient cost, benefit, and risk information to select the best alternative for improving depot maintenance business processes; (2) DMSS is not one system requiring MAISRC oversight, and the individual system components meeting MAISRC oversight criteria have been reviewed and approved; and (3) process reengineering is being accomplished concurrently with DMSS development and deployment. Defense asserts that by following this strategy, it has achieved substantial depot maintenance improvements yielding significant cost reductions, and it expects even more dramatic improvements and savings in the future. We disagree with Defense’s positions on these matters. Specifically: (1) the March 1993 and January 1994 FEAs were insufficient because they did not include cost and benefit estimates for DMSS, contained cost estimates of questionable accuracy, and did not include cost and benefit estimates for five of the DMSS system components; (2) Defense CIM guidance specifically directs that information system projects be reviewed and approved in accordance with Defense life-cycle management directives; under these directives, DMSS is required to be reviewed by the MAISRC and approved by the MDA at five milestone decision points before any funds are spent to develop the system, and these directives further state that projects consisting of several components shall be considered a single automated system; and (3) Defense’s approach to improving depot maintenance business processes focuses on selecting the best currently operating information systems and implementing these selected systems across all Defense components; while this approach may improve overall DOD business processes and may provide incremental benefits, it cannot be construed as reengineering. 
DMSS is designed to provide only incremental improvements to existing business processes, and it is clear from Defense’s own benefit projections that it will not result in the dramatic improvements that are possible by considering reengineering-based solutions. While Defense claims that DMSS has improved depot maintenance processes and resulted in some reductions in operational costs, DUSD(L)’s focus on information system selection and implementation may inhibit reengineering efforts by entrenching current work processes. Although Defense disagreed with our findings, it agreed with our recommendation to prepare a full set of project documentation that describes DMSS and validates that it is the best alternative to improve depot maintenance. It partially concurred with our recommendation on obtaining MAISRC review but specifically disagreed with our recommendation to thoroughly study opportunities to reengineer depot maintenance business processes. Our recommendation for MAISRC review is consistent with the review requirements established in Defense life-cycle management directives. Further, because reengineering offers order-of-magnitude improvements and cost reductions, Defense cannot afford to deploy DMSS beyond the first five prototype sites until it has fully assessed reengineering alternatives. We are sending copies of this report to the Ranking Minority Member of the Subcommittee; the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations, the Senate Committee on Armed Services, the Senate Committee on Governmental Affairs, and the House Committee on Government Reform and Oversight; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director of the Office of Management and Budget; and other interested parties. Copies will be made available to others on request. If you have any questions about this report, please call me at (202) 512-6240 or Carl M. Urie, Assistant Director, at (202) 512-6231. 
Major contributors to this report are listed in appendix IV. Objectives, Scope, and Methodology We based our review on an assessment of DOD’s implementation of its own directives and instructions for new automated information systems or the selection and implementation of standard migratory systems under the CIM initiative, as such projects relate to the depot maintenance business area. These directives, referred to as Life Cycle Management, contain the same steps and milestones as GAO’s own methodology for reviewing large automated information systems/projects. Specifically, to determine whether the Department based its selection of DMSS on convincing analyses of costs and benefits, we reviewed policies, procedures, directives, and memoranda establishing criteria for the successful acquisition of automated information systems under the CIM initiative. We compared the Department’s actions and plans for selecting and implementing DMSS with these criteria. To further assess the adequacy of the selection, we examined the cost and benefit data available to senior Defense officials responsible for selecting DMSS. Because the level of detail was insufficient, we did not evaluate these cost and benefit data. Also, we interviewed Defense logistics officials to obtain the rationale behind the DMSS selection. To identify expected DMSS costs and benefits, we analyzed available functional economic analyses (FEA). We did not validate the costs and benefits presented in the FEA used to justify DMSS since (1) our objective was to examine DOD’s decision given the cost and benefit information available to it at the time and (2) the FEA was based on a different project—the DMRP. We interviewed JLSC officials to determine changes made to project scope, costs, or benefits occurring since early 1994 and any additional analyses currently being done. 
We also met with numerous program and functional officials, including JLSC managers responsible for implementing the eight DMSS system components, and depot officials at the Air Force’s repair depot in Ogden, Utah, and the Army’s depot in Tobyhanna, Pennsylvania. To determine whether the Department had fully assessed economic and technical risks threatening the successful implementation of DMSS and identified actions to avoid or mitigate these risks, we reviewed risk assessments available when DUSD(L) decided to develop and deploy DMSS. Additionally, we examined risk analyses conducted by the Joint Logistics Systems Center, other Defense organizations, and industry experts completed since the DMSS selection was made. We interviewed program and technical officials to obtain opinions on the potential impact of risks identified by these analyses on project success and to identify actions for avoiding or mitigating those risks most likely to result in project failure, delay, and overspending. To determine whether the Department selected a strategy that would dramatically improve depot maintenance processes, we reviewed DOD documents detailing challenges of meeting the defense mission in the post-cold war environment, CIM goals and objectives to meet these challenges, and the plans and strategies for implementing CIM across the Department. We compared these DOD strategies and plans to the Logistics Migration Approach established to implement the CIM initiative in the logistics business area. We then compared the level of improvement expected from a standard depot maintenance information system to the DOD stated requirement to meet the challenges of the future defense environment. To identify alternatives to information system approaches, we reviewed private industry studies and past GAO reports of lessons learned by private and public organizations that have successfully improved their business processes. 
We compared these lessons learned and case studies with the approach being implemented through the development and deployment of DMSS. Our work was performed between April 1994 and March 1995 primarily at the offices of the Deputy Under Secretary of Defense for Logistics in the Pentagon, Washington, D.C., and the Joint Logistics Systems Center, Wright-Patterson Air Force Base, Ohio. We also performed work at the offices of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence in Washington, D.C.; the Center for Integration & Interoperability, Defense Information Systems Agency, Blacklick, Ohio; the Air Force Air Logistics Center, Hill Air Force Base, Utah; and the Tobyhanna Army Depot, Tobyhanna, Pennsylvania. Comments From the Department of Defense The following are GAO’s comments on the Department of Defense’s letter dated May 30, 1995. GAO Comments 1. The March 1993 Phase I Functional Economic Analysis (FEA) and January 1994 Phase II FEA for the DMRP processes did not provide well-supported estimates of project costs, benefits, and risks upon which to approve the Depot Maintenance Standard System (DMSS). As stated in our report, there are three reasons why these FEAs did not sufficiently support the DMSS selection. First, these FEAs did not include cost and benefit estimates for the DMSS. The DMSS was not defined as a project until March 1994—more than a year after the Phase I FEA was completed and 2 months after the Phase II FEA was submitted for review. The Commander of the Joint Logistics Systems Center told the DOD Comptroller that the functional economic analysis briefed to DUSD(L) and submitted for the Comptroller’s review in early 1994 did not include DMSS. The Commander also stated that JLSC was developing a new economic analysis to (1) accommodate changes requested by the Comptroller and the Office of Program Analysis and Evaluation and (2) reflect the current DMSS. 
Second, the analyses contained estimates of questionable accuracy. The Office of the Assistant Secretary of Defense for Program Analysis and Evaluation, which is required to validate automated information system project estimates to help ensure that the economic analyses present compelling quantitative data for project selection, found that the level of detail was insufficient to validate either cost or benefit estimates. Even with this limitation, we determined that the Phase II FEA underestimated the cost to develop and deploy the three system components that later became part of DMSS by at least $100 million. Information system options and opportunities that support the functional management strategy and process improvement efforts are evaluated based on technical feasibility, cost, schedule, performance, risk, and conformance to architectural guidelines and standards. Information system development/modernization must comply with life cycle management policy...The SDP will be part of the approval decision package supporting the designation of the AIS as a migration system by the OSD Principal Staff Assistant. The SDP will also support an in-process review (IPR) or milestone review, as appropriate, by the designated milestone decision authority (MDA)...When AIS changes are part of the process improvement alternative(s) selected for more detailed analysis, the Functional Activity Program Manager’s evaluation decision is a filter that precedes other reviews required by DoDD 8120.1...The Functional Activity Program Manager is responsible for ensuring that the AIS-related aspects of the process improvement proposal are reviewed and approved in accordance with DoDD 8120.1, in addition to being reviewed and approved by the OSD Principal Staff Assistant as part of the complete process improvement proposal. 
As stated in our report, Defense Directive 8120.1, Life-Cycle Management (LCM) of Automated Information Systems (AISs); and Defense Instruction, 8120.2, Automated Information System (AIS) Life-Cycle Management (LCM) Process, Review, and Milestone Approval Procedures, establish MAISRC review and MDA approval procedures to ensure that decisions to develop or modernize major automated information systems are based on sound business principles. Under these procedures, a project should be reviewed and approved at each of five decision milestones before substantial funds are obligated. Despite this policy, DUSD(L) spent over $200 million to implement the DMSS initiative without receiving approval for even the initial milestone decision point. Also, DUSD(L)’s claim that the MAISRC did review the DMSS initiative on March 16, 1993, is not accurate. On this date the MAISRC completed an In-Process Review (IPR) of the overall Logistics CIM strategy. An IPR is defined as “An LCM review between LCM milestones to determine the current program status, progress since the last LCM review, program risks and risk-reduction measures, and potential program problems.” Further, as admitted by DUSD(L), the DMSS initiative was not approved until early 1994—more than a year after this review. For the purpose of determining whether an AIS is major, the separate AISs that constitute a multi-element program, or that make up an evolutionary or incremental development program, or make up a multi-component AIS program, shall be aggregated and considered a single AIS. Based on these directives, DUSD(L) is required to obtain MAISRC review and approval for the entire DMSS initiative at each of five milestone decision points before any additional funds are spent. 3. DUSD(L) officials contend that reengineering of depot maintenance processes is occurring concurrently with the deployment of the DMSS. 
Further, they assert that these reengineering efforts will provide dramatic economic benefits, and cite cost savings and productivity increases accrued from initial implementation of four DMSS system components as support. DUSD(L)’s approach focuses on the selection of the best currently operating information systems and implementation of these selected systems across all Defense components. While this approach may improve overall DOD business processes and may provide incremental benefits, it is not the fundamental rethinking and radical redesign of depot maintenance processes and will not provide the dramatic cost reductions and productivity gains available from process reengineering. At best, it will allow DOD to accomplish current depot maintenance processes faster and more efficiently. At worst, DUSD(L)’s focus on information system selection and implementation will make future reengineering efforts more difficult by entrenching current work processes. In January 1991, the Deputy Secretary of Defense endorsed a Corporate Information Management initiative implementation plan that directed that business processes be reengineered before information systems are standardized. However, DUSD(L) did not consider reengineering opportunities as alternatives to the DMSS initiative. As discussed in the report, the functional economic analysis used by DUSD(L) to approve the DMSS initiative compared only three alternatives, all of which focused on using automated information systems to improve current depot maintenance functions. Further, as stated in DMSS documentation, the initiative is designed to provide only incremental improvements to existing business processes. It is clear from Defense’s own benefit projections that DMSS will not result in the dramatic improvements possible from reengineering-based solutions. 
DUSD(L) projected that DMSS would reduce DOD depot maintenance operational costs over a 10-year period from $112.9 billion to $110.3 billion. A cost reduction of $2.6 billion, or only 2.3 percent, over this period does not constitute a dramatic increase in efficiency. In late 1994, the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence, responsible for CIM initiatives across DOD, found major flaws in the overall implementation. It concluded that, as opposed to the private sector, which uses a very different approach, “DOD has virtually no chance of making high impact/quantum changes using the current approach.” Further, the Commission on Roles and Missions of the Armed Forces, charged by the Congress to provide an independent review of the roles and missions of the armed services, has found that “[r]ather than reengineering its processes, DOD has spent its energies in closing excess capacity (base and facilities) and in standardizing its management information systems” and concluded that DOD will achieve a more compact, more standardized version of its traditional logistics approach. The Commission confirmed that DOD must radically reengineer its logistics processes to achieve meaningful improvements. 4. While information technology is critical to any reengineering effort because it provides a tool for breaking old rules and creating new ways of working, it should not be the driver of the reengineering effort. Such an approach may make future reengineering efforts more difficult by entrenching inefficient and ineffective work processes. Reengineering offers order-of-magnitude improvement compared to the incremental improvements DMSS is designed to provide. DUSD(L) cannot afford to deploy DMSS departmentwide beyond the first five prototype sites until it has first determined which old rules need to be broken and what new ways of accomplishing depot maintenance are most efficient and effective. 
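The 2.3-percent figure is simple arithmetic on the FEA projections cited in this report; a minimal check (10-year totals in billions of dollars, taken directly from the projections quoted above):

```python
# Projected 10-year depot maintenance operating costs (billions of dollars),
# as cited in the report's discussion of DUSD(L)'s benefit projections.
baseline = 112.9    # projected cost without DMSS
with_dmss = 110.3   # projected cost with DMSS deployed

reduction = baseline - with_dmss               # net reduction in billions
pct = reduction / baseline * 100               # reduction as share of baseline

print(f"${reduction:.1f} billion, {pct:.1f} percent of baseline")
```

This confirms the report's characterization: a $2.6 billion reduction is about 2.3 percent of the $112.9 billion baseline.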
The Commission on Roles and Missions of the Armed Forces has identified a number of alternatives for changing the way DOD conducts its depot maintenance. These alternatives could serve as a starting point for a thorough study by DUSD(L) of its reengineering opportunities. Description of DMSS Component Systems This appendix provides brief descriptions of the eight information systems selected as Depot Maintenance Standard System components to support the DOD-wide depot maintenance function.

Baseline Advanced Industrial Management System: Supports allocation decisions on resource application, schedules, and job management of maintenance projects. It allows timely review of cost and schedule performance at any level of the work breakdown structure. One of this system’s major modules, Programmed Depot Maintenance Scheduling System, provides project schedules of individual maintenance operations and critical path of work requirements for maintenance of major end items.

Depot Maintenance Management Information System (DMMIS): Provides depot maintenance managers with an automated capability to forecast workloads; schedule repair activities; track and control inventories; program staffing, materials, and other resources; and track and manage production costs.

Enterprise Information System: Provides the ability to interface to existing data sources, extract relevant data, and package the information to support decisionmakers with timely summary information.

Facilities and Equipment Maintenance: Provides an integrated tracking and control system for equipment and facility maintenance, preventive maintenance, and calibration of precision measurement equipment.

Depot Maintenance Hazardous Materiel Maintenance System: Records the receipt and issue of all hazardous material within a maintenance depot. Provides inventory visibility of all hazardous material to control the issue of hazardous material to authorized users. 
Interservice Material Accounting and Control System: Provides the tracking of Depot Maintenance Interservice Support agreements and visibility and control for interservice workloads.

Laboratory Information Management System: Provides the monitoring and control of laboratory data such as sample order status, order tracking, backlog, scheduling, location tracking, workload prediction, pricing, and invoicing. Automates tracking and archiving for depot material samples and test results.

Tool Inventory Management Application: Provides total inventory tracking and accountability of both hard and perishable (consumable) tools and tooling assets. Tracks issues and receipts of assets to both individuals and in tool kits.

Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. Cincinnati Regional Office

Related GAO Reports
Government Reform: Using Reengineering and Technology to Improve Government Performance (GAO/T-OGC-95-2, Feb. 2, 1995).
Defense Management: Impediments Jeopardize Logistics Corporate Information Management (GAO/NSIAD-95-28, Oct. 21, 1994).
Commercial Practices: DOD Could Reduce Electronic Inventories by Using Private Sector Techniques (GAO/NSIAD-94-110, Jun. 29, 1994).
Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994).
Commercial Practices: Leading Edge Practices Can Help DOD Better Manage Clothing and Textile Stocks (GAO/NSIAD-94-64, Apr. 13, 1994).
Defense Management: Stronger Support Needed for Corporate Information Management Initiative To Succeed (GAO/AIMD/NSIAD-94-101, Apr. 12, 1994).
Defense IRM: Business Strategy Needed for Electronic Data Interchange Program (GAO/AIMD-94-17, Dec. 9, 1993).
Defense Transportation: Commercial Practices Offer Improvement Opportunities (GAO/NSIAD-94-26, Nov. 26, 1993).
Defense Inventory: Applying Commercial Purchasing Practices Should Help Reduce Supply Costs (GAO/NSIAD-93-112, Aug. 6, 1993). 
Commercial Practices: DOD Could Save Millions by Reducing Maintenance and Repair Inventories (GAO/NSIAD-93-155, Jun. 7, 1993).
DOD Food Inventory: Using Private Sector Practices Can Reduce Costs and Eliminate Problems (GAO/NSIAD-93-110, Jun. 4, 1993).
Defense ADP: Corporate Information Management Must Overcome Major Problems (GAO/IMTEC-92-77, Sep. 14, 1992).
DOD Medical Inventory: Reductions Can Be Made Through the Use of Commercial Practices (GAO/NSIAD-92-58, Dec. 5, 1991).
Commercial Practices: Opportunities Exist to Reduce Aircraft Engine Support Costs (GAO/NSIAD-91-240, Jun. 28, 1991).
Defense Logistics: Observations on Private Sector Efforts to Improve Operations (GAO/NSIAD-91-240, Jun. 13, 1991).
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) justification for developing and deploying its Depot Maintenance Standard System (DMSS), focusing on whether DOD has: (1) based its DMSS selection on costs and benefit analyses as well as economic and technical risks; and (2) selected a strategy that would dramatically improve depot maintenance operations. GAO found that: (1) DOD has not based its DMSS decisions on sufficient cost and benefit analyses or detailed assessments of economic and technical risks; (2) DOD may not achieve the marginal improvements envisioned, since it has failed to obtain project milestone reviews or approvals for DMSS that would ensure that system development and implementation decisions are consistent with sound business practices; (3) DMSS will not dramatically improve DOD depot maintenance or produce significant cost savings, since DOD has not reengineered its business practices; and (4) DOD may have made future reengineering efforts more difficult by entrenching inefficient and ineffective work processes.
Background The appointment of a child welfare receivership began with the filing of a class action in 1989 on behalf of abused and neglected children in the District of Columbia. The U.S. District Court trial and subsequent opinions documented many shortcomings within the child welfare system and led to a finding of liability on the part of the District. For example, the court found that as a result of inept management and the indifference of the mayor’s administration, the District had failed to comply with reasonable professional standards in almost every area of its child welfare system. Specifically, the court found that the District had failed to investigate reports of neglect or abuse in a timely manner, make appropriate placements for children who entered the child welfare system, monitor their care, or adequately ensure that they had permanent homes. Court documents traced these failures to staffing and resource problems—namely, staff shortages, inconsistent application of policies and procedures, and an inadequate automated system to track the placement and status of children in the District’s care. A remedial action plan was developed jointly by the plaintiffs and the defendants in the class action, and that plan led to the development of the MFO in January 1994. The MFO includes more than a hundred policy, procedural, and data requirements with which the agency must comply. These requirements include steps for improving protective services; services to children and families; and the placement, supervision, and review of children in foster care (app. II provides a summary of selected requirements). In 1995, the court, lacking sufficient evidence of program improvement, removed the agency from DHS and placed it in full receivership. Since then, the court has twice appointed a receiver to manage the child welfare agency’s efforts to institute the changes outlined in the MFO. The first receiver served from August 1995 through June 1997. 
Not finding improvements in the child welfare program, the court appointed a second receiver in 1997, who served through November 30, 2000. The court appointed the deputy receiver for programs to serve as interim receiver effective November 30, 2000. The receiver appointed in 1997 primarily focused on changes to the organization’s infrastructure, such as enhancing personnel management and implementing a new management information system, as we reported in our previous testimony. Additionally, in February 2000, CFSA consolidated functions that had been dispersed at seven locations throughout the city and co-located almost all staff into the same building (see the CFSA organization chart in app. III). CFSA has also taken steps to create new organizational roles or units to fulfill specific responsibilities. For example, in 1998, the receiver hired specialists in child care, housing, education, and substance abuse, who act as “internal consultants” by sharing their expertise with social workers and interacting with other District agencies. The substance abuse specialist, for example, identifies and locates services in the community that meet the needs of the population of children in CFSA’s care. CFSA provides a range of child welfare services from the time children enter the system until they reach permanent stable care arrangements. Specifically, the Intake Administration oversees the process by which children enter the system. After intake, children are served by a number of different programs, depending on the setting in which they are placed once they are removed from their home, such as traditional foster care, kinship care, and adoptions. 
Other program areas provide special services, such as the Teen Services Division, which focuses on adolescents in care by, for example, helping to prepare them to live independently as adults, and the Family Services Division, which addresses the needs of families when a determination has been made that a child can safely remain at home. Health Services, through a program called D.C. KIDS that was established in October 1999, provides for initial physical and mental health screenings and for continuing medical care. In recent years, the number of children receiving such services has increased, while the number of social workers has declined. The agency serves children in a variety of settings. In December 1997, 2 months after the appointment of the second receiver, there were approximately 2,900 children in foster care; at that time, there were 289 social workers on board providing a broad array of services in agency programs, such as kinship care, foster care, and adoptions. As of August 31, 2000, there were about 3,271 children in foster care, and the agency employed 241 social workers. To provide services to children, CFSA had a budget in fiscal year 2000 of $147 million, almost one-third of which was federal child welfare funding. The MFO requires CFSA to maximize its use of several federal funding sources, including title IV-E and Medicaid, and it has taken steps to increase the receipt of such funding. CFSA has requested an increase of $37 million for fiscal year 2001, for a total budget of $184 million, which according to some agency officials is the first budget that will fully support their efforts to comply with the MFO. CFSA operates in a complex child welfare system. Although it provided many services directly, in fiscal year 2000, about 57 percent of all agency expenditures were spent on contracted services. For example, contracts provide for placements of children in group homes as well as some foster homes and other facilities. 
The agency spends about $6.2 million annually on eight Healthy Families/Thriving Communities collaboratives, nonprofits that provide neighborhood-based support services to stabilize families so that fewer children enter the child welfare system. CFSA also works with a consortium of 22 private agencies to place children in foster and adoptive homes. In addition, CFSA relies on services provided by other District government agencies. For example, both the Fire Department and the Health Department inspect facilities where children are placed; D.C. Public Schools prepares individual education plans for children in care; and the D.C. Interstate Compact office in the Department of Human Services has responsibility for working with CFSA and other states to process the interstate placement of children. To process cases through the court system, CFSA interacts with 59 D.C. Superior Court judges, each of whom has responsibility for a share of the child abuse and neglect caseload. Under District of Columbia law, while CFSA has primary responsibility for investigating neglect cases, the Metropolitan Police Department (MPD) has primary responsibility for investigating abuse cases. This arrangement, known as “bifurcation,” is unique among child welfare systems nationwide. Following MPD investigation, the office of Court Social Services (CSS) of the District’s Superior Court provides oversight and management of these abuse cases, which totaled about 600 in July 2000. In abuse cases in which a child cannot be returned home and no relative can be found, CSS transfers the case to CFSA. In addition to complying with the provisions of the MFO and District law, CFSA must comply with applicable federal laws, including the Adoption and Safe Families Act of 1997 (ASFA), which placed new responsibilities on all child welfare agencies nationwide. The act introduces new time periods for moving children toward permanent stable care arrangements and penalties for noncompliance. 
For example, it requires states to hold a permanency planning hearing no later than 12 months after the child is considered to have entered foster care. In an effort to provide for greater accountability among court-appointed receivers in the District, including the child welfare receiver, the Congress passed and the President signed in October 2000 the District of Columbia Receivership Accountability Act of 2000 (Pub. L. No. 106-397). The act provides for increased oversight and accountability of receivership performance. The act specifies several approaches for enhancing oversight, including periodic fiscal, management, and program audits, that are intended to strengthen the structure of accountability for District government programs.

Receiver’s Changes Had Limited Effect on Children’s Well-being

Since 1997, the receiver has introduced management and programmatic changes intended to meet the requirements of the MFO and to improve child welfare outcomes in the District. These changes include initiatives to recruit and train qualified social workers, provide additional funding for community-based services, establish and enhance organizational components, and develop a new automated system. The implementation of these efforts has fallen short of expected results, and these efforts have had a limited effect on CFSA’s ability to provide needed child welfare services to enhance children’s well-being and guide their progress toward permanent stable care arrangements. For example, although many new staff have been hired, some had not yet been assigned caseloads because of delays in obtaining professional social work licenses. As a result of such delays and inadequate efforts to retain staff and maintain adequate staffing levels, caseloads remain above standards defined in the MFO.
This impairs social workers’ ability to perform critical casework functions, such as visiting children to ensure their safety and adequacy of care, preparing court reports, and investigating cases within designated time periods. Likewise, CFSA issued a policy handbook in 1995 to which it has made numerous revisions. While a recent effort to include these policies in the agency’s automated system may improve staff access to them, many staff told us that they have lacked consistent direction in how to implement policies during the course of their work. Moreover, the policy handbook and subsequent revisions do not yet include policies covering all requirements of the MFO. In addition, CFSA’s new automated system, FACES, lacks complete case information, and social workers have not fully used it in conducting their daily case work.

Changes in Key Management Requirements Fall Short of Expected Results

In response to management-related requirements contained in the MFO, the receiver undertook changes in areas such as recruitment and retention, training, social worker caseload reduction, the development of policies and procedures, and the implementation of a new automated information system. However, these changes have generally fallen short of expected results. (See app. II for an assessment of CFSA’s compliance with selected provisions of the MFO.)

Recruitment, Retention, and Training Activities

Problems with the recruitment and retention of qualified social workers preceded the receivership. Recognizing these challenges, the MFO required CFSA to improve recruitment efforts and hire a sufficient number of social workers who had obtained Master of Social Work (MSW) degrees.
Following recent recruitment efforts, CFSA hired 56 social workers between March and June 2000, which represented 80 percent of its goal of 70 new hires for that period. This hiring activity resulted from recruitment efforts that included obtaining independent hiring authority in October 1999, developing a recruitment plan in March 2000, conducting several job fairs, raising salaries, and offering additional recruitment incentives. The retention of qualified social workers, however, has been a constant challenge for CFSA. To help the agency maintain a stable workforce, the MFO required it to develop a retention plan. As of August 2000, CFSA had created a retention committee that meets periodically and reports to the receiver, but it has not developed an agencywide retention plan. The agency continues to experience a fluctuating yet significant loss of social workers. CFSA staff estimated that in 1997, the first year of the receiver’s term, the agency lost about 15 social workers per month. While CFSA officials stated that this rate had declined by June 2000 to about four or five social workers per month, attrition continues to be a significant issue. Overall, according to a CFSA official’s estimate, the agency lost about one-third of its social workers between January 1999 and July 2000. While attrition is high in many other child welfare agencies across the country, turnover among social workers in the District is explained in large part by unmanageable workloads and the availability of better-paying jobs with other District agencies and the private sector, according to CFSA officials and staff.
Furthermore, according to CFSA’s analysis of interviews with staff who left the agency, some social workers cited the quality of supervision as a reason for their decision to resign. CFSA officials noted that the creation of a social worker associate position at the Bachelor of Social Work level could help agency retention efforts by providing more flexibility in assigning such workers to deliver some MFO-required services. High turnover adversely affects CFSA’s capacity to effectively manage the provision of services to children and families. According to CFSA staff, the agency is losing many of its more experienced social workers, whereas the new hires selected to replace these workers and to help CFSA attain more desirable staffing levels face a significant learning curve. New hires, senior social workers, and supervisors we spoke with also cited casework problems associated with high turnover, such as delays that result when social workers resign and cases previously assigned to them need certain actions, including transfer to another program area. New hires also stated that once on the job, they were assigned cases previously handled by others that lacked sufficient case data, forcing them to spend extra time to recreate the data and learn the case history. Additionally, high turnover results in the assignment of a succession of social workers to the case of a child in CFSA’s care, furthering instability in the lives of these children. The MFO also required CFSA to establish a full-time training unit and to provide minimum requirements for training new hires and ongoing training for more senior workers. While CFSA has met these requirements, casework priorities often lead to low attendance at training sessions. In response to these requirements, the agency initially conducted a needs assessment to plan the development of the training unit.
Until the training unit was established, an administrator and a trainer provided or obtained training for agency employees. In January 1999, CFSA established the training unit through a contract with Virginia Commonwealth University to provide training to agency social workers beginning in February 1999. The university has developed a training course for new hires, which provides the 80 hours the MFO requires, and a curriculum of about 30 courses, from which more senior staff can choose classes to meet the continuing education requirements of the MFO. The training program director reported that between May and October 2000, 54 of 72 new hires completed their initial training. Additional training has been provided in areas such as preparing for court appearances and meeting ASFA requirements. Staff we interviewed expressed a variety of views on the quality of training. For example, some new hires who had recently earned their MSWs stated that they found portions of the new-hire training to be elementary or insufficiently tailored to their case management duties at CFSA. In addition, staff at all levels stated that they wanted additional training in how to assess the risks a child faces at home to determine whether removal is necessary, as well as additional training in agency policies and procedures. CFSA officials stated that risk assessment training has been offered to intake workers several times but cancelled for poor attendance. CFSA officials and social workers stated that casework priorities often result in low attendance at classes, which leads either to course cancellations or to the rescheduling of trainers. CFSA incurs additional training costs that range from $500 to $800 per day for rescheduled classes. In addition, the MFO required CFSA to assess whether staff satisfactorily master the course content.
CFSA lacks such methods, although the agency hired a curriculum specialist in September 2000 to develop methods for evaluating the extent to which social workers apply training content to the work they perform.

Caseload Reduction

The MFO established caseload limits to help social workers respond to the service needs of children and families. Although CFSA has achieved these caseload levels in some program areas, the caseloads CFSA reported for other areas remain significantly above the maximum caseloads allowed by the MFO, limiting social workers’ ability to meet the needs of children and families. For each program area, table 1 identifies staffing levels, caseloads per staff required by the MFO, and average caseloads carried by CFSA social workers in teams in each program area as of July 2000. As the table shows, social workers were carrying actual average caseloads that exceeded the MFO limits in 6 of 10 CFSA programs. For example, workers in the traditional foster care program were carrying average caseloads that ranged from 13 to 55, compared with the MFO limit of 16. Moreover, eight of the agency’s nine traditional foster care teams had average worker caseloads that exceeded this limit. Furthermore, average caseloads may understate the caseloads actually carried by some social workers. Social workers we spoke to consistently described their caseloads as overwhelming and unmanageable. To illustrate the effect of high caseloads, a senior social worker, also in traditional foster care, told us that his caseload included responsibility for 44 children. He described the following duties that he must fulfill to meet the needs of these children and their families: He must prepare 44 case plans, assess the needs of 44 children and make appropriate referrals, attend the court hearings for these cases, participate in internal reviews of these cases, and ensure appropriate placements in 44 different schools, among other activities.
In addition, the social worker is responsible for obtaining a variety of goods and services, including clothing, transportation, health and mental health services, and mentoring services. Caseloads that exceed prescribed limits have several effects. For example, supervisors reported that they sometimes must carry cases. This practice not only violates an MFO requirement that supervisors not carry cases; it also limits their ability to provide effective supervision. Yet, CFSA data as of June 2000 showed 25 supervisory staff carrying the cases of 129 children. High caseloads also have a very direct effect on the availability and level of CFSA oversight of the children in its care. Specifically, social workers reported that when caseloads are high, certain other activities assume a lower priority. Among these are providing referrals so that children can obtain needed services, conducting required visits to assess children’s progress in their placements, and entering data in the management information system. Finally, social workers we spoke to acknowledged that high caseloads also lengthen the time required to process cases, and they contribute to difficulties in moving children to permanency without delay. According to a report prepared for the Deputy Mayor for Children, Youth, and Families, children still spend an average of 3.7 years in the District’s child welfare system. In this environment, these time periods jeopardize the District’s ability to comply with ASFA’s requirement that children reach permanency within 12 months, according to District Superior Court officials.
Policies and Procedures Development

The MFO required CFSA to develop policies and procedures covering 28 key child welfare program areas, including conducting timely investigations, providing needed services, developing performance-based contracting, appropriately placing children and achieving permanency for them, and managing social worker caseloads. The agency issued policies in a 1995 handbook. Since then, these policies have been revised by changes communicated through “circular letters” that provided draft updates to specific policies and procedures and direction that varied from supervisor to supervisor. As a result, staff expressed confusion over how to achieve consistent implementation of agency policies. Moreover, agency policies do not cover all court-mandated requirements contained in the MFO. Additionally, until October 2000, CFSA had assigned only one worker to coordinate the development of draft policies. Even though policies have been in place, CFSA staff told us they have not been completely clear or useful in carrying out their work. Uncertainty over CFSA’s policies constrained supervisors’ ability to communicate priorities and direct the work of social workers under their supervision, and in some cases, social workers expressed a reluctance to seek guidance from their supervisor because they felt their supervisor lacked sufficient knowledge. Finally, staff we interviewed said that, as a result, CFSA’s ability to work effectively with other key child welfare partners was constrained. For example, according to a CFSA official, the lack of clear and consistent policies created uncertainty regarding how social workers should respond to directives from the District’s Superior Court regarding the preparation of court-mandated reports, appearances at court hearings, and other legal matters.
Even though policies are now available through FACES, CFSA staff indicated that social workers will still need to seek supervisory guidance to clarify and implement them consistently. CFSA officials demonstrated the approved policies and procedures manual feature of the automated information system to GAO in October 2000. While the policies and procedures appeared to be at least as comprehensive as earlier policies, it is too early to say how staff will rely on this new feature to obtain consistent direction to their work. However, CFSA officials stated that social workers will receive training on using the automated policies and can contact CFSA’s Office of Planning, Policy, and Program Support to obtain clarification on specific policy implementation.

New Automated Information System

The MFO required CFSA to develop an automated information system to permit the agency to comply with the provisions of the MFO and with District law. On October 1, 1999, CFSA implemented the FACES system, adapted from systems previously implemented in Oklahoma and West Virginia, at a cost of about $20 million. While additional modifications or enhancements could be made, CFSA considers the system fully implemented and available for staff use. According to the system administrator, several factors contributed to system design: the requirements of federal law, compliance with the provisions of the MFO, and input from a team of 70 “end users” consisting of staff from various program areas throughout the agency. While CFSA officials believe FACES will comply with federal requirements, it cannot produce all the reports the MFO required. For example, CFSA reported that FACES could not produce reports on the timeliness of administrative reviews and could not generate certain placement data as specified by the MFO. CFSA staff also do not fully use the system.
Staff across the agency noted that they continue to use spreadsheets or databases outside of FACES. The system administrator expressed concerns about the completeness of the data in FACES—and, therefore, its validity—noting that incomplete data entry undermines the purpose for which the system was designed. She described FACES as a tool that supports case practice and allows social workers and supervisors to track cases, assess risks to children, control vendor payments, and assess contractor performance. The system can also document actions social workers perform during a case’s entire history. To the extent that timely and complete data entry is not achieved, however, the agency’s ability to track its entire caseload is compromised. In part, this low usage stems from the lack of case data entry into the system by social workers. For example, CFSA officials estimated that as of September 2000, across all programs, about half of all case plans had been entered into FACES; however, Superior Court judges and a court official we spoke to believe that this estimate may overstate the actual rate of data entry. CFSA officials also noted that the percentage of data entered in the system varies by program area, as shown in table 2. The system administrator identified several possible reasons why social workers might not be entering complete data into FACES: a lack of comfort with learning new technology and a “cultural” preference for paper documents among child welfare practitioners, a lack of knowledge among staff about the system’s capabilities, supervisors’ decisions to allow social workers to continue using paper, and insufficient time to use the system because of other case priorities. Social workers also said that when caseloads become difficult to manage, other activities like data entry assume a lower priority.
Finally, FACES is not yet well linked with systems in other agencies. Existing linkages with other agencies are limited and do not include key participants in the child welfare system, such as MPD, CSS, Office of Corporation Counsel (OCC), D.C. Superior Court, and D.C. KIDS. Officials in some of these agencies expressed a desire for access to FACES to track children in the child welfare system and report more complete case information in support of District efforts to obtain additional federal funds. In July 2000, the CFSA System Administrator noted that the agency’s 2001 budget provides for limited linkages with OCC, CSS, and MPD and a full FACES interface with D.C. KIDS. However, implementation priority to date has focused on rolling out the system within CFSA.

Changes in Key Program-Related Areas Also Fail to Meet Established Goals

In addition to requirements that address human resources and caseloads, the MFO imposed program requirements on CFSA in a number of areas, ranging from intake and assessment to efforts to provide children with permanent placements. Despite progress in some areas, CFSA still faces challenges in meeting the terms of the court order. In particular, the agency has not met certain MFO time periods for initiating and completing investigations. While the agency has begun to address its need for additional homes and facilities, it continues to place children in settings prohibited by the MFO, such as homes without current licenses and homes with more children in their care than their licenses permit. Additionally, CFSA has not consistently met MFO requirements regarding the provision of ongoing support services to children once they are placed, and its oversight of contractors’ service delivery is limited. Moreover, while the agency has added staff to process the cases of children placed outside the District without proper documentation, a large backlog of these cases remains.
Finally, despite MFO requirements to expedite the process by which children move into permanent, stable care arrangements, children still spend an average of 3.7 years in the system.

Intake and Assessment

The court order mandated certain time periods to expedite the process by which children enter the child welfare system. For example, it required that investigations be initiated within 48 hours of the receipt of the abuse or neglect report and completed within 30 days of the report. District law exceeds the MFO requirement and requires that the initial investigation be initiated within 24 hours of the report. As shown in table 3, CFSA has had great difficulty meeting these requirements. For example, roughly one-third of all cases referred for investigation since October 1999 were not initiated within 24 hours of the report, and CFSA failed to complete investigations within 30 days on about half of them. CFSA has made some progress in reducing the backlog of cases for which investigations had not been completed within 30 days. An intake official recently reported that the backlog of incomplete neglect investigations had been significantly reduced and that only 30 incomplete investigations remained as of August 2000. Beginning in June 2000, CFSA assigned a unit of recently hired intake workers to help MPD reduce its own backlog of cases that had not met the 30-day time period, from 177 cases to 64 cases. However, intake officials acknowledged continuing difficulties in meeting both the 24-hour and the 30-day time periods. Intake officials cited staff turnover as one explanation. CFSA lost about 26 percent of its intake workers in 1999. Intake officials believe they will be able to comply with both time periods if the agency is fully staffed, and they cited the success that new intake workers had in reducing the backlogs in July 2000 as one example of their ability to comply, given additional staff.
The MFO also required joint investigations of abuse cases by CFSA social workers and police officers and mandated that CFSA develop policy to guide such joint activities. While CFSA reported that 562 joint investigations were conducted in fiscal year 2000, joint investigations are not yet routine. For example, CFSA and MPD officials agree that this number refers to investigations in which CFSA and MPD staff collaborated in some way on a case. The number of cases in which CFSA and MPD jointly visited families to conduct investigations is much lower, and the officials could not provide a concrete number. While CFSA and MPD officials developed a protocol for working together in September 2000, the lack of available staff in both agencies is likely to continue to limit their ability to conduct joint investigations.

Opportunities for Placing Children in Foster, Adoptive, and Group Homes

The MFO addressed the placement of children by requiring that CFSA prepare a needs assessment and development plan to identify more placement opportunities in additional foster, adoptive, and group homes and other facilities. The MFO also prohibited placing children in settings considered harmful to them, such as placing children younger than 6 in group homes. While CFSA has not developed a resource development plan per se, the agency’s strategic plan for fiscal year 2000 identified goals for developing more foster and adoptive homes, for example, and included time periods and specific steps to be taken. This plan had not yet been updated by October 2000. Social workers we spoke to emphasized that the development of additional capacity in foster and adoptive homes is crucial if children are to be appropriately placed in a timely manner.
CFSA staff also cited a shortage of group homes, noting that 76 placement slots have been lost because of the recent closing of several group homes. Finally, social workers noted that the supply shortage is especially acute for emergency care facilities, infant care facilities, and homes for large sibling groups. CFSA’s difficulties in securing appropriate placement facilities are illustrated by the fact that CFSA has placed children in facilities that lack current licenses, facilities where the number of children exceeds the number permitted by the license, and inappropriate facilities—all practices prohibited by the MFO. For example, as shown in table 4, in July 2000, CFSA reported that 62 children younger than 6 were residing in congregate care or group homes for as long as 3 months to almost 2 years. A national child welfare expert described such placements as very harmful to young children. The lack of placement options has also led to extended stays by children in CFSA’s on-site “respite center,” which was not designed for overnight care. CFSA staff confirmed that the respite center has been used to place children on an emergency basis for several days at a time. Recognizing the need to develop new placement capacity, CFSA has taken some recent steps to do so, but the effects of these activities are not yet known. Moreover, several officials we spoke to agreed on the need for a comprehensive analysis of needs, matched with an analysis of existing system capacity to meet the agency’s long-term needs for placement opportunities. To address its placement needs, CFSA has worked with the Annie E. Casey Foundation to study ways to recruit additional foster homes, and it implemented a project with this aim in June 2000. The foundation’s Family-to-Family initiative, for example, uses community-based and culturally sensitive strategies to recruit, train, and retain foster families.
Additionally, CFSA’s adoption program manager identified ways to improve adoptive home recruitment by, for example, conducting effective follow-up with persons interested in adopting. In September 2000, the receiver announced emergency plans to pursue contract modifications that would allow providers who have an immediate capacity to accept additional children to do so.

Support Services for Foster and Adoptive Families

CFSA has had difficulties in providing pre-placement and post-placement support services. For example, the MFO required social workers to visit children in foster homes not less frequently than once a week for the first 2 months after placement. While CFSA reported that as of June 2000 social workers had visited most foster children at least once, agency data show that in most cases the reported visits were less frequent than once a week. As of June 2000, CFSA reported that 53 children had not been visited at all since being placed. Moreover, foster and adoptive parents may not be fully prepared for the complexity of children’s needs. CFSA’s Office of Quality Assurance studied children who had experienced multiple placements and concluded that many foster parents lacked an understanding and knowledge of how to cope with the special needs of some children. These needs reflect underlying conditions such as depression, attention deficit hyperactivity disorder, post-traumatic stress disorder, and attachment disorder. In some cases, CFSA has provided insufficient support to stabilize placements and prevent disruptions. For example, the report found that, in some cases, social workers failed to implement recommendations included in psychological and psychiatric evaluations and that some children who had been physically or sexually abused were not provided therapy or other services aimed at addressing the effect of abuse when they entered foster care.
Similarly, the MFO recognized that services are necessary to preserve adoptive families and required that families at risk of disruption receive appropriate services. A CFSA adoption official acknowledged that many children also have special needs that present long-term issues that may not become apparent until some time after the adoption has been finalized. This situation can appear to be overwhelming to adoptive parents, who may need ongoing services to ensure family stabilization and prevent disruption. In response to the needs of adoptive families, CFSA initiated a new postadoption program, supported by an initial grant from Freddie Mac in June 1999. The new program will coordinate a range of referrals for adoptive parents, such as medical and mental health advocacy groups, developmental specialists and therapists, and experts who are knowledgeable about the needs of adopted children.

Oversight of Contracted Services

Although the MFO requires CFSA to use performance-based contracting, CFSA has made little progress in holding its contractors more accountable for the services they provide. For example, although CFSA has succeeded in introducing some performance measures to guide oversight of the eight Healthy Families/Thriving Communities collaboratives, these performance-based contracts represent a small proportion of all contracts. More generally, CFSA’s capacity for effective oversight of contracts is limited in several ways. The agency employed six contract monitors to oversee contract expenditures of about $80 million for fiscal year 2000. A CFSA contracts official stated that some of these contract monitors lack training and experience corresponding to this level of responsibility. CFSA’s oversight of certain facilities is augmented by group home and residential treatment center monitors, who are responsible for ensuring that facility staff and conditions are consistent with the terms of the contract.
Specifically, 4 group home monitors are responsible for overseeing about 17 homes, and 3 residential treatment center monitors have oversight of about 30 facilities. Generally, contracts with the group homes require visits by the group home monitors at least once a month, and visits with each residential treatment center are to be made at least quarterly. Given the monitors’ oversight responsibilities and staff resources, they told us they need additional monitoring staff to more effectively oversee facility performance. Moreover, the monitors stated that there is currently no oversight of about 200 purchase-of-service agreements, which are small contracts that usually involve specific services for one child each. Agency officials stated that CFSA plans to develop a process to monitor these contracts.

Specialized Organizational Units

Although not required to do so by the MFO, CFSA has added staff to existing organizational units that address relationships with the court and the processing of interstate placements for children. While both units have helped the agency address specific problems, the units face ongoing challenges related in part to high social worker caseloads and the agency’s difficulties in securing placements for children. Since 1998, CFSA has added nine positions to its Court Liaison Unit, which formerly consisted of one person. The unit is to track all court reports and court orders, submit court reports and case plans in timely fashion, and maintain relationships with the judges. Despite these additional resources, as of July 2000, Superior Court judges said that social workers consistently fail to submit court reports and case plans in a timely way, which adversely affects working relationships between CFSA and the court. Social workers we spoke to acknowledged that when caseloads become difficult to manage, they cannot always document case information, compounding the court’s dissatisfaction with their performance.
Regarding interstate placements, the agency hired four social workers on a temporary basis in May 2000 to reduce a backlog of several hundred placements that lacked proper documentation. Numerous clearances (for example, police clearances and medical reports) are required when children are placed in foster homes. Because these clearances require lead time to process, children were placed outside the District before all the paperwork could be completed. According to CFSA officials and social workers, CFSA continued making such placements without completing all the necessary documentation, effectively violating the Interstate Compact on the Placement of Children (ICPC) requirement to provide sufficient information to the state where the placement is made. Agency staff cited several factors that contributed to the growth of the backlog. For example, some children were required to be placed out of state by court order, some were placed with relatives, and, for other children, no alternative placements were available in the District. Some social workers said that all or most of their cases require interstate placement and, therefore, completion of the ICPC process. CFSA reported 999 children in its ICPC backlog as of September 2000. The interstate compact coordinator reported that the majority of these backlog cases needing additional documentation were in Maryland. In September 2000, CFSA, the Deputy Mayor for Children, Youth, and Families, and the state of Maryland signed a memorandum of understanding regarding the completion of interstate compact documentation for children already placed in Maryland and the expedited processing of current and future interstate compact approvals. The memorandum provides that 10 percent of Maryland’s emergency placement slots are to be designated for District placements of up to 30 days.
According to the terms of the consent order, CFSA will assume total responsibility for the ICPC function and will no longer need to forward paperwork to DHS for processing, creating an opportunity to reduce processing delays. Time Periods to Achieve Permanent, Stable Care As embodied in ASFA, an important goal in child welfare is to reduce the amount of time children spend in the system and move them into permanent placements as soon as possible. Permanent placements may take one of several forms, such as family reunification, adoption, independent living, and placement with a relative or guardian. Although the number of adoptions has increased, the agency has made little progress in moving children into other permanent placements. CFSA relies on several processes to expedite permanency, but each has its shortcomings, and children still spend about 3.7 years on average in the system. Moreover, under ASFA, which requires a permanency hearing no later than 12 months after a child enters foster care and allows the federal government to withhold funding in the event of noncompliance, the District faces additional pressures to reduce delays in moving children into permanency. The MFO included various provisions to expedite the processing of adoption cases. While the agency has been instrumental in increasing the number of adoptions, more can be done to expedite the cases of children waiting to be adopted. In fiscal year 1999, CFSA achieved 250 adoptions that were made final by the District Superior Court—a record number and an increase of almost 200 percent from 1995. In fiscal year 2000, 329 adoptions were made final.
The adoption program manager attributes the increase to efforts that have been made to identify various ways to expedite the processing of adoption cases, such as moving the cases of abandoned babies directly from intake to adoptions, using the waiver of parental rights (which can be more timely than the termination process), and setting deadlines for paperwork submitted by pre-adoptive parents. However, CFSA’s adoption program manager estimated that at least 600 children in CFSA’s care with a goal of adoption are being handled by other programs, such as traditional foster care and kinship care, and concluded that more needs to be done to transfer adoption cases to the adoption program in a timely way. Several agency processes aim to expedite moving children into permanent care: administrative reviews, special staffings, and using new performance standards in staff appraisals. Regarding administrative review, federal law and the MFO require an administrative review every 6 months of the progress toward permanency and the achievement of case plan goals for all children in foster care. The objective of these reviews is to ensure that children’s physical, social, and emotional needs are being met and that progress toward permanency is timely. However, as shown in table 5, a report prepared by the court-appointed monitor shows that as of July 1999, while CFSA had made some progress in reducing the number of cases with no review between December 1998 and July 1999, the agency had made no progress in reducing the number of cases with untimely reviews. Moreover, of the cases with untimely reviews in July 1999, about half had not been reviewed in more than a year. As of October 2000, the agency could not provide more recent data on cases without reviews and cases with untimely reviews. In late 1998, CFSA began a series of special “permanency staffings” meetings to review children’s progress toward obtaining permanent, stable care arrangements.
The effort focused on the cases of children who had been in foster care for 15 of the past 22 months. CFSA plans to continue to hold these meetings in order to reduce the backlog of cases in this category. For each case reviewed, the cognizant worker and supervisor review the case plan and the permanency goal and make suggestions for determining whether the permanency goal is still appropriate and consistent with the case plan. In some instances, it may be determined that the child has reached permanency and that the case is ready to be closed. However, the meetings do not routinely include legal advice that may be required to determine whether a case is ready to be closed. In February 2000, a District Superior Court official reviewed 68 cases that were subject to these special permanency staffings and found that, for most cases, documents contained insufficient information to make a determination of case closure and that legal input to determine whether certain legal standards (for example, “reasonable efforts”) had been met was lacking. Finally, according to CFSA officials, children’s movement toward permanency will be considered in a new staff appraisal process that incorporates performance standards developed by the firm of Arthur Andersen. While this step would enhance individual social worker accountability for progress toward permanency, the performance standards had not been implemented in September 2000 as planned, pending resolution of a citywide collective bargaining process. While CFSA needs to demonstrate more progress in moving children into permanent placements, the implementation of ASFA, with its specific time periods and financial penalties, introduces new risks for CFSA’s federal funding. Federal regulations provide for periodic audits of states’ substantial compliance with ASFA. The audits review outcomes and timeliness on small samples of about 30 to 50 cases.
If CFSA is deemed out of substantial compliance with ASFA, penalties could be imposed, jeopardizing a portion of the agency’s federal funding. CFSA officials expect that HHS will conduct this audit in July 2001. The District’s Efforts to Provide More Collaborative Services Are Limited in Scope Our previous work and studies by other organizations have shown that certain systemwide initiatives are critical to improving child welfare outcomes. Critical initiatives include collaborative operations among the agencies that provide child welfare and other support services, as well as case-specific initiatives aimed at bringing together children, family members, social workers, attorneys, and others to help address the needs of children and their families. Some participants in the District’s child welfare system have recently taken initial steps to improve operations. For example, District agencies have initiated recent efforts to integrate child welfare services with other family services. However, systemwide collaboration has not yet been fully developed, leaving the District’s child welfare system hampered by continued fragmentation. In addition, while some District families have access through the collaboratives to an approach called family case conferencing that brings relatives into decision-making around a child’s well-being, CFSA has not adopted this approach in its own practice with families. Collaboration on Two Levels Is Critical to Effective Child Welfare Systems In our earlier testimony, we reported that effective working relationships among key child welfare system participants who play a role in keeping children safe are essential to successful reform efforts. In order to function effectively, child welfare agencies need a rich array of services to meet the needs of abused and neglected children and their families.
Rarely, however, does a single state or local agency have control over acquiring all the needed services, and many of those services, such as mental health care and drug treatment, are outside the control of the child welfare agency. Therefore, strong collaboration among all stakeholders who play a role in helping children and families, such as the courts, private provider agencies, neighborhood collaboratives, the police department, local government leaders, substance abuse and mental health agencies, and agency legal counsel, is essential to obtaining the necessary services. Collaborative approaches can occur on two levels—some focus on integrating the key child welfare system participants to develop joint solutions to cross-cutting problems, and others focus on building collaboration in making decisions on individual child welfare cases. In our earlier testimony, we reported that strong collaboration among all stakeholders who play a role in helping children and families is essential to obtaining necessary services. For example, jurisdictions in five states—California, Florida, Illinois, North Carolina, and Ohio—have convened multidisciplinary advisory committees to work on resolving turf battles, dispel the mistrust among system participants, and develop and implement reforms. Committees were typically composed of representatives from key groups such as child welfare agencies, attorneys, judges, court-appointed special advocates, and other advocates. For example, Cook County, Illinois, established a Child Protection Advocacy Group of 32 individuals representing all offices of the court, the child welfare agency, private social service agencies, legal service providers, advocacy groups, and universities. The group’s subcommittees focus on various issues such as formulating alternatives to court intervention, making decisions in the best interest of the child, and terminating parental rights.
To help reform the child welfare system and the court’s role in it, the group was charged with advising the presiding judge on all matters relating to improving the court’s Child Protection Division. Participants in these groups noted that working together in this way provided a unifying force that was invaluable in initiating and institutionalizing reforms. In a 1999 report, the National Association of Public Child Welfare Administrators, an affiliate of the American Public Human Services Association, also cited the benefits of interagency collaboration. According to the association, an interagency approach to providing child protection and other services can improve agency coordination, identify service gaps, and advocate for needed resources. Other jurisdictions across the country have taken a different approach to building collaboration by pooling or blending funds from multiple funding sources to obtain the needed services on a more integrated, systemwide basis. For example, Boulder County, Colorado, pooled its child welfare allocation from the state with funding from the mental health agency and the youth corrections agency to provide joint programming and placement decision-making for adolescents in need of out-of-home care in group or residential settings. Similarly, the Wraparound Milwaukee program in Wisconsin blended Medicaid, child welfare, and federal grant funds into a single buying pool to purchase individualized, family-based services to help children placed in residential treatment centers return to their families, foster homes, or other living arrangements in the community. Other collaborative efforts focused on improving decision-making on individual cases, intervening at key points to gather and share comprehensive information among participants. For example, Day One Conferences in North Carolina’s District 20 are held on the first business day after a child is taken into custody by the child welfare agency. 
In attendance are the parents, child welfare caseworkers, guardians ad litem, public and mental health liaisons, attorneys, public education liaisons, child support liaisons, and law enforcement officers. These meetings provide a forum to arrange immediate services for the family and provide an opportunity to reach agreement on many aspects of the case outside the courtroom, thus reducing the number of times a case is continued in court. Our previous work showed that state and local officials who had implemented these conferences believe that additional time invested at the beginning of a case can shorten the length of time it takes to make a permanent placement decision. The National Council of Juvenile and Family Court Judges has also provided guidance on how to improve case-specific decision-making in child abuse and neglect cases. The council reported that the nation’s juvenile and family courts need clear guidance on how they can best fulfill their responsibilities in child abuse and neglect cases. According to the council, such guidance should explain the decision-making process in these cases and identify the individuals required to attend applicable proceedings. District Agencies Have Undertaken Initial Collaborative Efforts In the District of Columbia, numerous and diverse agencies provide programmatic and legal services for the many children in CFSA’s custody, as depicted in figure 1. District officials and child welfare experts familiar with the District acknowledge that collaboration is key to protecting children. Toward this end, various District agencies and others have undertaken initial efforts to work together to improve services for children and families. However, these efforts have been limited in scope. The information below highlights such interagency efforts. Children’s Advocacy Center. Created in 1995, the D.C.
Children’s Advocacy Center—“Safe Shores”—operates a nonprofit organization in partnership with the District and federal government agencies. The center coordinates the work of an interagency, multidisciplinary team that investigates allegations of physical and sexual abuse of children. The interagency team includes law enforcement officers, social service officials, prosecution attorneys, mental health workers, medical personnel, and victim advocates. Despite the collaborative efforts spearheaded by the center, its efforts focus on the population of physically and sexually abused children and do not reach the population of neglected children. Family Reunification. Recognizing the central role proper housing can play in helping to reunify children and their families, CFSA and the District’s Housing Authority have worked together to help families obtain suitable housing. Funds from the U.S. Department of Housing and Urban Development support this effort for the benefit of families with children in CFSA’s custody, among other program participants. However, the demand for housing in this program exceeds the supply. Court Reform Project. The D.C. Superior Court and CFSA have had difficulty sustaining effective working relationships, as discussed previously. To address these difficulties, the court, in conjunction with the National Council of Juvenile and Family Court Judges, has been selected to participate in a court reform project aimed at applying best practices to court processes, including practices to improve working relationships between CFSA and other selected child welfare system participants. Another approach to improving collaboration across programs and systemwide operations is pooling or blending funds. To help facilitate access to various funding sources, CFSA has budgeted for emergency cash assistance to help finance such needs as one-time rent deposits, furniture, and clothing.
While such assistance may help social workers and other staff gain access to funds in support of multiple needs, these budgeted funds do not cover other service needs, such as mental health services for children living with their birth parents or kin. The separation of funding streams that are tied to different programs may also hamper the ability to pool or blend funds across programs or to target funds appropriately. According to the Children’s Advocacy Center Executive Director’s testimony in May 2000, the historical lack of a citywide strategic funding plan for maltreated children has adversely affected the prevention of child abuse and has allowed funding from multiple sources to determine programming rather than permitting the needs of the community’s children to drive the system’s response. In addition to collaborative efforts involving specified agencies and funding sources, several CFSA officials, District officials, and other child welfare experts we spoke with suggested that systemwide authority is needed to provide overarching leadership and accountability. The information below highlights two existing structures to provide interagency oversight and coordination. Deputy Mayor for Children, Youth, and Families. In 1999, the District’s Mayor appointed a Deputy Mayor for Children, Youth, and Families as a new cabinet position with responsibility for overseeing initiatives aimed at addressing the needs of the District’s children, youth, and families. In this position, the Deputy Mayor oversees DHS, the Department of Health, the Office on Aging, and the Department of Recreation. CFSA management and District officials we interviewed acknowledged the Deputy Mayor as a focal point for fostering greater communication or collaboration among District government agencies on behalf of children and families. Mayor’s Advisory Committee on Child Abuse and Neglect.
During an earlier mayoral administration, the Mayor’s office established the Mayor’s Advisory Committee on Child Abuse and Neglect to promote public awareness of child abuse and neglect, assist in improving services and coordinating interagency activities, and make recommendations regarding needs assessments and policies, among other priorities. The committee recommends program improvements to the Mayor. While the committee includes 27 members, as of September 2000, its membership did not include representatives from the District’s substance abuse agency, public school system, or public housing authority. Moreover, the committee has relatively limited funding. It administers a $50,000 fund held in trust for the District’s children. Case-specific initiatives can improve efforts to meet the needs of children and their families as well. For example, District agencies recently initiated efforts to address circumstances that undermine family stability and case processing needs. The D.C. Superior Court’s Special Master, among other priorities, reviews the status of child welfare cases to facilitate timely action and reduce case backlogs. In addition, the Superior Court has begun a permanency mediation pilot designed to include birth parents and relatives in decisions concerning particular permanency goals for children, such as adoption. Finally, two of the Healthy Families/Thriving Communities neighborhood collaboratives began family case conferencing practices aimed at bringing families together, with the support of trained facilitators, to develop a strategy to support the child’s well-being. CFSA program managers said that, consistent with a neighborhood-based service delivery philosophy, the agency has chosen to rely on the collaboratives to initiate family case conferencing and other case-specific collaboration, preferring instead to hold its own special meetings with agency personnel once a child is in its custody.
As of September 2000, CFSA reported that it had referred 17 families to collaborative-sponsored family case conferencing. The receiver acknowledged that CFSA could adopt family case conferencing for its own case practice and that such an approach would benefit children and families. However, she said that this approach would not be appropriate for all families. Collaborative Efforts Are Constrained by Long-Standing Organizational Impediments While various entities in the child welfare system have begun efforts to improve collaboration between CFSA and others, these efforts have been constrained by ineffective working relationships among CFSA and other key participants. In 1999, the Mayor’s office issued the results of a study that reviewed the status of interagency operations in the District’s child welfare system. The study found that CFSA lacks functional relationships with critical executive branch government agencies, such as DHS, the Department of Health, Fire and Emergency Medical Services, and the District of Columbia public school system. In addition, CFSA staff and Superior Court judges said the agency and the court have poor working relationships. CFSA social workers have not consistently provided court reports and other hearing documentation when ordered by the court, and they have not always reported to court to attend hearings. Attorneys from OCC have responsibility for prosecuting civil abuse and neglect cases on behalf of the District of Columbia. CFSA attorneys acknowledge this role, noting that OCC represents not the legal interests of children but the District as a whole. As a result, the opinions of CFSA social workers and OCC attorneys are sometimes at odds. In this instance, CFSA social workers believe that they do not have adequate representation. Moreover, OCC management acknowledged that it does not have enough attorneys to cover all cases.
Given these resource constraints, they focus on new cases entering the system and other critical issues. As specified in its child welfare system emergency reform plan of October 2000, the District plans to provide additional resources to OCC to help eliminate the backlog of foster care and adoption cases and achieve compliance with ASFA. Toward this end, the plan requires a workload analysis of OCC and a survey of other jurisdictions to determine the staffing and resource levels necessary to help ensure ASFA compliance and to expedite prosecutions for child abuse and neglect. In addition, the U.S. District Court’s consent order requires the District to provide CFSA with adequate legal staff to enable the agency to meet its legal obligations under the MFO, including the creation of a legal unit within OCC to provide legal services to CFSA. Bifurcated responsibilities for child abuse investigations compound the organizational fragmentation of the District’s child welfare system. Under this bifurcated approach, the District’s criminal statutes assign MPD lead responsibility for investigating child abuse cases. The investigatory practices of MPD are sometimes at odds with those of CFSA social workers, which can make it more difficult for social workers to respond to the needs of the child and family based on their own established protocols. Investigatory responsibilities are further complicated by resource constraints. While the MFO requires MPD and CFSA to conduct joint investigations of abuse cases, department and agency officials said that the inability of both organizations to jointly staff investigations has prolonged investigatory time periods. MPD and CFSA attributed the lack of joint investigations to the lack of available police officers and social workers when an instance of child abuse is first alleged. The bifurcated approach also splits case administration responsibilities between CSS and CFSA. 
According to CSS staff, they administer about 600 child abuse cases that are not included in CFSA’s automated system. To address the difficulties posed by having bifurcated investigatory responsibilities between CFSA and MPD, a District task force has developed joint investigatory protocols involving child protection workers and law enforcement officials. The U.S. District Court’s consent order addresses the current bifurcated system and calls for District government to enact legislation requiring CFSA and MPD to conduct joint investigations of child abuse allegations. The Structural Issues Are Important in Transferring CFSA Back to the District Long-standing challenges such as a lack of effective working relationships in the child welfare system impede the District’s ability to fully apply best practices to protect children. As it prepares for the transfer of CFSA to local governance, the District faces many organizational and operational challenges. To maximize the opportunity for the child welfare system to improve the well-being of children and their families, District officials and child welfare experts have acknowledged that a sound transition plan should be developed to help facilitate this transfer. They believe this plan should address several factors, such as the organizational context within which the new child welfare agency would operate, the recruitment and retention of qualified personnel, and a mechanism for ongoing oversight and accountability. Participants in the child welfare system took the first step and developed an emergency reform plan at the request of the Subcommittee on the District of Columbia of the House Committee on Government Reform. Prepared with input from key participants in the District’s child welfare system and presented to the subcommittee by the Mayor in October 2000, this plan addresses the roles of OCC, MPD, the D.C. Superior Court, and others in the District’s child welfare system. In October 2000, the U.S.
District Court issued a consent order terminating the receivership upon the satisfaction of several major conditions, such as the enactment of legislation ending bifurcated investigations of child abuse and neglect allegations, the appointment of a child welfare agency administrator by the District’s mayor, and the development of licensing standards for foster homes and group homes. The order also provides for a 1-year probationary period during which CFSA must meet specific performance standards, such as meeting investigation time periods, complying with social work visitation requirements, and complying with ASFA time periods, among others. During this probationary period, the MFO is not enforceable, allowing the District time to make improvements to the system without the threat of litigation. At the conclusion of this period, if the court believes the agency has performed satisfactorily, the MFO will again become fully enforceable and the monitor will continue to report on the agency’s compliance with the order. The plan and subsequent consent order attempt to address a number of the organizational challenges faced by CFSA and the District’s child welfare system as a whole. The consent order mandates that CFSA be established as a cabinet-level agency with independent hiring authority and independent procurement authority consistent with District law, as a precondition for terminating the receivership. CFSA officials said that certain benefits would be associated with separate, cabinet-level status. These officials believe that cabinet-level status would provide CFSA with greater independence for setting program priorities and obtaining needed resources. For example, some officials believed this status would provide the agency more control over recruiting staff and would allow the agency to respond more flexibly to the needs of children and families. One official thought that cabinet-level status would enhance service delivery and interagency coordination. 
The emergency plan and court mandates contained in the consent order also call for additional responsibilities to be transferred to the agency. For example, these requirements call for transferring to CFSA responsibility for (1) implementing the ICPC from DHS; (2) licensing, regulating, and monitoring foster and group homes from the Department of Health; and (3) managing the child abuse cases currently handled by CSS. The emergency reform plan also calls for, among other things, developing a community-based service delivery system in which services are provided to children and families in their own neighborhoods and for expanding the Safe Shores Children’s Advocacy Center into a Children’s Assessment Center—co-locating and integrating the work of all agencies involved in the investigation and prosecution of child abuse and neglect. Accomplishing many of these initiatives, however, would require developing and implementing new local legislation and enhancing federal funding. Although the emergency plan provides time periods for implementing the initiatives, it does not discuss some of the details regarding implementation, such as the need for new staff to handle the increased responsibilities. A member of the Mayor’s staff indicated that the District will develop an implementation plan as part of its legislative package outlining how the District will carry out the requirements of the consent order. With respect to personnel issues, particularly those in higher-level management positions, it remains unclear whether the CFSA staff hired as employees under the receivership would be converted to District government positions. About one-third of CFSA’s current workforce was hired by the former receiver. CFSA officials added that the agency will need to plan for how it will address the future employment status of these employees upon transfer of the agency to the District. The emergency reform plan was silent on how these personnel issues will be handled.
The consent order, however, requires the named parties to develop a plan for addressing the status of employees hired under the receivership. With regard to the continued need for agency oversight, District officials outside CFSA have pointed to the need for a mechanism to ensure the agency’s accountability in the future. Upon transfer of CFSA to the District, the court-appointed monitor will retain responsibility for assessing the extent to which CFSA meets the performance standards contained in the consent order. The development of a baseline by which to measure CFSA’s performance is a critical step in carrying out the consent order. The order provides the monitor with the authority to establish the baselines for compliance by conducting a case record review and by relying on CFSA data that the monitor determines are reliable and appropriate. The monitor will also have authority to modify the standards if the defendant or plaintiffs believe they are unreasonable in relation to the baseline. Conclusions CFSA faces many of the same challenges it faced more than a decade ago when it became the subject of a class action suit filed on behalf of the District’s abused and neglected children. Since then the agency has continued to confront long-term managerial shortcomings, and the lack of integration in its child welfare system has contributed significantly to the lack of success in preventing children from entering the system and reducing their length of stay while in the District’s care. After 5 years of operating under receivership, CFSA has shown limited progress in meeting the requirements of the MFO. Compounding these agency challenges, the child welfare system—of which CFSA is a part—continues to operate without a fully developed collaborative structure and the effective working relationships it needs to provide integrated services to children and their families. 
Moreover, the agency has not fully applied best practices, such as family case conferencing, that could enhance outcomes for children and families. While the goals outlined in the emergency reform plan and consent order are a necessary first step, long-term structural and operational challenges must be considered in transferring the agency back to local governance in order to foster improved outcomes. It will take a fully collaborative system to help ensure progress toward improving program outcomes, and sustained commitment from the Mayor and District government to make achieving the goals a priority. Without such collaboration and leadership, the District will continue to lack the operational framework necessary to protect and meet the needs of children and ultimately to ensure accountability for these goals. Agency Comments and Our Evaluation We received written comments on a draft of this report from CFSA and one oral comment from the District of Columbia. CFSA found the report to be balanced in its findings but believed that clarification was needed on several points (see app. IV). CFSA also provided a number of technical comments that we incorporated where appropriate. One of the agency’s comments addressed the issue of social worker caseloads. CFSA commented that it is somewhat misleading to report caseload averages for a team of social workers rather than an average caseload per worker in the various program areas. When we asked CFSA for caseload data during the course of our review, the agency provided the range of average caseloads by team. These data do, however, reflect average caseloads carried by workers assigned to teams in each program area. Both CFSA and the Deputy Mayor for Children, Youth, and Families commented on the status of the agency’s policies, indicating that policies had existed to a greater extent than portrayed in the draft report.
CFSA said it has relied on a 1995 policy handbook and subsequent policy revisions to guide the work of the agency. CFSA further stated that it had been developing an on-line version during the course of our review. We have reviewed the 1995 policy handbook and we have noted the extent to which these policies address court-mandated requirements in appendix II. However, despite the existence of the 1995 handbook, staff we spoke to throughout the course of our review expressed confusion over which policies and procedures to follow and, in some cases, which policies had been approved. As we agreed with your offices, unless you publicly announce the report's contents earlier, we plan no further distribution of it until 4 days from the date of this letter. We will then send copies to the Honorable Anthony A. Williams, Mayor of the District of Columbia, the interim receiver, and other District officials. We will also send copies to others who are interested on request. If you or your staffs have any questions about this report, please contact Diana M. Pietrowiak, Assistant Director, at (202) 512-6239. Other major contributors were Christopher D. Morehouse, Elizabeth O'Toole, and Mark E. Ward. Scope and Methodology Using primary and secondary source material, we designed our methodology to validate the status of progress the Child and Family Services Agency (CFSA) has made toward meeting requirements of the modified final order (MFO). We asked CFSA to provide copies of written policies and procedures and management information system (MIS) reports so that we could assess its status in complying with the court-mandated requirements. We did not independently verify the accuracy of the data in the MIS reports that CFSA provided.
In addition, we reviewed our earlier reports and studies by the American Public Human Services Association, Child Welfare League of America, and other organizations to identify generally accepted best practices of child welfare systems and we assessed the extent to which the District had applied these principles in implementing systemwide child welfare changes. In conducting our work, we relied on a broad array of testimonial, documentary, and analytical evidence in responding to the three research questions. To identify the financial and operational changes that the receiver appointed in 1997 made to comply with the MFO requirements, we analyzed policies, procedures, and information system reports generated by the receiver and reports from other agencies. Based, in part, on findings contained in our testimony entitled Foster Care: Status of the District of Columbia's Child Welfare Reform Efforts (GAO/T-HEHS-00-109, May 5, 2000), our work focused on requirements directly related to agency resources, services for children and families, working relationships with other key stakeholders, and program results. These MFO requirements direct CFSA to address staffing and caseloads, financial management, management information systems, resource development, out-of-home care, and family services. We also obtained and analyzed child welfare agency policies, regulations, memorandums, and other information on agency procedures in order to document financial and operational changes undertaken in efforts to attain MFO compliance. To obtain a broad range of perspectives from staff across CFSA's program areas and with different levels of experience, we interviewed CFSA managers, supervisors, senior social workers, new hires, and other officials knowledgeable about the level of agency compliance. For group interviews with agency staff, we asked CFSA to invite employees with diverse levels of experience to meet with us.
Regarding the efforts to initiate improvements in the District’s child welfare system, such as interagency collaboration and the pooling or blending of funds, we examined the extent to which such practices have been included in the day-to-day operations of the District’s system and the challenges the system faces in adopting such initiatives. To make this assessment, we identified initiatives other organizations cited as efforts intended to improve the operations and program results of child welfare systems in other jurisdictions. These organizations include the Annie E. Casey Foundation, the Casey Family Program, the Child Welfare League of America, the Edna McConnell Clark Foundation, and the National Council of Juvenile and Family Court Judges. To identify additional changes required to return the District’s child welfare agency to local governance, we focused our analysis on areas that affect the interaction of child welfare agencies with other organizations. We obtained perspectives on these issues from CFSA staff, program officials in other District of Columbia government agencies, and other organizations. In addition, we analyzed transfer-related documentation developed by the Mayor’s office and other organizations to examine proposed scenarios and operational issues the District identified in the context of transferring CFSA back to local governance. District of Columbia Child Welfare System Features the MFO Required, July 2000 Protective services (intake and assessment) Written policies and procedures for cooperative screening and investigation with the Metropolitan Police Department (MPD) of alleged child abuse complaints. Written policies and procedures for screening complaints of abuse and neglect to determine whether they are within the definitions of District law. Written policies and procedures for prioritizing response times to each report of abuse and neglect. 
Written policies and procedures for conducting risk assessments and ensuring that the child protective services investigations and decisions are based on a full and systematic analysis of a family’s situation and the factors placing a child at risk and for guiding decision-making. Written policies and procedures for determining which children (who are the subject of abuse or neglect reports or other children in the household) should receive a complete medical, psychological, or psychiatric evaluation. Ability to produce data showing, for the children who need medical reports, how many received them within 48 hours after the report of neglect or abuse was supported. Written policies and procedures for the reporting, investigation, and determination of reports of neglect or abuse (including specifications of what information must be included), in a final determination of whether abuse or neglect has occurred. A standardized form for recording final determination. Written policies and procedures for ensuring that workers receive immediate access to police protection. Written policies and procedures for determining and ensuring that families are referred to and receive the intensity and level of services necessary to preserve family relationships, to prevent additional abuse and neglect, to promote better parental care, and to ensure good care for the child. Written policies and procedures for specifying criteria for the provision of family services and for referring families to private agencies the agency contracts with for such services. Ability to produce management data showing the actual caseloads by worker, for workers in home-based services units. 
Written policies and procedures for governing the placement process to ensure that children are placed in the least restrictive, most family-like setting that meets their individual needs and that they are placed in or in close proximity to the homes and communities in which they resided before entering the agency’s custody. Written policies and procedures for ensuring the prompt and appropriate placement—including return home, where appropriate—of infants who are residing in hospitals in the District of Columbia but who are, or are soon to be, medically ready for discharge. Ability to produce management data showing, for children needing medical screening on entering the agency’s custody, those who receive screening within 24 hours. Ability to produce management data showing, for children placed in substitute care facilities and needing a thorough, professional evaluation of their needs, those who receive evaluation within 30 days. Written policies and procedures for providing regulations to govern all foster-care facilities it places children in. Written policies and procedures that establish a planning process that initially will seek to work intensively with the child’s parents and other appropriate family members to allow the child to remain at home, if appropriate; in instances in which removal is necessary, will work intensively with the child’s parents and other appropriate family members collaboratively to return the child home under appropriate circumstances consistent with reasonable professional standards; and if, after all reasonable efforts have been made but have not succeeded in returning the child home, will assure the child an alternative, appropriate, permanent placement as quickly as possible. Written policies and procedures for ensuring that in all instances in which a report of abuse or neglect is supported, the case is transferred to a foster-care worker within 5 working days of the finding. 
Ability to produce management data showing, of all cases in which a report of abuse or neglect is supported, those that were transferred to a foster-care worker within 5 working days of the finding. Ability to produce management data showing, of all cases in which a report of abuse or neglect is substantiated, those in which a worker met with parents within 7 calendar days of the substantiation, those in which a meeting was held after 7 days and those in which no meeting was held. Ability to produce management data showing children for whom a case plan was not developed within 30 days. Ability to produce management data showing the number of children with a permanency goal of returning home for 12 months or more. A standardized form for 90-day reviews. Ability to produce management data showing the number of children with a current, valid 90-day review; number of children without such a review. Written policies and procedures for governing the process of freeing children for adoption and matching children with adoptive homes. Ability to produce management data showing, of the children with a permanency goal of adoption, the number referred to the adoption branch within 5 days of their permanency goal becoming adoption. Ability to produce management data showing the number of children legally free for adoption and awaiting placement for more than 6 months. Ability to produce management data showing, of the children placed in a DHS foster home, the number whom an agency worker has visited at specified intervals. Ability to produce management data showing, of the children placed in a private-agency foster home, the number whom a private agency worker has visited at specified intervals. Ability to produce management data showing, of the children placed in a foster family or facility, the number who have been visited at specified intervals. Written policies and procedures for ensuring that all children receive administrative reviews. 
Written policies and procedures by which the quality assurance unit will conduct quality assurance reviews. A standardized form used in the quality assurance process. Ability to produce management data showing the caseload figures by worker for all workers conducting investigations of reports of abuse or neglect. Ability to produce management data showing the caseload figures by worker for all workers providing services to families in which the children are living in their home. Ability to produce management data showing the caseload figures by worker for all workers providing services to children in placement, broken out by children with special needs and all other children. Ability to produce management data showing the caseload figures by worker for all workers with responsibility for children (including situations in which the private agency has responsibility for both the child and the family) in placement with a private agency. Ability to produce management data showing the caseload figures by worker for all workers with responsibility for children in the adoption branch. Written policies and procedures for using a caseload weighing formula to ensure that workers who have caseloads that fall into more than one category (mixed caseloads) have caseloads that conform with the equivalent of the maximum limits. Ability to produce management data showing the caseload figures by worker for all workers with mixed caseloads. Ability to produce management data showing the caseload figures by supervisor for all supervisors. Ability to produce management data showing the number of children assigned to a worker within 3 hours of the agency’s assuming custody of the child. Ability to produce management data showing the formal identification and assessment of District of Columbia practices and procedures that affect the recruitment and retention of social workers. A recruitment plan for professional staff. 
Ability to produce management data showing the number of supervisors with MSWs and the number without. Ability to produce management data showing the number of supervisors with at least 3 years of social work experience in child welfare. Written policies and procedures for providing a comprehensive child-welfare training program that will ensure that all persons charged with responsibilities for children in the plaintiff class will receive sufficient training to permit them to comply with the relevant mandates of agency policy, District of Columbia law, and all MFO provisions. An assessment of staff training needs. Assessments of training effectiveness. Ability to produce management data showing the number of new hires with 80 hours of instructional training. Ability to produce management data showing the number of new hires with 80 hours of field training. Ability to produce management data showing the number of workers with 40 hours of in-service training each calendar year. Ability to produce management data showing the number of senior workers with casework responsibility who have 24 hours of training. Ability to produce management data showing the number of supervisors meeting within 3 months of promotion to supervisor the requirement for 40 hours of training that is directed to supervising child welfare social workers. Ability to produce management data showing the number of supervisors with 24 hours of in-service training each calendar year. Ability to produce management data showing the number of foster parents completing 15 hours of training. Ability to produce management data showing the number of prospective adoptive parents completing 30 hours of training. Ability to produce management data showing the number of judges trained to date in judicial training program. 
Ability to produce management data showing the number of professional staff demonstrating satisfactory mastery of the curriculum for the following training: new hire 80-hour instruction, new hire 80-hour field, workers 40-hour in-service, senior workers 24-hour additional, supervisors 40-hour within 3 months, and supervisors 24-hour in-service. Resource needs assessments. Resource development plan. Reports projecting the number of emergency placements, foster homes, group homes, therapeutic foster homes, and institutional placements that children in the agency's custody will require during the next 12 months. A placement implementation plan. Written policies and procedures for ensuring that decisions are made promptly concerning the issuance of a license for any foster-care facility in which a member of the plaintiff class may be placed, including foster homes, group homes, residential treatment centers, and other child-care facilities. Written policies and procedures for monitoring all facilities and foster homes in which children in the agency's physical or legal custody are placed. Ability to produce management data showing the number of foster homes and group facilities the monitoring unit visits at least once a year. Ability to produce management data showing by worker the caseload figures for all workers monitoring foster homes. Ability to produce management data showing by worker the caseload figures for all workers monitoring group homes and institutions. Written policies and procedures for licensing relatives as foster parents. Written policies and procedures for specific contract performance and a contract performance review process for each category of services. Ability to produce information systems reports showing, for each worker with direct responsibility for any children in the agency's physical or legal custody, the number of children for whom that worker is responsible.
Ability to produce information systems reports showing, for each worker with direct responsibility for any children in the agency’s physical or legal custody, the number of children for whom that worker is responsible for whom any of the following events either are late or are due in the 60 days following the report: expiration of allowed emergency care status, case plan review, administrative review, judicial review, or dispositional hearing. Ability to produce information systems reports showing, for each supervisor who has principal responsibility for any child in the agency’s physical or legal custody, the number of children for whom that supervisor is responsible. Ability to produce information systems reports showing all facilities—foster homes, group homes, institutions, consortium or other contract homes, or any other facility for which any vacancies exist—including the name of the facility, the type of facility, and the number of vacancies. Ability to produce information systems reports showing the number of children, by unit, who are placed in facilities—foster homes, group homes, institutions, consortium or other contract homes, or any other facility—that do not have current valid permits or licenses. Ability to produce information systems reports showing the number of children, by unit, who are placed in facilities—foster homes, group homes, institutions, consortium or other contract homes, or any other facility—in which there are more children than is permitted by the facility’s license or permit. Ability to produce information systems reports showing each facility—foster homes, group homes, institutions, consortium or other contract homes, or any other facility—in which there are more children than is permitted by the facility’s license or permit. 
Ability to produce information systems reports showing all social workers, by unit, who have caseloads exceeding the caseload limits established in the MFO, including the name and identification of the worker, the worker’s supervisor, and the size of the worker’s caseload. Ability to produce information systems reports showing all cases in which an investigation has not been initiated within 48 hours of the receipt of the report. Ability to produce information systems reports showing all cases in which an investigation has not been completed within 30 days of the receipt of the report of abuse or neglect. Ability to produce information systems reports showing all cases in which a child does not have a written case plan within 30 days of entering the department’s custody. Ability to produce information systems reports showing all cases in which a child has not received an administrative review during the preceding 9 months. Ability to produce information systems reports showing all cases in which a child has not received a dispositional hearing within 21 months of entering the department’s custody. Ability to produce information systems reports showing all cases in which a child younger than 6 has been placed in a congregate-care facility. Ability to produce information systems reports showing all cases in which a child has had a plan of adoption and who has not been referred to the adoption program within 30 days of the establishment of the permanency goal. Ability to produce information systems reports showing all cases in which a child younger than 12 has been assigned a permanency goal of continued care. Ability to produce information systems reports showing all cases in which a child younger than 16 has been assigned a permanency goal of independent living. 
Written policies and procedures for maximizing funds available to the agency through titles IV-B and IV-E of the Adoption Assistance and Child Welfare Act of 1980, the Medicaid Act, and Supplemental Security Income. We assessed CFSA's ability to produce written policies and procedures, management data, and information system reports as evidence of the extent to which it had developed practices required by the MFO. Child and Family Services Agency Organization, September 2000 Related GAO Products
Child Welfare: New Financing and Service Strategies Hold Promise, but Effects Unknown (GAO/T-HEHS-00-158, July 20, 2000).
Foster Care: HHS Should Ensure That Juvenile Justice Placements Are Reviewed (GAO/HEHS-00-42, June 9, 2000).
Foster Care: Status of the District of Columbia's Child Welfare System Reform Efforts (GAO/T-HEHS-00-109, May 5, 2000).
Foster Care: States' Early Experiences Implementing the Adoption and Safe Families Act (GAO/HEHS-00-1, Dec. 22, 1999).
Foster Care: Effectiveness of Independent Living Services Unknown (GAO/HEHS-00-13, Nov. 5, 1999).
Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process (GAO/HEHS-00-12, Nov. 19, 1999).
Management Reform: Elements of Successful Improvement Initiatives (GAO/T-GGD-00-26, Oct. 15, 1999).
Foster Care: Kinship Care Quality and Permanency Issues (GAO/HEHS-99-32, May 6, 1999).
Foster Care: Increases in Adoption Rates (GAO/HEHS-99-114R, Apr. 20, 1999).
Juvenile Courts: Reforms Aim to Better Serve Maltreated Children (GAO/HEHS-99-13, Jan. 11, 1999).
Child Welfare: Early Experiences Implementing a Managed Care Approach (GAO/HEHS-99-8, Oct. 21, 1998).
Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers (GAO/HEHS-98-182, Sept. 30, 1998).
Child Protective Services: Complex Challenges Require New Strategies (GAO/HEHS-97-115, July 21, 1997).
Child Welfare: States' Progress in Implementing Family Preservation and Support Services (GAO/HEHS-97-34, Feb. 18, 1997).
Many children have languished in the care of the District of Columbia's child welfare system for extended periods of time. Years of indifference, managerial shortcomings, and long-standing organizational divisiveness have undermined the system's ability to safeguard these children. As a result of these prolonged deficiencies, the U.S. District Court for the District of Columbia issued a remedial order in 1991 to improve the performance of the child welfare agency. GAO assessed the agency's progress in complying with the court's requirements, specifically examining how financial and operational changes made by the Child and Family Services Agency (CFSA) have affected the protection of children and the provision of services to children and families, the extent to which critical elements of an effective child welfare system have been applied in the District, and issues that need to be addressed in planning for the transfer of CFSA back to local governance. GAO found that the financial and operational changes have not significantly improved the protection of children or the delivery of other child welfare services. Although the District has started to integrate child welfare services with other support services, it still lacks a fully developed collaborative structure to help foster more efficient day-to-day operations and improve program accountability. Furthermore, multiple issues must be resolved before CFSA can be transferred back to local governance.
Background In 1998 VA and DOD, along with the Indian Health Service (IHS), began an initiative to share patient health care data, called the government computer-based patient record (GCPR) project. At that time, each agency collected and maintained patient health information in separate systems, and their health facilities could not electronically share patient health information across agency lines. GCPR was envisioned as an electronic interface that would allow physicians and other authorized users at VA, DOD, and IHS health facilities to access data from any of the other agencies' health facilities. The interface was expected to compile requested patient information in a "virtual" record that could be displayed on a user's computer screen. In reporting on the initiative in April 2001, we raised doubts about GCPR's ability to provide expected benefits. We noted that the project was experiencing schedule and cost overruns and was operating without clear goals, objectives, and consistent leadership. We recommended that the participating agencies (1) designate a lead entity with final decisionmaking authority and establish a clear line of authority for the GCPR project, and (2) create comprehensive and coordinated plans that included an agreed-upon mission and clear goals, objectives, and performance measures, to ensure that the agencies could share comprehensive, meaningful, accurate, and secure patient health care data. VA, DOD, and IHS agreed with our findings and recommendations. In March 2002, however, we again reported that the project was continuing to operate without clear lines of authority or a lead entity responsible for final decisionmaking. Further, the project continued to move forward without comprehensive and coordinated plans and an agreed-upon mission and clear goals and measures. In addition, the participating agencies had announced a revised strategy that was considerably less encompassing than the project was originally intended to be.
For example, rather than serve as an interface to allow data sharing across the three agencies’ disparate systems, as originally envisioned, the revised strategy initially called only for a one-way transfer of data from DOD’s current health care information system to a separate database that VA hospitals could access. In further reporting on this initiative in June 2002, we recommended that VA, DOD, and IHS revise the original goals and objectives of the project to align with their current strategy, commit the executive support necessary to adequately manage the project, and ensure that it followed sound project management principles. In September 2002 we reported that VA and DOD had made some progress toward electronically sharing patient health data. The two departments had renamed the project the Federal Health Information Exchange (FHIE) program and, consistent with our prior recommendation, had finalized a memorandum of agreement designating VA as the lead entity for implementing the program. With this agreement, FHIE became a joint effort between VA and DOD to achieve the exchange of health care information in two phases. The first phase, completed in mid-July 2002, enabled the one-way transfer of data from DOD’s existing health information system to a separate database that VA hospitals could access. A second phase, finalized earlier this month, completed VA’s and DOD’s efforts to add to the base of patient health information available to VA clinicians via this one-way sharing capability. VA and DOD reported total FHIE costs of about $85 million through fiscal year 2003. The revised strategy also envisioned VA and DOD pursuing a longer term, two-way exchange of health information. This initiative, known as HealthePeople (Federal), is premised upon the departments’ development of a common health information architecture comprising standardized data, communications, security, and high-performance health information systems. 
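The one-way transfer that FHIE's first phase achieved can be pictured as a simple extract-and-load step: records leave the source system and land in a separate store that the receiving side can only read. The following Python sketch is purely illustrative; the record fields, function names, and data stores are hypothetical stand-ins, not the departments' actual systems.

```python
# Illustrative sketch of a one-way health data transfer: records are
# extracted from a source system and loaded into a separate store that
# the receiving side can query. All names and fields are hypothetical.

def extract_records(source):
    """Pull records for separated service members from the source system
    (modeled here as a plain list of dictionaries)."""
    return [r for r in source if r.get("separated")]

def load_into_shared_store(records, store):
    """Load extracted records into the shared store, keyed by patient ID."""
    for r in records:
        store[r["patient_id"]] = r
    return store

# Hypothetical source data standing in for a DOD-style system.
dod_system = [
    {"patient_id": "A1", "separated": True,  "allergies": ["penicillin"]},
    {"patient_id": "B2", "separated": False, "allergies": []},
]

shared_store = load_into_shared_store(extract_records(dod_system), {})
print(sorted(shared_store))  # ['A1']
```

The key property of the one-way design is visible in the sketch: data flows only from the source into the shared store, so the receiving side's clinicians gain read access without the source system ever querying back.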
The joint effort is expected to result in the secured sharing of health data required by VA’s and DOD’s health care providers between systems that each department is currently developing—DOD’s Composite Health Care System II (CHCS II) and VA’s HealtheVet VistA. DOD began developing CHCS II in 1997 and has completed its associated clinical data repository that is key to achieving an electronic interface. DOD expects to complete deployment of all of its major system capabilities by September 2008. The department reported expenditures of about $464 million for the system through fiscal year 2003. VA began work on HealtheVet VistA and its associated health data repository in 2001, and expects to complete all six initiatives that make up this system in 2012. VA reported spending about $120 million on HealtheVet VistA through fiscal year 2003. Under the HealthePeople (Federal) strategy, VA and DOD envision that, upon entering military service, a health record for the service member will be created and stored in DOD’s CHCS II clinical data repository. The record will remain in the clinical data repository and be updated as the service member receives medical care. When the individual separates from active duty and, if eligible, seeks medical care at a VA facility, VA will then create a medical record for the individual, which will be stored in its health data repository. Upon viewing the medical record, the VA clinician would be alerted and provided access to clinical information on the individual also residing in DOD’s repository. In the same manner, when a veteran seeks medical care at a military treatment facility, the attending DOD clinician would be alerted and provided with access to the health information existing in VA’s repository. According to VA and DOD, the planned approach would make virtual medical records displaying all available patient health information from the two repositories accessible to both departments’ clinicians. 
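The two-way "virtual record" concept described above amounts to merging, at view time, whatever each repository holds for a given patient. The Python sketch below illustrates only that merging idea; the repository contents, field names, and function are hypothetical and do not represent the departments' actual interface design.

```python
# Illustrative sketch of a "virtual" patient record assembled on demand
# from two separate repositories. Names and fields are hypothetical.

def virtual_record(patient_id, repo_a, repo_b):
    """Merge whatever each repository holds for the patient into one view.
    Neither repository is modified; the merged view exists only on display."""
    merged = {"patient_id": patient_id}
    for repo in (repo_a, repo_b):
        merged.update(repo.get(patient_id, {}))
    return merged

# Hypothetical repository contents for one patient.
dod_repo = {"A1": {"immunizations": ["tetanus"], "allergies": ["penicillin"]}}
va_repo = {"A1": {"medications": ["lisinopril"]}}

record = virtual_record("A1", dod_repo, va_repo)
print(sorted(record))  # ['allergies', 'immunizations', 'medications', 'patient_id']
```

The design choice this models is that each department retains authoritative custody of its own data; the clinician's combined view is assembled per request rather than stored as a third copy.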
VA officials have stated that they anticipate being able to exchange some degree of health information through an interface of their health data repository with DOD’s clinical data repository by the end of calendar year 2005. Lacking A Defined Strategy, VA And DOD Have Made Limited Progress Toward A Common Health Information Exchange VA’s and DOD’s ability to exchange data between their separate health information systems is crucial to achieving the goals of HealthePeople (Federal). Yet successfully sharing patient health information via a secure electronic interface between each of their data repositories can be complex and challenging, and depends on their having a clearly articulated architecture, or blueprint, defining how specific technologies will be used to achieve the interface. Developing, maintaining, and using an architecture is a best practice in engineering information systems and other technological solutions. An architecture would articulate, for example, the system requirements and design specifications, database descriptions, and software descriptions that define the manner in which the departments will electronically store, update, and transmit their data. Equally critical is an established project management structure to guide project development. Industry best practices and information technology project management principles stress the importance of accountability and sound planning for any project, particularly an interagency effort of the magnitude and complexity of this one. Inherent in such planning is the development and use of a project management plan that describes, among other factors, the project’s scope, implementation strategy, lines of responsibility, security requirements, resources, and estimated schedule for development and implementation. 
As was the situation when we testified last November, VA and DOD continue to lack an explicit architecture detailing how they intend to achieve the data exchange capability, or just what they will be able to exchange by the end of 2005—their projected time frame for putting this capability into operation. VA officials stated that they recognize the importance of a clearly defined architecture, but acknowledged that the departments’ actions were continuing to be driven by the less-specific, high-level strategy that has been in place since September 2002. The officials added that just this month, the departments had taken a first step toward trying to determine how their separate data repositories would interface to enable the two-way exchange of patient health records. Specifically, officials in both departments pointed to a project that they are undertaking in response to requirements of the National Defense Authorization Act for Fiscal Year 2003, which mandated that VA and DOD develop a real-time interface, data exchange, and capability to check prescription drug data for outpatients by October 1, 2004. VA’s Deputy Chief Information Officer for Health stated that they hope to determine from a prototype planned for completion by next September whether the interface technology developed to meet this mandate can be used to facilitate the exchange of data between the health information systems that they are currently developing. By late February, VA had hired a supporting contractor to develop the planned prototype, but the departments had not yet fully defined their approach or requirements for developing and demonstrating its capabilities. DOD officials stated that the departments would rely on the contractor to more fully define the technical requirements for the prototype. 
Further, according to VA officials, since the departments’ new health information systems that are intended to be used under HealthePeople (Federal) have not yet been completed, the demonstration may only test the ability to exchange data in VA’s and DOD’s existing health systems—the Veterans Information Systems and Technology Architecture and the Composite Health Care System, respectively. Thus, given the early stage of the prototype and the uncertainties regarding what capabilities it will demonstrate, there is little evidence and assurance as to how or whether this project will contribute to defining the architecture and technological solution for the two-way exchange of patient health information. Further compounding the challenges and uncertainty that VA and DOD face is the lack of a fully established project management structure to ensure the necessary day-to-day guidance of and accountability for the departments’ investments in and implementation of the electronic interface between their systems. Officials in both departments maintain that they are collaborating on this initiative through a joint working group and with oversight provided by the Joint Executive Council and VA/DOD Health Executive Council. However, neither department has had the authority to make final project decisions binding on the other, and there has been a visible absence of day-to-day project oversight for the joint initiative to develop an electronic interface between the departments’ planned information systems. Further, VA and DOD are operating without a project management plan describing the overall development and implementation of the interface, including the specific roles and responsibilities of each department in developing, testing, and deploying the interface and addressing security requirements. In discussing these matters last week, VA officials stated that the departments had recently designated a program manager for the planned prototype. 
Further, VA and DOD officials added that they had begun discussions to establish an overall project plan and finalize roles and responsibilities for managing the joint initiative to develop an electronic interface. Until these essential project management elements are fully established, VA and DOD will lack assurance that they can successfully develop and implement an electronic interface and the associated capability for exchanging health information within the time frames that they have established.

Progress Toward Achieving a Two-Way Data Exchange Has Been Limited

In the absence of an architecture and project management structure for the initiative, VA and DOD have continued to make only limited progress toward developing the technological solution essential to interfacing their patient health information. To their credit, the departments have continued essential steps toward standardizing clinical data—important for exchanging health information between disparate systems. The Institute of Medicine’s Committee on Data Standards for Patient Safety has reported the lack of common data standards as a key factor preventing information sharing within the health care industry. Over the past 4 months, VA and DOD have agreed to adopt additional data standards for uniformly presenting in any system data related to demographics, immunizations, medications, names of laboratory tests ordered, and laboratory result contents. Nonetheless, as reflected in figure 1, the technology needed to achieve a two-way exchange of patient health information remains far from complete, with only DOD’s data repository having been fully developed. Since November, both departments have delayed key milestones associated with the development and deployment of their individual health information systems. VA program officials told us that completion of a prototype for the department’s health data repository has been delayed approximately a year, until the end of this June.
The officials explained that earlier testing of the prototype had slowed clinicians’ use of the clinical applications, necessitating a revised approach to populating the repository. In addition, while DOD officials previously stated that the department planned to complete the deployment of its first release of CHCS II functionality (a capability for integrating DOD clinical outpatient processes into a single patient record) in September 2005, the agency has now extended its completion date to June 2006. According to DOD officials, the schedule for completing this deployment was revised because of a later than anticipated decision on when the department could proceed with its worldwide deployment. Collectively, the lack of an architecture and project management structure, coupled with delays in the departments’ completion of key projects, places VA and DOD at increased risk of being unable to successfully accomplish the HealthePeople (Federal) initiative and the overall goal of more effectively meeting service members’ and veterans’ health care and disability needs.

VA and DOD Could Benefit from Current and Past Recommendations on Sharing Electronic Medical Records

Mr. Chairman, as part of our review, you asked that we update the status of VA’s and DOD’s actions to address prior recommendations related to sharing electronic medical information. In this regard, both the President’s task force and we have made a number of recommendations to VA and DOD for improving health care delivery to beneficiaries through better coordination and management of their electronic health sharing initiatives. In its final report of May 2003, the President’s task force recommended specific actions for providing timely, high-quality care through effective electronic sharing of health information, such as the development and deployment, by fiscal year 2005, of electronic medical records that are interoperable, bidirectional, and standards-based.
The departments reported that they are in various stages of acting on these recommendations, with anticipated completion dates ranging from June of this year to September 2005. Our attachment to this statement summarizes these specific recommendations, and the departments’ reported actions to address them. Giving full consideration to these recommendations could provide VA and DOD with relevant information for determining how to proceed with the HealthePeople (Federal) initiative. Also, as mentioned earlier, our prior reviews of the departments’ project to develop a government computer-based patient record determined that the lack of a lead entity, clear mission, and detailed planning to achieve that mission had made it difficult to monitor progress, identify project risks, and develop appropriate contingency plans. As a result, in reporting on this initiative in April 2001 and again in June 2002, we made several recommendations to help strengthen the management and oversight of this project. VA and DOD have taken specific measures in response to our recommendations for enhancing overall management and accountability of the project, with demonstrated improvements and outcomes. Extending these practices to current activities supporting the development of HealthePeople (Federal) could strengthen the departments’ approach to successfully accomplishing a two-way health information exchange. In summary, Mr. Chairman, achieving an electronic interface to enable VA and DOD to exchange patient medical records between their health information systems is an important goal, with substantial implications for improving the quality of health care and disability claims processing for our nation’s military members and veterans. 
However, in seeking a virtual medical record based on the two-way exchange of data between their separate health information systems, VA and DOD have chosen an approach that necessitates the highest levels of project discipline, including a well-defined architecture for describing the interface for a common health information exchange and an established project management structure to guide the investment in and implementation of this electronic capability. At this time, the departments lack these critical components, and thus risk investing in a capability that could fall short of their intended goals. The continued absence of a clear approach and sound planning for the design of this new electronic capability elevates concerns and skepticism about exactly what capabilities VA and DOD will achieve as part of HealthePeople (Federal), and in what time frame. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time.

Contacts and Acknowledgments

For information about this testimony, please contact Linda D. Koontz, Director, Information Management Issues, at (202) 512-6240 or at [email protected], or Valerie C. Melvin, Assistant Director, at (202) 512-6304 or at [email protected]. Other individuals making key contributions to this testimony include Nabajyoti Barkakati, Michael P. Fruitman, Carl L. Higginbotham, Barbara S. Oliver, J. Michael Resser, Sylvia L. Shanks, and Eric L. Trout.

Appendix: VA’s and DOD’s Reported Actions to Address Recommendations in the President’s Task Force Report of May 26, 2003

Department of Veterans Affairs (VA) Department of Defense (DOD)

The VA/DOD Joint Strategic Plan and the Joint Electronic Health Records Plan have set September 2005 as the target date by which VA and DOD will achieve interoperability of health data.
The VA/DOD Health Executive Council Information Management/Information Technology Work Group is on track to complete this capability by the end of fiscal year 2005. In March 2004, the departments awarded a contract to develop a bidirectional pharmacy solution that will demonstrate interoperability in a prototype environment. The departments are on track to complete the prototype by October 2004. Operational interoperability is planned for fiscal year 2005. The pharmacy prototype is the initial effort within the Clinical Health Data Repositories (CHDR) framework. This framework is the effort to develop software component services that will be used by the VA and DOD data repositories. The prototype has a planned completion date of October 2004. This issue remains under review by the Veterans Health Administration’s HIPAA Program Office. It is VA’s understanding that VA and DOD have concluded that this is not necessary in order to share information on patients that both departments are treating. DOD believes that it and VA can achieve the appropriate sharing of protected health information within the guidelines of the current regulations. The HIPAA privacy rule has a specific exception authorizing one-way sharing of health data at the time of a service member’s separation. This supports the “seamless transition to veteran status.” The Joint Strategic Plan has set June 2004 as the target date for the departments to develop an implementation plan for the one physical exam protocol. VA and DOD are currently piloting the single separation physical exam that meets DOD needs and VA’s rating criteria at 16 Benefits Delivery at Discharge sites. The departments are currently testing an advanced technological demonstration project that transfers images of paper personnel documents to VA from official military personnel file repositories in the Army, Navy, and Marine Corps, with Air Force integration into the program in process (including the DD214).
When fully operational, this system will send digital images of any personnel record to the VA within 48 hours of the request. Both the Health Executive Council (through the Deployment Health Work Group) and the VA/DOD Benefits Executive Council are currently developing and implementing processes to address these issues. DOD is already providing VA with daily information on personnel separating from active duty, which includes assignment history, location, and occupational duties through the DD214. DOD’s TRICARE On Line provides health care professionals with access to the individual service member’s pre- and post-deployment health assessments. The Defense Occupational and Environmental Health Readiness System, with CHCS II, is capturing data on occupational exposures and transferring it to the clinical data repository. When these systems are fully operational, appropriate information will be able to be shared via a two-way exchange with VA.
A critical component of the Department of Veterans Affairs' (VA) information technology program is its ongoing work with the Department of Defense (DOD) to achieve the ability to exchange patient health care data and create electronic records for use by veterans, active military personnel, and their health care providers. GAO testified before Congress last November that one-way sharing of data, from DOD to VA medical facilities, had been realized. At the Congress's request, GAO assessed, among other matters, VA's and DOD's progress since that time toward defining a detailed strategy for and developing the capability of a two-way exchange of patient health information.

Since November, VA and DOD have made little progress in determining their approach for achieving the two-way exchange of patient health data. Department officials recognize the importance of an architecture to articulate how they will electronically interface their health systems, but continue to rely on a nonspecific, high-level strategy--in place since September 2002--to guide their development and implementation of this capability. VA officials stated that an initiative begun this month to satisfy a mandate of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 will be used to better define the electronic interface needed to exchange patient health data. However, this project is at an early stage, and the departments have not yet fully identified the approach or requirements for this undertaking. Given these uncertainties, there is little evidence of how this project will contribute to defining a specific architecture and technological solution for achieving the two-way health data exchange. These uncertainties are further complicated by the absence of sound project management to guide the departments' actions.
At present, neither department has the authority to make final decisions binding on the other, and day-to-day oversight of the joint initiative to develop an electronic interface is limited. Progress toward defining data standards continues, but delays have occurred in the development and deployment of the agencies' individual health information systems.
Background

Federal grants are forms of financial assistance from the government to a recipient for a particular public purpose that is authorized by law. Federal grant funds flow to the nonprofit sector in various ways, as shown in figure 1. Some grant funds are awarded directly to nonprofits, while others are first awarded to states, local governments, or other entities and then awarded to nonprofit service providers. Federal laws, policies, regulations, and guidance associated with federal grants apply regardless of how federal grant funding reaches the final recipients. Some federal grant programs contain statutory limits on administrative cost reimbursement for state and local government grantees. Additionally, some federal grant programs predetermine a limit for subgrantees (see table 1 for the statutory limits on the six grants we reviewed). OMB circulars A-87 and A-122 provide guidance to state and local governments and nonprofits on classifying costs as direct or indirect and direct state and local governments to employ the necessary management techniques in order to efficiently and effectively administer federal awards. OMB circulars A-87 and A-122 generally define direct and indirect costs as follows:

Direct costs are those that can be identified specifically with a particular final cost objective, that is, a particular award, project, service, or other direct activity of an organization.

Indirect costs are those that have been incurred for common or joint objectives and are not readily assignable to the cost objectives specifically benefited, without effort disproportionate to the results received.

A cost may not be allocated to an award as an indirect cost if any other cost incurred for the same purpose, in like circumstances, has been assigned to an award as a direct cost. Direct costs of minor amounts may be treated as indirect costs under certain conditions.
Recognizing that nonprofit organizations have diverse characteristics and accounting practices, the guidance states that it is not possible to specify the types of cost that may be classified as indirect costs in all situations. Whether a nonprofit classifies costs as direct or indirect is often a result of the organization’s ability to link costs to a particular program. OMB Circular A-122 guidance to nonprofits further divides indirect costs into two broad categories: facilities and administration. Facilities costs generally include costs related to the “depreciation and use allowances on buildings and equipment, as well as operations and maintenance expenses.” Administration costs generally include “general administration and expenses such as the director’s office, accounting, personnel, library services and all other expenses not listed under facilities.” OMB Circular A-133 provides general guidance on the roles and responsibilities of the federal awarding agencies and primary recipients of government funds regarding audit requirements of grantees. It sets forth standards for obtaining consistency and uniformity among federal agencies for the audit of states, local governments, and nonprofit organizations expending federal awards totaling $500,000 or more annually. Among other responsibilities, it gives federal awarding agencies the responsibility to advise recipients of requirements imposed on them by federal laws, regulations, and the provisions of contracts or grants and primary recipients the responsibility to identify grant awards; advise subrecipients of requirements imposed on them by federal laws, regulations, and the provisions of contracts or grant agreements as well as any supplemental requirements; and monitor the implementation of the grants. Awarding agencies and all recipients and subrecipients of federal grant funds must comply with certain data collection, record-keeping, and reporting requirements to help monitor grant implementation. 
These requirements differ across grants and are determined by the federal awarding agency, federal law, or both. State and local governments sometimes impose additional requirements on their subgrantees.

Inconsistencies in Terminology Lead to Challenges in Cost Classification, Which Can Result in Uneven Treatment of Costs

Understanding OMB guidance regarding the relationship between indirect and administrative costs is particularly challenging for state and local governments and nonprofits. According to OMB officials, the terms “direct” and “indirect” can be thought of as ways to classify costs; that is, they are “cost buckets.” In contrast, the term “administrative” refers to a cost function or activity—such as accounting, procurement, personnel, or budgeting. On the one hand, OMB Circular A-122 cost guidance to nonprofits indicates that administrative costs are usually but not always indirect costs; on the other hand, that same guidance lists “administration” costs as one of two categories of indirect costs. Further, OMB Circular A-87 cost guidance to state and local governments uses the terms indirect and administrative interchangeably in certain places. Taken together, the OMB guidance can be viewed as ambiguous. Guidance is most useful when it is clear and well understood. OMB officials told us that given the uncertainty and confusion with respect to these definitions and their application, it may be helpful to bring federal, state, and local officials together with representatives from nonprofit organizations to discuss these issues. Doing so, they acknowledge, could help clarify and improve understanding of how indirect costs should be treated. Classifying similar costs differently can make it difficult to determine how much money grantees receive for cost activities typically thought of as indirect, and at what rate. For example, the ESG program provides states or local government grantees up to 5 percent for administrative costs.
As the primary recipients of ESG funds, states are required to share at least a portion of this funding with local government subgrantees; however, there is no such requirement for cost sharing with nonprofits. Thus, on its face it may appear as if ESG provides no administrative cost reimbursement for nonprofits. However, the ESG statute allows some emergency shelter costs, such as rent and utilities, which are typically thought of as indirect costs, to be claimed as a direct cost under ESG’s “operating costs” activity—one of five direct program activities for which subgrantees may be reimbursed. In another example, the statute for the HOPWA grant program limits administrative cost reimbursement for project sponsors to 7 percent. Because administrative costs can either be charged as direct or indirect costs depending on the circumstance, and because HOPWA has no explicit limit on indirect costs, it is difficult to accurately characterize cost reimbursement for activities commonly thought of as indirect. When grants and grantees classify similar costs differently it can also result in the same cost activity being covered for some nonprofits but not others, and can increase the complexity of administering the grants. Nonprofit association officials told us that because grant award packages and federal guidance contain unclear or conflicting information on how to allocate costs, nonprofits sometimes unknowingly exclude eligible expenses in their calculation of administrative costs and, as a result, limit their own reimbursement potential. Further, some of the nonprofit and association officials we spoke with said that because grant programs have different definitions of indirect costs, they must take care to reconcile their own accounting systems with the requirements of each grant they receive to ensure that they properly account for the funds. 
They also said that this is time consuming and resource intensive, and that more consistent classifications and treatment across federal grants would simplify grant administration and may reduce costs. We and others have previously reported that federal grant programs sometimes classify similar or identical costs differently. In 2006, we reviewed seven programs from HHS and the Departments of Agriculture and Labor, and found that the legal definitions of and the federal funding rules for administrative costs varied even though many of the same activities were performed to administer the programs. The report noted that the statutes and regulations that define administrative costs for these programs differ in part because the programs evolved separately over time and have different missions, priorities, services, and clients. Further, the report noted that a number of state budget officials said that varying definitions of administrative costs create challenges for them. For example, one said that it can be difficult to develop coding for accounting and budgeting that can be used across programs and, as a result, it can be difficult to monitor costs accurately; another shared this concern and said that consistent definitions of and caps for administrative costs would make it easier to allocate costs across programs and, therefore, might reduce costs. This concern is not new; in a 2002 report on tax-exempt organizations, we reported that different approaches for charging expenses, as well as different allocation methods, can result in charities with similar types of expenses allocating them differently. Even though the terms indirect costs and administrative costs are not synonymous, we found that some nonprofit, state, and local government officials we spoke with use them interchangeably. A national nonprofit association official made a similar observation, noting that terminology varies throughout the nonprofit sector.
State and local government and nonprofit officials we spoke with also reported using other terms, such as overhead, general operating expenses, or management and general expenses, synonymously with indirect and administrative costs. A 2007 report on nonprofits’ overhead costs also discussed widespread confusion about indirect costs throughout the sector, and identified “variations in definitions of overhead and the overhead cost rate” as areas of concern among nonprofit researchers and practitioners. The report also concluded that there is a substantial difference between indirect costs and administrative costs, noting that not all indirect costs are administrative, such as the costs of a telemarketing campaign, which is a programmatic or fundraising function. The report also said that there are administrative costs that are direct costs, such as those for the computers and office supplies used by the finance department. Inconsistencies in guidance in grant award packages and across federal programs add to the challenge of administering federal grants. For example, officials from a Louisiana nonprofit said that one federal contract may allow them to charge rent as a direct cost, while another federal contract states that it is to be charged as an indirect cost. These officials told us that they should be able to “call an apple, an apple….every time.” In another example, HUD’s supplemental guidance for HOPWA recipients advises that in reviewing administrative and indirect costs, recipients should keep in mind that “all administrative costs are indirect costs, but not all indirect costs are administrative costs.” Conversely, in describing HHS’s PSSF grant and the Family Violence Prevention Services/Battered Women’s Shelter grant, ACF officials explained that administrative costs can be either direct or indirect costs. 
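To make the statutory caps discussed in this section concrete, the arithmetic can be sketched as follows. This is a hypothetical illustration only: the 5 percent (ESG) and 7 percent (HOPWA project sponsor) cap rates come from the statutes described above, but the award amounts and the function name are invented for the example.

```python
# Hypothetical illustration of statutory administrative cost caps.
# Cap rates (ESG: 5 percent; HOPWA project sponsors: 7 percent) are from
# the statutes discussed above; the award amounts are invented.

def max_admin_reimbursement(award: float, cap_rate: float) -> float:
    """Maximum administrative cost reimbursement allowed under a statutory cap."""
    return award * cap_rate

esg_award = 200_000.0    # hypothetical ESG grant to a state grantee
hopwa_award = 150_000.0  # hypothetical HOPWA grant to a project sponsor

esg_admin = max_admin_reimbursement(esg_award, 0.05)     # at most 5 percent
hopwa_admin = max_admin_reimbursement(hopwa_award, 0.07)  # at most 7 percent
```

Note that, as the text explains, a cap on "administrative" costs does not by itself bound what a grantee recovers for activities commonly thought of as indirect, since some of those activities (such as ESG "operating costs") may be claimable as direct program costs.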
Nonprofits’ Reimbursement for Indirect Costs Largely Depends on Federal, State, and Local Government Practices

For the majority of grants in our review, we found that state and local government grantees are allowed to decide whether, and at what rate, they reimburse nonprofit subgrantees for their administrative or indirect costs. In all three states we reviewed, we found differences in the rates at which state and local governments reimburse nonprofits for indirect costs. These differences, including whether nonprofits are reimbursed at all, largely depend on the policies and practices of the state and local governments that award federal funds to nonprofits. State and local governments may apply the same indirect cost limit to all subgrantees or may choose to apply different indirect cost limits to different subgrantees. For example, for all subgrantees who receive funds under the Block Grants for Community Mental Health Services and the Prevention and Treatment of Substance Abuse, the Louisiana Department of Health and Hospitals limits indirect cost reimbursement to 12 percent. Other state and local government agencies, such as the Wisconsin Department of Health Services, work with individual subgrantees to determine an indirect cost reimbursement rate. Officials from the department told us that they often assist subgrantees in determining how to classify costs; this helps to determine what costs to reimburse as indirect, and at what rate. The amount of funding passed through to nonprofits can also be affected by the amount of funding a state or local government uses for its own administrative costs. For example, according to a Dane County, Wisconsin official, Dane County receives 10 percent for administrative and indirect costs for the PSSF grant from the state of Wisconsin and passes the entire amount on to its nonprofit service providers; this increases the amount of funds available to nonprofits.
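The effect of a grantee's retention decision on the funds reaching nonprofits can be sketched with simple arithmetic. In this hypothetical illustration, the 10 percent figure reflects the administrative share described above; the award amount and function name are invented for the example.

```python
# Hypothetical sketch of how a grantee's decision to retain or pass
# through its administrative share affects funds reaching nonprofits.
# The 10 percent share is from the PSSF discussion above; the award
# amount is invented.

def funds_to_nonprofits(award: float, retained_rate: float) -> float:
    """Grant funds remaining for nonprofit service providers after the
    grantee retains a share for its own administrative costs."""
    return award * (1.0 - retained_rate)

pssf_award = 1_000_000.0  # hypothetical award

# A grantee that retains the full 10 percent for its own administration:
retained = funds_to_nonprofits(pssf_award, 0.10)

# A grantee that, like Dane County in the example above, passes the
# entire administrative share on to its nonprofit service providers:
passed_through = funds_to_nonprofits(pssf_award, 0.0)
```

The difference between the two outcomes is exactly the administrative share the grantee keeps, which is why retention practices vary so widely in their effect on nonprofits.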
However, some state and local governments we spoke with interpret statutory limitations on their own administrative costs as necessarily limiting the administrative and indirect costs allowable by the grant for all subgrantees. Although states often enjoy wide latitude in determining the administrative and indirect reimbursement rates of their subgrantees, applying a more specific interpretation of federal statute potentially limits the amount of funds available to nonprofits. Variations in cost coverage exist not only among different grants across different states, but also within the same grant across different states. For example, for the PSSF grant, states may retain up to 10 percent of the grant award to pay for their own costs to administer this grant, or they may pass this amount through to the nonprofit service providers to which they award PSSF grants. In addition, states may determine the allowable level of indirect cost reimbursement for the nonprofit service providers to whom they award PSSF grants. As shown in figure 2, three nonprofits that receive funding under the PSSF grant in Louisiana, Maryland, and Wisconsin are reimbursed for their indirect costs, administrative costs, or both at different rates (9.4 percent, 0 percent, and 14 percent, respectively). The differences among reimbursement rates for these nonprofits may in part be due to the presence or absence of an indirect cost rate agreement. Primary recipients of federal funds are required to have a federal indirect cost rate agreement in order to be reimbursed for indirect costs. There is no such requirement for recipients that receive federal funds that first flow through entities such as state and local governments. Five of the 17 nonprofits in our sample have federal agreements. However, state and local governments are not required to consider or honor federal indirect cost rate agreements when awarding federal funds. 
Some state and local governments negotiate a similar indirect cost rate agreement directly with subrecipients; others do not.

When Nonprofits Report Differences between Indirect Costs Incurred and Reimbursed, They Take a Variety of Steps to Bridge Gaps

Nonprofits Fund Indirect Costs from a Variety of Sources

To help cover their indirect costs, nonprofits reported using funding from a variety of sources in addition to federal funds, such as capacity-building grants, private donations, fundraising, endowment funds, and business income generated from services provided. For example, some of the nonprofits we spoke with operate fee-for-service furniture restoration, repair shop, and batterers’ treatment programs. A Wisconsin nonprofit official said that the United Way recognizes the challenges nonprofits face in receiving reimbursement for indirect costs and provides unrestricted funding to help cover them. Other nonprofit officials we spoke with, however, reported that these grants can be difficult to secure. A November 2009 CRS report noted, perhaps not surprisingly, that charitable giving declined during the recent recession. For some nonprofits the decline comes at a time when their services may be in greater demand, which can further strain resources. Nonprofits also rely on in-kind donations and volunteer labor to help cover costs. For example, nonprofits reported receiving food donations from local restaurants, furniture donations, and facilities repairs by nonprofit board members. One Louisiana nonprofit official said that in-kind and volunteer labor is essential for her organization’s ability to provide services, and it received $160,000 in volunteer labor in 2008. However, nonprofit officials also noted that while the use of volunteer labor is valued, it is not “free,” as volunteers may require additional supervision and training.
Nonprofits Take Steps to Bridge Reported Funding Gaps

Fifteen of the 17 nonprofits in our sample reported that funding received for indirect costs does not cover their actual indirect costs. A nonprofit official whose organization receives a HUD grant from the state of Wisconsin said that his organization is authorized to claim 5 percent for administrative costs associated with delivering supportive housing program services, but that amount does not cover the costs of administering the program. In another example, recipients of the Family Violence Prevention Services/Grants for Battered Women’s Shelters grants in all three states reported receiving no indirect cost reimbursement, but their overall organizational indirect costs ranged from about 8 to 11 percent. Similarly, nonprofit subrecipients of ESG funding across all three states reported no indirect cost reimbursement from state and local governments. The overall organizational indirect costs for these nonprofits ranged from 1.8 to 20 percent. These self-reported levels are generally in line with an Urban Institute study that analyzed the 1999 tax returns of approximately 160,000 health-related and human services nonprofits and reported average management and general expenses of 17 and 16 percent, respectively. Although nonprofits’ fiscal challenges are not limited to indirect cost funding, as noted above, funding sources that can be used to cover indirect costs can be difficult to come by. It is therefore particularly important to understand the steps nonprofits take when they report gaps between indirect costs incurred and reimbursed. We found that nonprofits often respond by reducing service levels, compromising infrastructure and staff investments, or both, and that these cost-cutting measures can limit nonprofits’ ability to build a financial safety net.
Reduced Service Levels

Several nonprofits we spoke with said that, at the time of our interviews, they had reduced the size of their programs and the populations served as a result of gaps in funding for direct and indirect costs. For example, a Louisiana nonprofit official said that his organization scaled back its housing and shelter services 10 to 15 percent even though its mission is to serve all at-risk youth in need of these services. As a result, he said, the nonprofit now has a waiting list for its residential services. A Maryland nonprofit official told us that the organization’s psychiatric rehabilitation program was one of the largest in the state. However, according to this official, the level of reimbursement his organization received from government sources led the nonprofit to reduce the program’s size in order to remain viable. A 2008 study that examined several nonprofits also discussed negative effects on nonprofits’ capacity to provide services due to funding gaps, noting that as a result of funding gaps in the short term, staff members struggle to provide more services with fewer resources. Nonprofits we spoke with also reported reducing the range of services they offered. An official from a Maryland nonprofit whose mission includes providing housing, employment services, and job referrals said that the organization once provided a computer lab with a part-time computer instructor for its clients as part of its General Education Development services. The official said that, in an effort to more closely align costs incurred with costs reimbursed, the nonprofit eliminated the instructor position because it was not directly related to the organization’s primary mission of providing supportive housing and housing placement. Officials from a Maryland drug and alcohol rehabilitation nonprofit told us that they discontinued a vocational education program for similar reasons.
Compromised Infrastructure Investments

Many nonprofits compromise vital facilities maintenance and “back-office” support functions, such as information technology systems, to avoid reducing their services. Almost half of the nonprofits we spoke with reported making such trade-offs. For example, a Louisiana nonprofit said that it does not have an updated security system that adequately protects the victims of domestic violence that it serves, which directly affects the nonprofit’s ability to fulfill its mission of providing a safe space for victims of domestic violence. We also observed ceilings in disrepair when we toured this nonprofit’s facility. An official from a Maryland nonprofit said that her staff makes personal sacrifices to sustain services, such as working in dark offices to hold down electricity costs or bringing supplies from home. Wisconsin nonprofit officials reported that their medical and dental appointment systems are not integrated, inhibiting their ability to better serve their patients. The experiences of these nonprofits are consistent with other studies’ findings that trade-offs in facility maintenance can hinder nonprofits’ ability to effectively carry out their missions in the long term. A 2007 study on the financial health of human service providers in Massachusetts said that providers may defer routine costs, such as facility maintenance and other critical infrastructure investments, when they lack indirect cost funding. A 2008 study suggested that funders have unrealistic expectations for nonprofits’ indirect costs, which can lead nonprofits to underinvest in the infrastructure needed to maintain or improve standards for service delivery.
A 2008 study on the administrative management capacity of 16 select nonprofit programs noted that many organizations cite a lack of resources for information technology infrastructure needs and that some organizations in the study reported that they cannot meet technology needs beyond a basic level of functionality. The study also reported that these organizations lack sufficient strategic and long-term planning for future information technology needs and for equipment and software updates.

Compromised Staff Investments

Nonprofits often report that they forgo staff investments or reduce or freeze salaries to avoid reducing services. Officials from 10 of the 17 nonprofits we spoke with said that, at the time of our interviews, they had delayed filling vacant positions or eliminated positions to cover costs. For example, officials from a Maryland nonprofit eliminated a development position and trained a receptionist to assume other responsibilities. As a result, the organization lacked a dedicated receptionist during business hours, which made it more challenging to respond to clients’ needs. A Wisconsin nonprofit said that it has not hired a medical coder, a position that would allow the doctors in the organization to devote more time to seeing patients instead of to administrative paperwork. Another Wisconsin nonprofit official reported instituting a voluntary leave-without-pay program during the summer months to reduce salary costs. Another Maryland nonprofit official explained that because she cannot attract qualified staff at the salary she is able to offer, she usually hires people with very little experience who require a significant amount of training and supervision. Similarly, officials from a third Maryland nonprofit said that they are unable to provide salary increases or cost-of-living adjustments for their staff and have had to cut benefits.
Other studies have shown that nonprofits may also leave positions vacant to realize savings, which can have adverse quality implications. A 2008 study found that program staff at the 16 nonprofits in the study often take on administrative tasks, such as recruitment processes and site maintenance, to bridge gaps in administrative infrastructure and support; as a result, program staff devote less time to activities more directly tied to service delivery and quality programming. A 2004 study on nonprofit overhead costs reported that limited or no staff for administrative functions limited nonprofits’ ability to manage and monitor finance, development, and other important functions. A 2007 study noted that staff salaries and benefits of the human service providers in Massachusetts do not appear to keep pace with increases in the overall cost of living. It further noted that the relatively low wages can limit the qualifications and level of experience of many direct care workers and can lead to rapid staff turnover. A 2004 study on nonprofit overhead costs discussed how challenges in recruiting and retaining qualified staff compromised nonprofits’ effectiveness, noting that key positions are filled by individuals with little relevant experience and training, and once staff gain relevant experience, they seek employment at organizations with higher salaries, leading to high turnover for nonprofits.

Limited Ability to Build a Financial Safety Net

Nonprofits’ strained resources also limit their ability to build financial reserves for unanticipated expenses. Officials at a Louisiana nonprofit said that their ability to build a financial safety net is limited because they struggle to cover their costs and do not have money left over to save. A nonprofit association official said that nonprofits sometimes cannot set aside sufficient cash reserves to cover unforeseen costs, such as a broken boiler.
To address unexpected costs, nonprofits often draw from their program funds where possible, which can lead to a decline in program quality. Other studies have also reported on financial sustainability challenges for nonprofits. Nonprofit financial management experts have recommended that nonprofits maintain cash reserves sufficient to fund 3 months of operating expenses. A 2009 study on the operating reserves of over 2,000 Washington, D.C.-area nonprofits reported that in 2006, 57 percent of the operating public charities in the Greater Washington area had operating reserves of less than 3 months; 28 percent of these organizations reported no operating reserves. A 2008 study on the administrative management capacity of select nonprofit programs reported that half of the nonprofits in the study do not maintain the recommended level of reserves. Finally, a 2007 study reported that one-third of the more than 600 Massachusetts providers in its sample had less than 15 days’ cash at the ends of their fiscal years; another quarter had only 3 to 4 weeks of cash at the ends of their fiscal years. Given recent economic conditions, the need for sufficient cash reserves may be particularly important.

Untimely Reimbursements and High Grant Administration Costs Exacerbate Nonprofits’ Reported Funding Gaps

A November 2009 CRS report noted that (1) in addition to funding cuts, states apparently have been delaying payments for services they have contracted with nonprofits to provide; and (2) it appears that governments, particularly state governments, may be contributing to the financial difficulties of nonprofit organizations. During the course of our work, we spoke with nonprofits that made similar observations. Factors such as untimely reimbursements and high grant administration costs can place stress on the nonprofit sector, diminishing its ability to continue to provide services to vulnerable populations.
OMB officials acknowledged that building nonprofits’ management capacity may help nonprofits better contend with these issues and continue to meet their missions.

Untimely Reimbursements

Untimely receipt of government grant and contract payments contributes to financial strain on nonprofits. Six of the 17 nonprofits in our study reported that their reimbursements from federal, state, and local governments are delayed at times, which can cause cash flow problems and undermine their sustainability. For example, an official from a Maryland nonprofit said that her organization was awarded an HHS grant from the state of Maryland in October 2008 but did not actually receive the funding until May 2009. Maryland nonprofit officials said they sometimes experience 15- to 30-day delays in reimbursement from the state of Maryland. One Maryland nonprofit official said delays such as these create a “cash-flow nightmare” for her organization. The nonprofit has a line of credit it can draw on to tide it over until it receives grant payments, but this increases costs because the organization incurs interest and fees on the line of credit, which are not reimbursed. Three of the nonprofits in our study said that smaller nonprofits without cash reserves or lines of credit rely on timely payments to sustain their operations. They said that even small delays put these nonprofits at risk of failure. Some state and nonprofit association officials we spoke with, however, said that reimbursement delays also occur when nonprofit staff are so busy operating programs that they do not keep up with filing invoices in a timely manner; as a result, when nonprofits most need the money, it is not available. We and others have also cited challenges nonprofits face as a result of delayed reimbursements from federal, state, or local governments.
In 2006 we reported that recipients of selected federal grants said that delayed awards create a significant burden on them and limit their ability to plan for and efficiently execute grant programs. Grant recipients noted that they often received award notifications significantly later than they had anticipated, sometimes months after the expected award date provided in the opportunity announcement. These uncertainties and delays caused significant problems in planning for and executing grant projects. Grant recipients in this study suggested that agencies should award grants in a more timely way or provide more precise information on when an award could be expected. A 2007 study on the financial health of human service providers in Massachusetts noted that when an organization with limited cash experiences unexpected delays in the receipt of income, a crisis situation can occur. A 2002 study that reviewed prior research on this topic noted that when government agencies are delayed in approving contracts or grant payments, recipient organizations often experience cash flow problems. Consistent with comments from the nonprofits we interviewed, this report suggested that payment delays are especially difficult for smaller and newer organizations because they do not have established mechanisms to withstand delayed or unpredictable funding.

Costs of Administering Grants

The high costs of grant administration sometimes discourage nonprofits from applying for grant funds. Three nonprofits we interviewed reported that for this reason they do not seek additional government grants or may not reapply for grants they currently receive. For example, a Maryland nonprofit official stated that her organization is eligible for a Recovery Act grant program that provides services to youth, but she is hesitant to take on the project because the grant’s administrative reimbursement rate is 3 percent, which would not cover the cost of administering the grant.
Over half of the nonprofits in our study said that administrative reporting requirements make it challenging to administer the grants they receive. Officials from a Louisiana nonprofit told us that complying with reporting requirements for the more than 20 federal grants they manage requires a significant amount of staff resources. A Maryland nonprofit official explained that some of the nonprofit’s federal grants are “big, complex, and complicated” to acquire and manage because the organization does not have a dedicated grants management team, and establishing one would redirect resources away from other areas. Likewise, officials from a Wisconsin nonprofit said that complying with the county’s challenging bureaucratic process requires a significant amount of time that could otherwise be spent on mission-related activities, and that the organization regularly loses money as a result of these requirements. We and others have previously reported on the challenges facing nonprofits in administering grants. In July 2007, we testified that practitioners and researchers alike acknowledged the difficulty that nonprofit organizations, particularly smaller entities, have in responding to the administrative and reporting requirements of their diverse funders. We said that although funders need accountability, the diverse requirements of different funders make reporting a time-consuming and resource-intensive task. For example, meeting the increasing expectations that nonprofits measure performance can be difficult, given the size of grants and the evaluation capabilities of staff. One researcher said that performance evaluation is one of the biggest challenges nonprofits face.
A 2002 study, which included an analysis of Internal Revenue Service (IRS) Forms 990 from 1,172 nonprofit organizations from 1985 to 1995, found that for some nonprofits, an increase in government funding is positively correlated with an increase in the share of administrative expenses the following year, which could be the result of the costs associated with obtaining contracts and the challenges of meeting accountability and reporting requirements. Similarly, a 2004 study of the overhead costs of 9 nonprofit organizations reported that the nonprofits with the weakest organizational infrastructures received half or more of their revenue from public sector sources, and that the public sector practice of providing little support for overhead costs was directly associated with the organizational weaknesses at these nonprofits.

Conclusions

Federal, state, and local governments rely on nonprofit organizations as key partners in implementing programs and providing services to the public, such as health care, human services, and housing-related services. Nonprofits’ ability to determine and manage their indirect costs is affected by inconsistencies in terminology and guidance across federal programs on how to classify costs. Further, varying reimbursement practices by state and local governments that award federal funds affect the rate at which indirect costs are covered. Absent a clear understanding among federal, state, local, and nonprofit officials about how to interpret OMB’s indirect cost guidance and consistently classify activities typically thought of as indirect costs, nonprofits will likely continue to struggle to accurately and consistently report their indirect and administrative costs of doing business, and a clear picture of the true gap between actual and reimbursed indirect costs will remain elusive.
As the federal government increasingly relies on the nonprofit sector to provide services, it is important to better understand the implications of reported funding gaps, such as compromised quality of important administrative functions, including information technology, human resources, legal, and accounting operations. Such gaps further limit nonprofits’ capacity to correctly determine how indirect costs should be treated. Collectively, these challenges potentially limit the sector’s ability to effectively partner with the federal government, can lead to nonprofits providing fewer or lower-quality federal services, and, over the long term, could risk the viability of the sector. Given OMB’s role in federal grants management, OMB is in a unique position to convene stakeholders to review these issues.

Recommendation for Executive Action

GAO recommends that the Director of OMB bring together federal, state, and local governments and nonprofit representatives to propose ways to clarify and improve understanding of how indirect costs should be treated, particularly for grants passed through state and local governments to nonprofits, by clarifying the definitions of indirect costs and administrative costs and their relationship to each other and by considering ways to help nonprofits improve their understanding and ability to better capture, categorize, report, and recover indirect and administrative costs.

Agency Comments and Our Evaluation

We provided a draft of this report to OMB and the Departments of Health and Human Services (HHS) and Housing and Urban Development (HUD). OMB generally agreed with our findings, conclusions, and recommendations. OMB also provided technical comments, which we incorporated, and suggested clarifying language for the recommendation, which we agreed with and incorporated. HHS and HUD did not provide formal comments, but made technical comments by e-mail, which we incorporated.
We will send copies of this report to the Director of OMB and the Secretaries of Health and Human Services and Housing and Urban Development. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to provide information for selected federal grant programs and nonprofits on (1) how indirect cost terminology and classification vary, (2) how indirect costs are reimbursed, and (3) if gaps occur between indirect costs incurred and reimbursed, steps nonprofits take to bridge the gaps. To address our objectives and obtain information on federal grants initially awarded to state and local governments and passed through to nonprofit service providers, and on the impact of indirect cost funding on nonprofits, we used several approaches. These included selecting a nonprobability sample of federal grants, states, and nonprofits to serve as case studies and conducting a literature review to analyze published work related to this topic. The scope of the third objective was broader, encompassing the perspectives of nonprofits that receive any federal funding, direct or pass-through. We also interviewed nonprofit association officials. First, we selected six federal grant programs—four from the Department of Health and Human Services (HHS) and two from the Department of Housing and Urban Development (HUD)—from among the 26 grant-making federal agencies that offer over 1,000 grant programs annually. We selected HHS and HUD as our two primary agencies of focus because of their familiarity and historical relationship with nonprofit organizations.
HHS and HUD grants address many of the National Taxonomy of Exempt Entities (NTEE) classifications related to social and housing services. The NTEE classification system for nonprofits was devised by the Urban Institute’s National Center for Charitable Statistics (NCCS), which is a national clearinghouse of data on the nonprofit sector in the United States. NTEE classifications are widely referenced by the Internal Revenue Service and nonprofit researchers and practitioners. HUD and HHS grants address NTEE categories such as:

Human Services
Housing and Shelter
Agriculture, Food, Nutrition
Community Improvement, Capacity Building
Youth Development
Health Care
Mental Health/Crisis Intervention
Civil Rights, Social Action, Advocacy

As shown in table 2, the six grants selected are designed to fulfill missions consistent with most of the NTEE categories listed above. Second, we selected three states for our case study—Louisiana, Maryland, and Wisconsin—as well as local governments within those three states, as appropriate. As part of our criteria for selecting states, we considered the following:

Levels of HHS and HUD funding: We included states that receive varying levels of HHS and HUD funding to observe how indirect cost funding needs may be related to the amount of grant funding received by a state.

Population: We included states with different population sizes to allow us to examine potential implications for states that need to provide services to larger numbers of persons.

Geographic dispersion: We included states that were geographically dispersed to allow for regional representation across the country and diversity with respect to the population receiving services; the economic climate of the area; and other regional, cultural, and demographic characteristics.

Third, we selected 17 501(c)(3) nonprofit organizations from Louisiana, Maryland, and Wisconsin that receive at least one of the six grants we selected.
501(c)(3) organizations are public charities that are eligible to receive federal funding to support their missions of providing for the public benefit. The nonprofits we selected had varying missions and represented a wide range of operating budgets, from less than $1 million to more than $25 million. Once we selected our case study grants, states, and nonprofits, we reviewed Office of Management and Budget (OMB), HHS, and HUD documents, guidance, and policies governing the treatment of indirect costs, and interviewed budget and program officials at the three agencies. Further, we reviewed documents, guidance, and policies governing the treatment of indirect costs from the selected states, local governments, and nonprofits. We also interviewed budget and program officials from state and local government entities as well as from nonprofit organizations. To further corroborate the information obtained from our case studies, we reviewed existing research related to nonprofits’ indirect costs and overall financial health. We used several search strategies to identify existing studies. Through snowball sampling techniques, we identified research and received study referrals from numerous nonprofit researchers and other nonprofit groups. We conducted searches of several automated databases, including Checkpoint, the Government Printing Office’s Catalog, ProQuest, Lexis Nexis, Academic OneFile, and FirstSearch. We also searched the OMB website, Congressional Research Service website, and the Federal Audit Clearinghouse. We searched on various combinations of the following terms: nonprofit, indirect cost, administrative cost, cost, overhead funding, nonprofit funding, overhead, administrative, pass through, grant, grantee, federal, fund, gap, and trade-offs. Finally, search results were limited to studies published after 1995. Through our referrals and literature searches, we identified eight studies and reports that were relevant to our work.
We reviewed the studies we included in our work to ensure that they were methodologically sound.

Appendix II: GAO Contact and Staff Acknowledgments

Acknowledgments

Jacqueline M. Nowicki (Assistant Director) and Sonya Phillips (Senior Analyst-in-Charge) managed this assignment. Carol Patey, Christine Hanson, Mary Koenen, and Barbara Lancaster made key contributions to various aspects of the work. Cindy Gilbert provided methodological assistance; Donna Miller developed the report’s graphics; Sabrina Streagle provided legal support; and Jessica Thomsen provided key assistance with message development and writing.
Nonprofits are key partners in delivering federal services yet reportedly often struggle to cover their indirect costs (costs not readily identifiable with particular programs or projects). This raises concerns about fiscal strain on the sector. To provide information on nonprofits' indirect cost reimbursement, especially when funding flows through entities such as state and local governments, GAO was asked to review, for selected grants and nonprofits, (1) how indirect cost terminology and classification vary, (2) how indirect costs are reimbursed, and (3) if gaps occur between indirect costs incurred and reimbursed, steps taken to bridge gaps. GAO selected six Departments of Health and Human Services and Housing and Urban Development grants and 17 nonprofits in Louisiana, Maryland, and Wisconsin. GAO selected these agencies for their historical relationship with nonprofits. GAO reviewed policies and documents governing indirect costs and interviewed relevant officials. GAO also reviewed research on nonprofits' indirect costs. Depending on the grant program, nonprofits may be reimbursed for indirect costs (generally costs such as rent or utilities), administrative costs (generally cost activities such as accounting or personnel), both, or neither. OMB officials said costs can be classified as either indirect or direct, and administrative cost activities are usually, but not always, classified as indirect costs. However, inconsistencies in the use and meaning of the terms indirect and administrative, and their relationship to each other, have made it difficult for state and local governments and nonprofits to classify costs consistently. This has resulted in varying interpretations of what activity costs are indirect versus administrative.
As OMB guidance on cost principles for nonprofits recognizes (2 CFR Part 230), because nonprofit organizations have diverse characteristics and accounting practices, it is not possible to specify the types of costs that may be classified as indirect in all situations. This increases the challenges of administering federal grants and, in some cases, makes it difficult for recipients to determine which activities are eligible for indirect cost reimbursement under a particular federal grant and which are not. GAO found differences in the rates at which state and local governments reimburse nonprofits for indirect costs. These differences, including whether nonprofits are reimbursed at all, largely depend on the policies and practices of the state and local governments that award federal funds to nonprofits. Federal grants often provide wide latitude in setting cost reimbursement policies and practices, and some state and local governments do not reimburse these costs at all. Those that do can often choose the reimbursement rate. As a result, GAO found that variations in indirect cost reimbursement exist not only among different grants, but also within the same grant across different states. GAO found that nonprofits fund indirect costs with a variety of federal and nonfederal funding sources, and that when indirect cost reimbursement is less than the amount of indirect costs nonprofits determine they have incurred, most nonprofits GAO interviewed take steps to bridge the gap. They may reduce the population served or the scope of services offered, and may forgo or delay physical infrastructure and technology improvements and defer staffing needs. Because many nonprofits view cuts in clients served or services offered as unpalatable, they reported that they often compromise vital "back-office" functions, which over time can affect their ability to meet their missions.
Further, nonprofits' strained resources limit their ability to build a financial safety net, which can create a precarious financial situation for them. Absent a sufficient safety net, nonprofits that experience delays in receiving their federal funding may be inhibited in their ability to bridge funding gaps. When funding is delayed, some nonprofits said they either borrow funds on a line of credit or use cash reserves to provide services and pay bills until their grant awards are received. Collectively, these issues place stress on the nonprofit sector, diminishing its ability to continue to effectively partner with the federal government to provide services to vulnerable populations.
Background
The federal government uses direct loans and loan guarantees as tools to achieve numerous program objectives, such as assistance for housing, farming, education, small businesses, and foreign governments. Before the enactment of FCRA, credit programs—like most other programs—were recorded in budgetary accounts on a cash basis. This cash basis distorted the timing of when costs would actually be incurred and, thus, the comparability of credit program costs with other programs intended to achieve similar purposes, such as grants. For example, the cash-basis cost of a direct loan in a fiscal year was equal to the cash-basis cost of a grant. The long-term cost of a direct loan, however, may be much less than that of a grant because of loan repayments. Cash-basis budgetary recording also created a bias in favor of loan guarantees over direct loans. Loan guarantees appeared to be free because cash-basis recording did not recognize that some loan guarantees default. Conversely, direct loans appeared to be relatively costly because cash-basis recording did not recognize that many direct loans are repaid. FCRA changed the treatment of credit programs beginning with fiscal year 1992 so that their costs can be compared more accurately with each other and with the costs of other federal spending. Two key principles of credit reform are (1) the definition of cost (subsidy) in terms of the net present value of cash flows over the life of a loan and (2) the requirement that budget authority to cover the subsidy cost be provided in advance, before new direct loan obligations are incurred and new loan guarantee commitments are made. 
FCRA defines the subsidy cost of direct loans as the present value over the loan’s life of disbursements by the government (loan disbursements and other payments) minus estimated payments to the government (repayment of principal, payments of interest, and other payments) after adjusting for projected defaults, prepayments, fees, penalties, and other recoveries. It defines the subsidy cost of loan guarantees as the present value of cash flows from estimated payments by the government (for defaults and delinquencies, interest rate subsidies, and other payments) minus estimated payments to the government (for loan origination and other fees, penalties, and recoveries). According to FCRA, the net present value is calculated by discounting the cash flows at the average interest rate on marketable Treasury securities of similar maturity to the direct or guaranteed loan when the loans are disbursed. FCRA gave OMB oversight responsibility to ensure proper implementation of credit reform, including agency calculation of subsidy costs. To provide a consistent, common approach to calculate the present value of credit program costs, OMB developed the CSM, a computer software program that calculates a subsidy rate based on agency-generated estimates of cash flows to and from the government. The CSM also calculates the portions of the subsidy cost attributable to defaults, interest subsidies, fees, and other subsidy components. Thus, the CSM is basically a calculator. Agency-generated cash flows are entered into the CSM by means of an electronic spreadsheet. The CSM’s basic function is to calculate the net present value of these cash flows by discounting them to the year monies are disbursed and dividing the amount of subsidy by the present value of the amount of disbursement to obtain the subsidy percentage. Agency-generated cash flows are essential for determining subsidy costs. 
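The net present value computation that FCRA prescribes and the CSM performs can be sketched in a few lines of code. The sketch below is an illustration only, not OMB's actual CSM: it assumes a single Treasury discount rate, full disbursement in year 0, and hypothetical cash flow figures.

```python
# Simplified sketch of a FCRA-style subsidy calculation (illustrative,
# not the actual CSM): discount yearly cash flows to the disbursement
# year and express the net cost as a percentage of the amount disbursed.

def subsidy_rate(outflows, inflows, rate):
    """outflows/inflows: payments by/to the government, indexed by years
    since disbursement (year 0 = disbursement year); rate: a single
    Treasury discount rate (the CSM itself handles more detail)."""
    npv = sum((out - inn) / (1 + rate) ** t
              for t, (out, inn) in enumerate(zip(outflows, inflows)))
    disbursed = outflows[0]  # assumes the full loan disburses in year 0
    return 100 * npv / disbursed

# Hypothetical $1,000 direct loan repaid over 4 years at a below-market
# interest rate, discounted at a 6 percent Treasury rate.
outflows = [1000, 0, 0, 0, 0]        # loan disbursement by the government
inflows = [0, 270, 270, 270, 270]    # principal and interest received
print(round(subsidy_rate(outflows, inflows, 0.06), 2))  # subsidy percentage
```

As the sketch suggests, the subsidy rate is driven entirely by the cash flow estimates supplied to the calculation, which is why the quality of agency-generated data matters so much.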
Changing data on the cash flows, such as the expected rate of defaults, changes the subsidy calculation. Therefore, the CSM's subsidy calculation is only as reliable as the agency-generated cash flow data it uses. Although FCRA requires the use of present value to measure the subsidy costs of direct loans and loan guarantees for budgetary accounting and reporting, the law does not address financial statements and associated reporting. However, the Federal Accounting Standards Advisory Board (FASAB) concluded that significant benefits would result from integrating budgetary and financial accounting for federal credit programs. FASAB recommended that since budgetary resources for direct loan and loan guarantee subsidies are required to be reported on a net present value basis, financial reporting of loan activity should be on the same basis. Statement of Federal Financial Accounting Standards (SFFAS) No. 2, Accounting for Direct Loans and Loan Guarantees, was issued in 1993 to provide accounting standards for federal direct loans and loan guarantees that incorporate FCRA's subsidy calculation requirements. With the issuance of SFFAS No. 2, subsidy calculations became important not only for budgetary accounting and reporting purposes but also for financial reporting purposes.
Scope and Methodology
To determine whether the CSM complies with applicable laws and accounting standards, provides reliable results, and is maintained and operated under a system of adequate controls, we engaged the independent public accounting firm of Ernst & Young to perform an attestation in accordance with American Institute of Certified Public Accountants (AICPA) attestation standards on OMB management's assertions regarding the CSM's capabilities and limitations. A complete discussion of Ernst & Young's scope and methodology is included in its report in appendix I. 
To ensure that Ernst & Young complied with contract requirements and applicable auditing standards, we defined the scope of work to be completed by Ernst & Young; met periodically with Ernst & Young during the course of its evaluation and attended key meetings with them, including their initial meeting with OMB staff; reviewed Ernst & Young's work in accordance with generally accepted government auditing standards; performed a limited analysis of the CSM, its assumptions, and mechanics in order to better understand the results of Ernst & Young's work; analyzed the discounting formulas used by the CSM to discount the cash flows to the time of disbursement; and developed a limited number of test cash flow spreadsheets for use with the CSM to compare its results with those calculated manually and to gain an understanding of the proper use of the CSM. To identify supplemental audit steps that auditors should perform, we reviewed the CSM's User's Guide, OMB's assertions, and Ernst & Young's report. We also received advice and assistance from the Federal Audit Executive Council, credit agencies' inspectors general, representatives of the Governmentwide Credit Reform Subgroup, and OMB's credit reform staff. Our analysis of the Ernst & Young report and related work was conducted in Washington, D.C., from April 1997 through June 1997 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of OMB or his designated representative. OMB staff responsible for credit reform suggested some technical clarifications to our report, which we have incorporated where appropriate.
CSM Calculations Comply With Definition of Credit Subsidy
FCRA and SFFAS No. 2 contain several requirements about the budgetary and financial accounting treatment of direct loans and loan guarantees. However, the primary requirement pertinent to the calculation of the subsidy is the definition of cost. FCRA and SFFAS No. 
2 define the cost of a direct loan or loan guarantee as the net present value of estimated future cash flows at the time when the loan is disbursed. This calculation incorporates cash flows to and from the government, excluding administrative costs and any incidental effects on governmental receipts or outlays. The CSM's calculation of subsidies complies with this definition in that the CSM computes a subsidy cost by calculating the net present value of agency-generated cash flows of expected payments to and from the government by discounting these cash flows to the fiscal year when they are disbursed. For loans that disburse in more than 1 year, the CSM allocates the cash flows to each disbursement year and discounts the associated cash flows to the appropriate year of disbursement. FCRA and SFFAS No. 2 require that cash flows contain certain components, such as loan disbursements; repayments of principal; payments of interest; and other payments, including fees, penalties, and other recoveries. Spreadsheets that capture these cash flows are not part of the CSM, and responsibility for creating these spreadsheets lies with CSM users rather than with OMB. However, OMB designed the CSM to read spreadsheets that contain these components.
CSM Results May Differ From Theoretically Precise Net Present Value Calculations
OMB's assertions state that limitations exist in the CSM resulting from (1) the complexity of the FCRA requirement to calculate the net present value with respect to the time of disbursement, (2) efforts to simplify the CSM while at the same time making it flexible enough to fit all federal credit programs, (3) inherent limitations of discounting methods and financial models such as rounding definitions, and (4) the use of discounting formulas that differ slightly from standard methods. Because of these limitations, the subsidy percentage calculated by the CSM may differ from a "theoretically precise" result. 
For example, under some government loan programs, an agency receives principal and interest payments from borrowers on a daily basis throughout the year. Therefore, a theoretically precise subsidy calculation would require the daily discounting of these cash flows to the time of disbursement. OMB believes that the added precision of such daily discounting would be burdensome and yield little value. Consequently, OMB provides timing options that approximate the daily discounting of cash flows. Although neither OMB nor Ernst & Young has identified any instances where differences between the CSM subsidy cost calculation and the theoretically precise calculation were significant, the materiality of these differences cannot be precisely determined in general because the relevant factors, such as the applicable discount rate and the size and timing of future cash flows, will vary from case to case. For all but one of these limitations, however, our assessment is that CSM users and their auditors can take steps to minimize or eliminate their impact. Of the several limitations OMB included in its assertions, three affect the subsidy cost calculations. These are described in detail in OMB's assertions and Ernst & Young's report. The first limitation results from the CSM's use of nonstandard discounting equations to calculate the net present value of cash flows for partial periods, such as semiannual and quarterly. The CSM adjusts its discounting equations for partial periods, when timing options other than "simple annual" are used, by dividing the discount rate by a factor, which is determined by the timing of cash flows and the periodicity of discounting. However, such partial-period adjustments should be made exponentially to conform with standard discounting conventions. 
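The difference between the two partial-period conventions is small but systematic, as a minimal numeric illustration shows (a 6 percent annual rate and a $100 payment are assumed for illustration; the CSM's exact formulas are embedded in its source code):

```python
# Semiannual discounting of $100 received in six months at a 6 percent
# annual rate: standard exponential adjustment vs. a linear adjustment
# of the kind the CSM applies (dividing the annual rate by a factor).
rate = 0.06
payment = 100.0

standard = payment / (1 + rate) ** 0.5   # geometric half-year factor
linear = payment / (1 + rate / 2)        # annual rate divided by 2

print(round(standard, 4))
print(round(linear, 4))
# The linear divisor (1.03) slightly exceeds the geometric one
# (about 1.0296), so the linear-style present value is slightly lower.
```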
For example, the standard adjustment to the discounting equation for the semiannual discounting of cash flows occurring at the end of each 6-month period is the square root of (1 + rate) while the CSM uses (1 + rate/2). This results in CSM present values that are slightly lower than those calculated using standard geometric formulas. Because these equations are embedded in the CSM’s source code, users and their auditors are unable to mitigate this limitation. To resolve this problem, OMB should revise the computer source code so that the net present value calculations reflect standard discounting equations. The second limitation arises for programs that disburse loans over several years. FCRA requires that cash flows be discounted to the time of disbursement. OMB interprets the FCRA “time of disbursement” for calculation purposes as the “fiscal year of disbursement.” Consequently, in cases where programs disburse over several years, precisely calculating subsidies requires that agencies prepare cash flows clearly associated with each disbursement so that these cash flows can be discounted to the year of disbursement. Because disbursement year cash flows cannot always be provided due to limited agency accounting systems and credit program data, the CSM permits less detailed, aggregated cohort level data to be used as an approximation. If cohort level data are used, the CSM uses one of two methods to disaggregate the cash flows into portions that are attributable to the amounts that are disbursed in each year. However, the use of cohort level data can introduce distortions that result from (1) the disaggregation of the cohort level data and (2) the CSM’s averaging of discount rates for programs where discount rates differ for each disbursement year. Agencies can eliminate the impact of this limitation by using disbursement year data, when available, rather than cohort level data. The third limitation involves rounding. 
Because of rounding, and particularly in programs that have disbursements over several years, the calculated subsidy will be less precise if an inappropriate scale is used in the cash flow data. If the data are presented in millions and the actual values are in thousands, a significant amount of data may be lost when the CSM rounds to three decimal places. This effect is most pronounced when a large portion of program cash flow items are very small, since rounding of smaller dollar values increases the risk that the rounded values will be materially different from the actual values. For example, if a series of underlying values in millions of dollars is 0.0054, 0.0054, 0.0054, the CSM will round each to 0.005—losing 0.0004, or roughly 8 percent in each case, which may be significant. If these values were expressed in thousands of dollars (5.400 instead of 0.0054), none of the underlying values would be lost due to rounding.
Reliable Subsidy Calculations Also Require Quality Cash Flow Data, Proper Use of the CSM, and Management Oversight
When assessing the reliability of the CSM's subsidy rate calculations, we found it useful to remember the important but limited role that the CSM has in the credit reform process. Reliable subsidy calculations also require quality cash flow data, clear guidance from OMB and proper use of the CSM by credit agencies, and close management oversight by both the credit agency and OMB. Because the CSM is essentially a calculator that processes estimated cash flows provided by the credit agency, its subsidy calculation is only as reliable as the agency-generated cash flow data. In the audits of credit agencies' financial statements for fiscal year 1995, significant weaknesses were identified with the quality of cash flow estimates and supporting data. 
For example, the Department of Agriculture, which has the federal government's largest balance of loans receivable, received a qualified audit opinion on its Rural Development component financial statements, in part, because of inadequately supported cash flows. Fiscal year 1996 financial statement audit results available as of July 1997 indicate that credit agencies generally are still having difficulty preparing quality, well-supported cash flows that comply with FCRA and SFFAS No. 2 requirements. Staff from GAO, OMB, and credit agencies are currently working together to develop approaches to improve cash flow estimates. Although the basic function of the CSM—to discount cash flows to the year of disbursement—is conceptually straightforward, use of the CSM can be complex because of the various options available and types of data to be entered. Consequently, proper use of the CSM requires sufficient, clear guidance from OMB on what the CSM options are and how best to use them to reflect the characteristics of credit agency loan programs. Also, credit agency officials must recognize that use of the CSM requires not only adequate knowledge of credit agency loan programs but familiarity with the concepts contained in FCRA and SFFAS No. 2. Moreover, given the complexity inherent in developing cash flow spreadsheets and using them with the CSM in subsidy calculations, agency management must exercise proper oversight to ensure that cash flow data are of high quality, the CSM is used properly, and controls surrounding the preparation of cash flows and the calculation of subsidies are adequate and operating as intended. Finally, given the role assigned to it by FCRA, OMB must oversee agencies' credit reform implementation even though responsibility for preparing cash flows is with the credit agencies. We recently had the opportunity to illustrate the need for adequate oversight by credit agencies and OMB. 
In our July 16, 1997, testimony before the House Committee on Small Business, we reported on the estimates of credit subsidy for the Small Business Administration’s (SBA) guaranteed business loan and certified development company programs—more commonly called the “7(a)” and “504” programs, respectively. We reported on an error in SBA’s cash flow spreadsheet that we had uncovered in the calculation of the fiscal year 1997 subsidy costs for the 7(a) program. A critical cell in SBA’s cash flow spreadsheet was based on the number of dollars guaranteed instead of the number of dollars disbursed, that is, the total face amount of the loans. (SBA projected that it would guarantee on average about 76 percent of the fiscal year 1997 loan cohort.) As a result of this error, SBA’s estimated credit subsidy rate was higher by about 32 percent (1 divided by 0.76, the average guaranteed portion of loans disbursed by private lenders). This error went unnoticed by both SBA and OMB staff responsible for reviewing the 7(a) credit subsidy rate estimate. If those staff had compared the component data generated by the CSM for the erroneous fiscal year 1997 estimate with the components of the fiscal year 1996 estimate, they would have seen an unexplainable increase in the fee revenue component (there was no increase in the fee rates charged). According to SFFAS No. 2, subsidy estimate component data should be used to monitor and make decisions about the federal government’s credit programs. In 1995, the Governmentwide Credit Reform Subgroup was formed to resolve issues faced by (1) agencies in implementing credit reform and preparing quality cash flow data and (2) auditors reviewing credit subsidy estimates. An issue paper prepared by the Subgroup, Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act, is expected to be issued during fiscal year 1998. 
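The size of the overstatement in the SBA 7(a) example described above follows directly from the arithmetic (the 76 percent average guaranteed share is the figure reported in the testimony):

```python
# SBA's spreadsheet keyed a critical cell to dollars guaranteed rather
# than total dollars disbursed. Because only about 76 percent of each
# loan was guaranteed on average, dividing by the smaller base inflated
# the estimated subsidy rate by a factor of roughly 1 / 0.76.
guaranteed_share = 0.76
inflation_factor = 1 / guaranteed_share      # about 1.316
overstatement = inflation_factor - 1         # about 0.32, i.e., 32 percent

print(f"{overstatement:.0%}")
```

A simple year-over-year comparison of the CSM's subsidy component output, as SFFAS No. 2 contemplates, would have flagged the resulting unexplained jump in the fee revenue component.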
Controls and Documentation Should Be Improved
Ernst & Young's report includes the following control weaknesses surrounding the development, maintenance, and use of the CSM. First, the CSM was not designed, and is not maintained, in accordance with the validation, verification, and testing (VV&T) approach to software development. VV&T is a process of review, analysis, and testing employed throughout a structured system development lifecycle to ensure the production and maintenance of quality, reliable software. Second, the CSM program was developed and tested by a single programmer and was not independently tested to ensure that its functionality met the initial design request. Ernst & Young noted that the loss or absence of the original programmer may substantially hinder significant modification of the current program. Third, documentation provided to CSM users contains several errors and omissions, and exists in several pieces. Fourth, OMB's storage of the program source code is insufficient to protect against loss, destruction, and corruption. Fifth, agencies visited by Ernst & Young were using the CSM without logical access controls to prevent unauthorized access. Finally, because it is difficult to verify which data the CSM used to calculate the subsidy, the CSM printed output should be enhanced. Three recommendations were made to improve controls over the CSM, and OMB credit reform staff generally agreed with them. Specifically, OMB staff agreed that (1) future revisions to the CSM will be accompanied by more detailed and complete documentation of the validation, verification, and testing of software, (2) documentation will be improved and expanded to correct for errors and omissions, and (3) the CSM printed output should be enhanced to provide an audit trail showing which data the CSM used to calculate the subsidy. However, OMB staff expressed concerns about some of the findings and one recommendation relating to controls over the CSM. 
Although OMB acknowledged in its assertions that it did not have a structured and documented VV&T process for developing and testing the CSM, OMB staff told us that the CSM had been developed through extensive discussions among OMB and agency staffs and had been tested over several years by CSM users at credit agencies as well as by OMB credit reform staff. OMB staff also emphasized that computer access controls are an agency’s responsibility and noted that current versions of desktop operating systems have password protection and other controls. Moreover, OMB said that the source code is stored on-site and off-site, in digital tape, fixed disk, and CD-ROM formats and that these storage media are adequate to prevent loss, destruction, or corruption. Finally, OMB’s position is that the loss of the original CSM programmer would not seriously affect future modifications of the program since (1) there is no immediate or urgent need for modifications to the CSM, so replacement staff would have ample time to familiarize themselves with the CSM, (2) other OMB staff or contract personnel could easily make such modifications by using the existing source code, knowledge of the programming language, and familiarity with credit reform concepts, and (3) the CSM is more likely to be replaced than modified. We believe that the improvements to the control environment surrounding the CSM agreed to by OMB, especially the use of VV&T or a similar process, will resolve the major control issues raised by Ernst & Young. Although we recognize that user agencies have ultimate responsibility for computer access controls, agencies clearly need guidance on properly controlling access to the CSM—Ernst & Young’s visits to seven user agencies found that none of them had logical access controls over the personal computers containing the CSM. We believe that OMB guidance on proper controls over access to the official agency copy of the CSM can be easily and quickly communicated to agency staff. 
In addition, since the completion of Ernst & Young's work, we have confirmed that OMB has adequate storage of the CSM source code to prevent loss, destruction, or corruption.
Revised CSM to Be Released After June 1998
OMB staff told us that they are considering improvements to the CSM, including a refinement of methods, more detailed output, improved documentation, and other improvements identified in the management assertions and, where appropriate, recommendations from the Ernst & Young report. Also, before releasing this improved version, OMB staff are considering whether to have an audit of the CSM calculations. OMB staff told us that the release of the new version of the CSM will be no earlier than June 1998. OMB staff also told us that they would recommend an interim release of the CSM, prior to the major release described above, if there were a change in law or other requirements or if a significant defect in the calculations was identified. However, in the OMB staff's judgment, the relatively minor improvements that they believe could be accomplished in an interim update must be weighed against what they believe will be a substantial effort, mainly by agencies, to reinstall the model on hundreds of computers and train staff in the changes from the previous release. As of July 1997, OMB staff told us that they have found no evidence that an interim update is required. Further, OMB staff noted that OMB's management assertions, which Ernst & Young concluded are "fairly stated in all material respects," state that the effect of limitations in the current release of the CSM, based on cases reviewed to date, "have not revealed any instance in which such differences were significant."
Procedures Auditors Should Perform to Ensure Proper Use of the Credit Subsidy Model
OMB's assertions and Ernst & Young's report pointed out that proper use of the CSM is the responsibility of the user agencies. 
This responsibility includes using proper cash flow data, correctly installing the appropriate CSM version, and making correct choices from available CSM options to accurately reflect specific credit program characteristics. In contracting with Ernst & Young, we did not ask the firm to determine whether agencies are properly using the CSM. Therefore, to ensure that CSM subsidy calculations are correct, auditors will need to, among other things, obtain assurance that agencies are using the CSM properly. With assistance from the Federal Audit Executive Council, credit agencies' inspectors general, representatives of the Governmentwide Credit Reform Subgroup, and OMB's credit reform staff, we identified supplemental audit procedures to be performed in audits of federal credit agencies and subsidy calculations. These procedures are listed in appendix II.
Conclusions
Taken together, OMB's assertions on the CSM's capabilities, Ernst & Young's report, and the audit procedures included in this report should provide federal credit agencies and their auditors with a better understanding of how the CSM functions and additional guidance on proper use of the CSM. Although generally agreeing with Ernst & Young's recommended steps for improving the CSM, OMB staff believe that an immediate release of a revised, improved CSM would not be worth the costs involved. OMB staff further note that they have found no evidence that the limitations in the current release of the CSM have had a material impact on subsidy calculations. Thus, they propose waiting until they have decided upon various policy matters and other changes to the CSM before they issue a revised version of the CSM. While this may be reasonable, we believe that the lack of adequate access controls at user agencies should be corrected immediately. 
Recommendations
Based on our review of OMB's assertions and Ernst & Young's report, we recommend that the Director of OMB ensure that guidance is provided to user agencies to establish logical access controls surrounding use of the CSM. In addition, we recommend that the Director of OMB ensure that the following steps are taken in developing the next revision to the CSM: (1) revise the discounting equations in the CSM to follow standard financial conventions, (2) strengthen controls over the CSM by implementing a VV&T or similar process, (3) improve the CSM documentation to correct the mistakes and omissions noted in OMB's assertions and Ernst & Young's report, and (4) enhance the CSM printout with additional data so that users and auditors are able to identify specifically which data were used by the CSM in the subsidy calculations. Within 60 days of the date of this letter, we would appreciate receiving a written statement on actions taken to address our recommendations. We are sending copies of this report to the Senate and House Appropriations and Budget Committees, the Senate Committee on Governmental Affairs, and the House Committee on Government Reform and Oversight. We are also sending copies to the chief financial officers and budget officials at federal credit agencies; the inspectors general with audit responsibilities for these agencies; and other interested parties. Copies will also be made available to others upon request. If you have any questions about this report, please call McCoy Williams, Assistant Director, at (202) 512-6906. Major contributors to this report are listed in appendix III. 
Ernst & Young's Report Including OMB's Assertions
Audit Procedures to Verify Proper Use of the Credit Subsidy Model
Proper use of OMB's Credit Subsidy Model (CSM) requires that user agencies correctly install the appropriate CSM version, make correct choices from available CSM options and commands to accurately reflect specific credit program characteristics, control access to the CSM, and understand the CSM's capabilities and limitations. With assistance from the Federal Audit Executive Council, credit agencies' inspectors general, representatives of the Governmentwide Credit Reform Subgroup, and OMB's credit reform staff, we identified the following audit procedures that should be performed to ensure proper use of the CSM. Comprehensive guidance on auditing credit reform subsidy estimates is included in Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act, a draft issue paper prepared by the Governmentwide Credit Reform Subgroup, which is expected to be issued during fiscal year 1998. The audit procedures discussed in the following sections should be used in conjunction with those presented in the issue paper. Additionally, these procedures are intended to provide audit guidance that may not be applicable in all situations. The auditors should use professional judgment in determining which are applicable to the agency they are auditing.
Ensure Use of an Appropriate and Unmodified Version of the CSM
Since 1990, OMB has periodically revised the CSM to add enhancements, make methodology changes, and otherwise improve its operation. Different versions of the CSM may produce slightly different subsidy rates. As of July 1997, the current version of the CSM was Version r.9, dated August 1, 1994. We expect that OMB will, on occasion, release new versions of the CSM. In addition, although it may be unlikely, the agency's computer file of the CSM may become modified intentionally or accidentally. 
Therefore, the auditor should obtain the appropriate version of the CSM for the fiscal year under audit by contacting the agency's OMB budget examiner. This version should be compared with the version used by the agency in its subsidy calculations. To verify that the agency's version of the CSM is unmodified, the auditor should use the "file compare" feature of desktop operating software to compare the agency's version with the OMB official, approved version. If the two versions are the same, the auditor can conclude that the agency's version is unmodified. If they differ, the auditor should bring this to the attention of agency management and the OMB budget examiner and obtain an explanation for the differences. Finally, as the ultimate check, the auditor can calculate the subsidy rate using the agency's cash flows, the agency's version of the CSM, and OMB's version of the CSM and compare the results.
Verify That Approved Cash Flow Data Is the Same Data Used by the CSM to Calculate the Subsidy Rate
The user agency should provide the auditor with the approved cash flow data that support its credit program subsidy rate for each of the credit programs selected for internal control and substantive testing. (Cash flow data will be available from electronic spreadsheet files in a format prescribed by the CSM User's Guide.) The auditor should verify that these data were, in fact, the same data used by the CSM to calculate the applicable subsidy rate. The spreadsheet file name, the range name, and the date and time the spreadsheet was last changed are included in the printed CSM output. The auditor can check this information against the named spreadsheet file provided by the agency to verify the cash flow data used in the CSM's subsidy calculation. However, if the spreadsheet file provided by the agency was changed after the subsidy calculation, the date and time stamp on the spreadsheet file will not match what is on the CSM output. 
In this case, the CSM output will not provide sufficient information to verify the cash flow data used by the CSM. Therefore, the auditor will need to use other methods. One method is to recalculate the subsidy rate using the cash flow data provided by the agency and the auditor’s copy of the appropriate version of the CSM obtained from the applicable agency’s OMB budget examiner. If the recalculated subsidy rate is the same as the subsidy rate under audit, the auditor should be able to conclude that the cash flow data provided by the user agency was the same data used by the CSM. If the recalculated subsidy rate is different, the auditor should bring this to the attention of agency management and the OMB budget examiner and obtain an explanation for the difference. Follow Up on Error Messages Prior to calculating a subsidy rate, the CSM performs several edits on agency-generated cash flows to help ensure that cash flow data do not contain obvious errors. If the CSM edit process identifies a serious error, the CSM will issue an error message and terminate its operation without calculating a subsidy. However, if the CSM edit process determines an error to be less serious, it will issue a “warning” but will not terminate the program. Warnings will be listed with the subsidy rate calculation on CSM output sent to a printer. The auditor should review CSM output to identify whether any warning messages are listed and follow up with agency management to determine why the situation causing the warning message was not resolved and whether not eliminating the error could have any impact on the subsidy rate calculation. In addition, the CSM provides options for the user to suppress certain warning messages. For example, when cumulative scheduled principal payments do not equal disbursements, a warning message is normally issued. If the agency has suppressed this warning, auditors should determine whether this suppression is appropriate. 
This concern applies to other warning messages as well. Specifically, the auditor should check the agency’s cash flow spreadsheet to determine whether the “suppress warnings” command was used. If so, the auditor should request that the agency explain why warning messages were suppressed and, if certain warning messages are suppressed, whether conditions exist that would cause those messages to be generated, and whether the warning indicates a material problem in the cash flows. Ensure That Options Chosen Properly Reflect Specific Characteristics of Each Credit Program Proper use of the CSM requires that the agencies select the appropriate options from those available (see Chapter III of the CSM User’s Guide, Version r.9) and use the appropriate Treasury rate to discount cash flows to net present value. Particular care should be used in reviewing the choice of timing options for the principal and interest payments in direct loan programs. When a row of cash flows for scheduled principal or interest payments is prepared using standard financial formulas (which assume disbursements at the beginning of the period and payments at the end of the period), the “simple annual” option should be used. In contrast, when estimates of interest and principal payments are based on the assumption that these payments occur continuously throughout the year, the timing option row of cash flows should be “continuous.” When the wrong timing option is used for scheduled principal or interest payments, the financing subsidy may be materially distorted. The auditor may also want to review the choice of timing options for payments and receipts other than principal and interest, although the effects of these distortions are generally smaller. Care should also be exercised when reviewing cash flows for loan guarantee programs that guarantee less than 100 percent of the face value. 
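The sensitivity to the timing option discussed above can be illustrated with a simplified discounting sketch. These are not the CSM's actual formulas: the end-of-period term is the standard financial convention, and the mid-year exponent is a common textbook proxy for payments spread continuously through the year.

```python
def pv_end_of_period(cash_flow, rate, year):
    # "Simple annual" convention: the payment is received at the end of the year.
    return cash_flow / (1 + rate) ** year

def pv_mid_year(cash_flow, rate, year):
    # Rough proxy for "continuous" timing: payments occur throughout the year,
    # so on average they are discounted for half a year less.
    return cash_flow / (1 + rate) ** (year - 0.5)

# A $100 payment in year 5 discounted at a 6 percent Treasury rate:
end = pv_end_of_period(100.0, 0.06, 5)   # about 74.73
mid = pv_mid_year(100.0, 0.06, 5)        # about 76.93
```

The roughly 3 percent gap between the two present values shows why choosing the wrong timing option for scheduled principal or interest payments can materially distort the financing subsidy.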
As indicated in the User’s Guide, the amount in the cash flow row for “disbursement of loans by private lenders” is the total amount disbursed by the lenders, regardless of how much is guaranteed by the credit agency. The amount of disbursed loans guaranteed by the government is included in the row of the cash flow representing the estimate of claims made against the government. For example, if an agency has a program that guarantees 75 percent of loans disbursed, and the lenders disburse $100,000 in loans that immediately default, the agency should put $100,000 in the disbursement by private lenders cash flow row and $75,000 in the cash flow row for defaults. Ensure Proper Scale Has Been Used in Cash Flow Spreadsheets OMB’s assertions state, “The model rounds cash flows to three decimal places when read from spreadsheet files. Because of the rounding, and particularly in programs that have disbursements over several years, the calculated subsidy can change slightly with the scale of the program. This effect is most pronounced when many of the cash flow items are very small after rounding (.005 or .011, for instance). Small values are especially sensitive to the hazards of rounding.” Therefore, agency controls should be in place to ensure that rounding to three decimal places has no significant effect on the spreadsheet values and, in turn, the calculated subsidy. For example, if a series of underlying values, in millions of dollars, are 0.0054, 0.0054, 0.0054, the CSM will round each to 0.005—losing 0.0004 in each case, which could be significant. In this situation, the agency should express values in thousands of dollars so that the underlying values are 5.400, 5.400, 5.400—losing nothing in the rounding—in order to obtain a more precise subsidy rate calculation. The auditor should confirm that management controls are adequate to ensure that the cash flows contain the proper scale and that rounding has no significant effect on the subsidy calculation. 
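The scale effect described in OMB's assertions can be reproduced directly. The three-decimal rounding below mimics what OMB states the CSM applies to values read from spreadsheet files; the cash flow values are those from the example in the text.

```python
def csm_style_round(values):
    # OMB's assertions state the CSM rounds cash flows to three decimal
    # places when reading spreadsheet files.
    return [round(v, 3) for v in values]

# The same three underlying cash flows expressed in millions and in
# thousands of dollars.
in_millions = [0.0054, 0.0054, 0.0054]
in_thousands = [5.400, 5.400, 5.400]

# In millions, each value loses 0.0004 to rounding; in thousands, nothing
# is lost, so the subsidy calculation is more precise.
lost_millions = sum(in_millions) - sum(csm_style_round(in_millions))
lost_thousands = sum(in_thousands) - sum(csm_style_round(in_thousands))
```

This is the check underlying the control the auditor should confirm: that the agency has chosen a scale at which three-decimal rounding has no significant effect on the spreadsheet values.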
If these controls are not adequate, the auditors should review the cash flow spreadsheet to ensure that the scale used is appropriate. The auditor should also bring the situation to the attention of agency management. Determine Whether Cash Flows Are Prepared at Appropriate Level of Detail The CSM permits spreadsheet cash flow data to be prepared on a disbursement year basis or a cohort basis. (A disbursement year consists of all loans from a given cohort that are disbursed in a given fiscal year.) For the special case in which all disbursements occur during a single fiscal year, the disbursement year includes the entire cohort and these bases do not differ. However, for loan programs with cohorts that disburse over more than one year, the disbursement year includes just part of the cohort. For such programs, the cash flows for each disbursement year of a given cohort are necessary to precisely calculate subsidies at the time of disbursement. Because agencies cannot always provide such detail, the CSM permits less detailed cohort level data—combinations of 2 or more disbursement years—to be used as an approximation. But the use of cohort level data can introduce distortions. For example, a loan program can be expected to have a zero financing (interest rate) subsidy if the borrower rate is the same as the discount rate. However, if a program disburses loans over 2 or more years, cohort rather than disbursement year cash flows are used, and the discount rates are not held constant in all disbursement years, the CSM will calculate a non-zero subsidy. Therefore, whenever a loan program has substantial disbursements in 2 or more years and the agency has prepared cash flows using cohort level rather than disbursement year data, the auditor should determine why disbursement year cash flows were not used. Specifically, if there are reasons why disbursement year cash flows cannot be prepared, these reasons should be documented. 
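The zero-subsidy benchmark mentioned above can be verified with a deliberately simplified sketch: a single disbursement repaid in one payment, which is far less detailed than the CSM's actual cash flow treatment but shows why equal borrower and discount rates should produce a zero financing subsidy.

```python
def financing_subsidy(principal, borrower_rate, discount_rate, years):
    # Government disburses `principal` now; the borrower repays principal
    # plus compound interest in a single payment after `years` years.
    repayment = principal * (1 + borrower_rate) ** years
    pv_of_repayment = repayment / (1 + discount_rate) ** years
    # Net present value cost to the government at time of disbursement.
    return principal - pv_of_repayment

# Borrower rate equal to the discount rate: repayments exactly offset the
# disbursement, so the financing subsidy is zero.
zero_case = financing_subsidy(100.0, 0.07, 0.07, 10)

# Borrower rate below the discount rate: the loan costs the government money.
subsidy = financing_subsidy(100.0, 0.05, 0.07, 10)
```

When cohort-level cash flows mix disbursement years with different discount rates, this exact offset breaks down, which is the distortion the text describes.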
On the other hand, if disbursement year cash flows are available, the auditor should determine whether the use of cohort level cash flows has had a material effect on the subsidy calculation. A determination that an effect is material should take into account the size of the difference in absolute terms and relative to the subsidy, the effect on the level of loans supported by the subsidy, and other factors the auditor may consider important. If the auditor determines that the effect is material, the auditor should recommend that the agency prepare cash flows on a disbursement year basis to eliminate the problem. If the agency is unable to do this, the auditor should exercise professional judgment to determine whether there is a potential for material misstatement and whether this situation would affect the ability to conclude on the fairness of the amounts in related accounts. Compare Cash Flow Spreadsheet and Related Subsidy Rate With Prior Years Credit reform and the CSM require credit agencies to develop spreadsheets of projected cash flows, which must be presented in a prescribed format and require the spreadsheet preparer to choose among various commands and options that properly characterize each credit program. Once an auditor has determined that a spreadsheet contains the proper format, commands, options, etc. for the credit program, then the auditor can have some assurance about future years’ cash flows with the same formats, commands, options, etc. If changes in formats or commands on the cash flow spreadsheets have been made, auditors should discuss with agency officials why such changes were made, including what the changes are intended to accomplish. An auditor may wish to use analytical procedures each year to confirm that any changes to the credit program are properly reflected in the spreadsheet and that changes to the spreadsheet and associated subsidy rate, including components, are reasonable. 
For example, if an agency’s fee structure has not changed, the auditor should expect the subsidy rate component attributable to fees to remain the same. Evaluate Agency Security Controls Over CSM Access OMB’s assertions state that agencies are responsible for ensuring that the CSM has not been corrupted or otherwise inappropriately changed. Such assurance requires that agencies have procedures in place to limit access to the CSM to authorized personnel only. For example, the auditor might expect to find procedures to confirm password protection on the desktop workstation where the CSM resides. The auditor should review these procedures to verify that they are in place and adequately protect the CSM from unauthorized use and corruption. Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. Office of the Chief Economist Harold J. Brumm, Jr., Economist
Pursuant to a legislative requirement, GAO reviewed the Office of Management and Budget's (OMB) Credit Subsidy Model (CSM), focusing on whether the CSM: (1) conforms with relevant provisions of applicable legislation and accounting standards; (2) provides reliable results; and (3) is maintained and operated under a system of adequate controls. An additional objective was to identify supplemental audit steps that auditors should perform to ensure that federal credit agencies are using the CSM properly. GAO contracted with the independent public accounting firm of Ernst & Young LLP to evaluate OMB's written representations (assertions) about the CSM's capabilities and opine on whether they are fairly stated in all material respects. GAO noted that: (1) OMB's assertions on the CSM thoroughly explain the CSM's capabilities, limitations, and user agency responsibilities; (2) Ernst & Young concluded that OMB's assertions on the CSM are fairly stated in all material respects and recommended several steps OMB should take to improve the reliability of CSM results and controls surrounding it; (3) based on GAO's review of Ernst & Young's work, GAO generally concurs with its conclusion and recommendations; (4) the Federal Credit Reform Act of 1990 and related federal accounting standards define the cost (subsidy) of a direct loan or loan guarantee as the estimated long-term cost to the government on a net present value basis at the time when a loan is disbursed; (5) the operation of the CSM conforms with this definition in that the model computes a subsidy cost by calculating the estimated net present value, at the time of loan disbursement, of agency-generated cash flows over the life of the loan; (6) OMB's assertions state that because of several limitations in the CSM's design, the subsidy cost calculated by the CSM may differ from a theoretically precise result; (7) for all but one of the limitations, credit agencies and their auditors can take steps to minimize
or eliminate the impact of the limitations on the subsidy cost calculation; (8) the impact on the subsidy cost calculation of the limitation involving the use of nonstandard equations for discounting certain projected cash flows, however, is more difficult to evaluate and cannot be minimized by credit agencies and their auditors; (9) several weaknesses were identified relating to controls surrounding the development, maintenance, and use of the CSM; (10) GAO believes that if OMB implements a validation, verification, and testing approach (VV&T) or similar process, improves documentation, and provides guidance to credit agencies on controlling access to the CSM, the basic control weaknesses identified by Ernst & Young will be addressed; (11) OMB's assertions also state that user agencies are responsible for properly using the CSM; (12) consequently, when obtaining assurance that CSM subsidy cost calculations are correct, auditors will need to ensure that agencies are properly using the CSM; and (13) to help auditors obtain this assurance, GAO identified, with assistance from OMB staff, a series of supplemental audit procedures for auditors to follow when auditing federal credit agencies' financial statements and subsidy cost calculations.
Background According to a September 2011 DHS/FBI joint bulletin, more than 68 percent of general aviation aircraft registered with the Federal Aviation Administration are personally owned aircraft—mostly small, single- or twin-engine propeller aircraft—used for recreation or personal transportation. Corporate- or business-owned aircraft compose approximately 15 percent of general aviation aircraft. FAA data indicate that about 63 percent of general aviation aircraft are single-engine piston aircraft, while about 4 percent are turboprop. Figure 1 shows the composition of the general aviation fleet. Pursuant to ATSA, TSA assumed from FAA responsibility for securing the nation’s civil aviation system. Consistent with its statutory obligations, TSA has undertaken a direct role in ensuring the security of commercial aviation through its performance and management of the passenger and baggage screening operations at TSA-regulated airports, among other things. In contrast, TSA has taken a less direct role in securing general aviation, in that it generally establishes standards that operators may voluntarily implement and provides recommendations and advice to general aviation owners and operators, except to the extent such operations fall under existing TSA security requirements or where otherwise specifically directed by statute. Responsibility for securing general aviation airports and aircraft is generally shared with state and local governments and the private sector, such as airport and aircraft owners and operators. TSA and Aircraft Operators Have Taken Actions to Secure General Aviation; TSA Obtains Information through Outreach and Inspections TSA has worked to enhance general aviation security by developing various security programs and working with aviation industry stakeholders to enhance their security efforts through the development of new security guidelines.
The agency works to obtain information on the security practices of industry stakeholders through compliance inspections and outreach and is working with its industry partners to develop new security regulations. TSA and Industry Efforts to Enhance General Aviation Security As shown in table 1, TSA and other industry stakeholders have taken a number of actions to enhance general aviation security. Among other measures, TSA worked with members of the General Aviation Working Group of the Aviation Security Advisory Committee in 2003 and 2004 to develop recommended guidelines for general aviation airport security. A more detailed list of federal, state, and industry general aviation security initiatives can be found in appendix II. Independent of regulatory requirements, operators of private general aviation aircraft not covered under existing security programs we spoke to indicated that they implement a variety of security measures to enhance security for their aircraft. For example, 7 of the 12 operators that perform as private operators that we interviewed stated that they park their aircraft in hangars to protect them from possible misuse or vandalism. Further, 2 of the 12 operators stated they had hired security personnel to guard their aircraft if they are required to stay at an airport without hangar facilities. Seven of the 12 operators stated that they implement these security measures because of security concerns associated with operating their aircraft. For example, the 7 operators stated that their aircraft represent a major investment for their company and help generate a stream of income that must be protected, and that protecting the well-being of senior executives was a priority. TSA Inspections and Industry Outreach TSA obtains information directly from aircraft operators that fall under the Twelve-Five and Private Charter security programs (see fig. 
2) through its review and approval of the security programs developed by these operators and through periodic inspections to determine the extent to which operators comply with their security programs. TSA Transportation Security Inspectors are responsible for conducting these periodic inspections and determining whether operators are in compliance with program requirements or whether a violation has occurred. As part of the inspection process, TSA inspectors examine certain key security areas with respect to Twelve-Five and Private Charter operations, including the roles and responsibilities of aircraft operator personnel and whether the operator has procedures for addressing emergencies. For example, TSA’s 2009 Inspector Handbook provides guidance to TSA inspectors to examine, among other things, whether aircraft operators under its security programs ensure that individuals who do not have valid identification are denied boarding, ensure that passenger identification documents are checked, and have adequate procedures for addressing incidents where indications of tampering or unauthorized access of aircraft are discovered. Inspectors are required to record inspection results, including any violations of program requirements, in TSA’s PARIS database and to close the violations when the problem is resolved. Violations may be resolved with on-the-spot counseling; however, some violations may result in TSA sending a warning notice to the operator or in civil penalties for the operator. If warranted, follow-up inspections may be conducted, based on any findings made during an inspection. TSA officials stated that inspection results in PARIS are used to inform TSA of security challenges that may be faced by aircraft operators and to allow the agency to better address security concerns expressed by these operators.
TSA inspection data indicate that from 2007 through 2011, aircraft operator compliance with security requirements has been well over 90 percent and has generally increased. TSA officials attribute the increase in compliance to a better understanding of security program requirements by operators, and to increased TSA outreach. Agency data illustrate that the reasons for noncompliance among aircraft operators varied. For example, in fiscal year 2011, inspectors found that Private Charter aircraft operators did not always provide advance notice to the Federal Security Director of upcoming private charter operations or of subsequent changes or additions, which occurred in 7 percent of 424 inspections for this item. Program compliance violations detected by inspectors were sometimes resolved either by counseling with the aircraft operator or by initiating an investigation of the incident, which could result in TSA issuing a warning notice or civil penalties being assessed. In addition to taking steps to obtain information on security measures enacted by general aviation aircraft operators that fall under TSA security programs, the agency has also taken steps to obtain information on security measures implemented by general aviation airport operators. Specifically, the 9/11 Commission Act required TSA to, among other things, develop a standardized threat and vulnerability assessment program for general aviation airport operators and implement a program to perform such assessments on a risk management basis. To help comply with the act’s requirement, TSA distributed a survey in 2010 to approximately 3,000 general aviation airports to identify any vulnerabilities at the airports, and received responses from 1,164 (39 percent) of the airports. 
In this survey, airport officials were asked to respond to questions on security measures implemented by the general aviation airport operators, such as whether hangar doors were secured when unattended, and whether the airport had closed-circuit camera coverage for hangar areas. This survey also included questions about the types of perimeter fencing and physical barriers installed, as well as the type of security measures in use at these airports. The survey found that, while most general aviation airports had initiated some security measures, the extent to which different security measures had been implemented varied by airport. For example, survey results indicated that more than 97 percent of larger general aviation airports responding to the survey had developed an emergency contact list, but less than 19 percent had developed measures to positively identify passengers, cargo, and baggage. The survey also found that nearly 44 percent of airports responding to the survey required security awareness training for all tenants and employees and more than 48 percent of airports had established community watch programs. According to TSA officials, the results of the survey were analyzed to identify the general strengths and weaknesses in the general aviation community, and to show an overall picture of general aviation security measures at a national and regional level. In addition, TSA officials said that the information collected in the survey can be used to help determine a plan of action to mitigate security concerns at general aviation airports. For example, TSA used the survey to identify approximately 300 airports that it considers to be higher risk and could therefore be prioritized to receive security grants, should they become available. TSA officials added that information from the survey allowed the agency to establish a baseline for security measures in place at general aviation airports. 
In addition to the survey, TSA also gathers information on security measures implemented by operators through outreach activities its inspectors conduct at general aviation airports, designed to establish a cooperative relationship with general aviation airport stakeholders and encourage voluntary adoption of security enhancements. However, TSA officials stated that this type of outreach by its inspectors is not mandatory and therefore is not conducted regularly. In addition, while inspectors are encouraged to record results of these outreach visits in PARIS, inspectors do not always do so in practice. Additional Security Measures Taken by Operators According to aviation industry officials, there are approximately 9,900 general aviation aircraft over 12,500 pounds not covered under either the Twelve-Five or Private Charter security programs. Analysis by the Homeland Security Institute indicates that some of these larger aircraft may be able to cause significant damage in terms of fatalities and economic costs, particularly general aviation aircraft with a maximum takeoff weight of 71,000 pounds. According to industry data, there are over 800 general aviation aircraft weighing over 71,000 pounds. TSA officials we spoke to stated that, unlike for aircraft that fall under the Twelve-Five or Private Charter security programs, the agency does not have a systematic mechanism to collect information on the security measures implemented by other general aviation aircraft operators that do not fall under TSA security programs. Rather, the agency has developed informal mechanisms for obtaining information on security measures enacted by these operators, such as outreach conducted by TSA inspectors, and has contacted general aviation industry associations to obtain this information as well as obtain information on the concerns of these operators regarding costs and other challenges associated with potential security requirements. 
As previously mentioned, TSA issued a notice of proposed rulemaking for a Large Aircraft Security Program in October 2008, which would have resulted in all general aviation aircraft larger than 12,500 pounds, including those not currently covered under existing security programs, being subject to TSA security requirements and inspections. However, industry associations and others expressed concerns about the extent to which TSA obtained industry views and information in the proposed rule’s development. They also questioned the security benefit of the proposed rule and stated that it could negatively affect the aviation industry given its broad scope. For example, officials from three of the six industry associations we interviewed stated that many of the proposed rule’s measures, such as having third-party contractors conduct inspections of private aircraft operators for a fee, would impose substantial logistical and cost burdens on the general aviation industry. These association officials added that any revised rule that TSA develops must take into account the security measures already put in place by general aviation aircraft operators as well as the costs associated with implementing any additional security measures. TSA managers responsible for general aviation security operations stated that, in response to these concerns, the agency was revising the proposed rule to make it more focused and risk-based, and that the agency plans to issue a supplemental notice of proposed rulemaking in late 2012 or early 2013. Further, officials from all six of the industry associations we interviewed stated that TSA has reached out to industry in developing its new rule and three of the six associations stated that TSA has performed a better job of reaching out to industry in its ongoing development of the new rule than it did with the rule it proposed in 2008. 
For example, the vice president from one association stated that as part of its development of its supplemental notice of proposed rulemaking, TSA has more actively sought information on these security measures, which better allows the agency to ensure the requirements would impose as limited a burden as possible while maximizing security. He also stated that TSA periodically solicits information on its proposed rule and on industry security measures from industry associations through its Aviation Security Advisory Committee. Weaknesses Exist in Processes for Conducting Security Threat Assessments and for Identifying Potential Immigration Violations TSA has not ensured that all foreign nationals seeking flight training in the United States have been vetted through AFSP prior to beginning this training or established controls to help verify the identity of individuals seeking flight training who claim U.S. citizenship. TSA also faces challenges in obtaining criminal history information to conduct its security threat assessments as part of the vetting process, but is working to establish processes to identify foreign nationals with immigration violations. Foreign Nationals’ Security Threat Assessments Some foreign nationals receiving flight training may not have undergone a TSA security threat assessment. Under AFSP, foreign nationals seeking flight training in the United States must receive a TSA security threat assessment before receiving flight training to determine whether each applicant is a security threat to the United States. This threat assessment is in addition to screening that the Department of State conducts on foreign nationals who apply for nonimmigrant visas and that U.S. Customs and Border Protection conducts on travelers seeking admission into the United States at ports of entry. 
According to TSA regulations, an individual poses a security threat when the individual is suspected of posing, or is known to pose, a threat to transportation or national security, a threat of air piracy or terrorism, a threat to airline or passenger security, or a threat to civil aviation security. When a foreign national applies to AFSP to obtain flight training, TSA uses information submitted by the foreign national—such as name, date of birth, and passport information—to conduct a criminal history records check, a review of the Terrorist Screening Database, and a review of the Department of Homeland Security’s TECS system, as shown in table 2. See 49 C.F.R. § 1540.115(c). Criminal history record checks, which are fingerprint-based, require an adjudicator to review the applicant’s criminal history. According to TSA officials responsible for conducting these reviews, AFSP has no specific disqualifying offenses; however, if a foreign national applying to AFSP has criminal violations, TSA will forward this information to FAA to determine whether the violation disqualifies that individual from holding an FAA certificate. Information in the Terrorist Screening Center’s consolidated database of known or suspected terrorists—the Terrorist Screening Database—is used for security-related screening of foreign nationals applying to AFSP. For example, the Selectee List, a subset of the Terrorist Screening Database, contains information on individuals who must undergo additional security screening before being permitted to board an aircraft. The No Fly List, another subset of the Terrorist Screening Database, contains information on individuals who are prohibited from boarding an aircraft. If a foreign national is on one of these lists, TSA analysts will perform additional research to determine whether he or she is eligible to receive flight training.
TECS, an updated and modified version of the former Treasury Enforcement Communications System, is an information-sharing platform that allows users to access different databases relevant to the antiterrorism and law enforcement mission of numerous other federal agencies. TSA reviews information contained in TECS to determine if an AFSP applicant has prior immigration-related violations. If the AFSP applicant has prior immigration-related violations, such as a previous overstay, TSA will conduct additional TECS queries to determine if the applicant is eligible to obtain flight training. An overstay is an individual who is admitted to the country legally on a temporary basis—either with a visa, or in some cases, as a visitor who was allowed to enter without a visa—but then overstayed his or her authorized period of admission. According to TSA data, about 116,000 foreign nationals applied to AFSP from fiscal year 2006 through fiscal year 2011, and TSA’s AFSP security threat assessments resulted in 107 training requests submitted by foreign nationals being denied from 2006 through 2011 because of national security reasons, immigration violations, or disqualifying criminal offenses. According to TSA officials, most foreign nationals taking training from a U.S. flight training provider will apply for an FAA airman certificate once their flight training is completed. Information obtained by FAA as part of this application for certification is placed in the airmen registry. Consistent with ATSA, TSA strives to coordinate with other federal agencies to secure the nation’s transportation systems. According to TSA, this may include coordinating with FAA and U.S. Immigration and Customs Enforcement (ICE) to identify individuals who pose a threat to transportation security. 
For example, FAA provides TSA with data on individuals new to the airmen registry database on a daily basis, including biographic information on foreign nationals applying for airman certificates based on their foreign license. According to a report by the DHS Office of Inspector General, in early 2009, TSA used these data to perform a one-time, biographic, name-based security threat assessment for each of the 4 million individual FAA airman certificate holders. These security threat assessments consisted of matching the biographic data provided by FAA against the Terrorist Screening Database to determine whether credible information indicated that the individual holding a certificate was involved, or suspected of being involved, in any activity that could pose a threat to transportation or national security. FAA certificate holders suspected of being in the Terrorist Screening Database were referred to TSA’s Transportation Threat Assessment and Credentialing office for investigation. According to FAA, the airman vetting activities had been transferred to TSA in October 2009 after a TSA and FAA work group developed business processes and an interagency agreement was signed. Since then, TSA has vetted both new FAA airman certificate applicants and existing holders on an ongoing basis against the Terrorist Screening Database. In addition to vetting names of FAA airman certificate holders against the Terrorist Screening Database, TSA also vets foreign nationals applying for flight training through the AFSP, including training that occurs before a student applies for an FAA airman certificate.
To determine whether foreign nationals applying for FAA airman certificates had previously applied to AFSP and been vetted by TSA, we obtained data from FAA’s airmen registry on foreign nationals who had applied for airman certificates and provided these data to TSA so that the agency could conduct a matching process to determine whether the foreign nationals in the FAA airmen registry were in TSA’s AFSP database and the extent to which they had been successfully vetted through AFSP. The results of our review of TSA’s analyses are as follows:

- TSA’s analysis indicated that some of the 25,599 foreign nationals in the FAA airmen registry were not in the TSA AFSP database, indicating that these individuals had not applied to AFSP or been vetted by TSA before taking flight training and receiving an FAA airman certificate.

- TSA’s analysis indicated that an additional number of the 25,599 foreign nationals in the FAA airmen registry were in the TSA AFSP database but had not been successfully vetted, meaning that they had received an FAA airman certificate without having been successfully vetted by, or having received permission to begin flight training from, TSA.

As stated previously, TSA continuously vets all new and existing FAA airman certificate holders against the Terrorist Screening Database, which would include the foreign nationals identified through TSA’s analysis. However, this vetting does not occur until after the foreign national has obtained flight training. Thus, foreign nationals obtaining flight training with the intent to do harm, such as three of the pilots and leaders of the September 11 terrorist attacks, could have already obtained the training needed to operate an aircraft before they received any type of vetting.
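The matching process described above amounts to partitioning airmen-registry entries by their status in a second database. A minimal sketch follows, assuming records are keyed on simple biographic fields (name and date of birth); the actual matching criteria and schemas used by FAA and TSA are not public, so these are illustrative assumptions.

```python
def partition_airmen(faa_registry, afsp_records):
    """Partition FAA airmen-registry entries into three groups based on
    their status in the AFSP database: never applied, applied but not
    successfully vetted, and successfully vetted. Illustrative only."""
    # Index AFSP records by a biographic key (name + date of birth).
    afsp_by_key = {(r["name"], r["date_of_birth"]): r for r in afsp_records}

    not_in_afsp, not_vetted, vetted = [], [], []
    for airman in faa_registry:
        record = afsp_by_key.get((airman["name"], airman["date_of_birth"]))
        if record is None:
            not_in_afsp.append(airman)    # never applied to AFSP
        elif record["status"] != "vetted":
            not_vetted.append(airman)     # applied, not successfully vetted
        else:
            vetted.append(airman)         # approved to begin training
    return not_in_afsp, not_vetted, vetted
```

Real biographic matching must also contend with name variants, transliteration, and missing fields, which is one reason apparent mismatches require analyst review before conclusions are drawn.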
In commenting on the results of the analysis, TSA’s Program Manager for AFSP could not explain with certainty why some of the foreign nationals applying for FAA airman certificates may not have been vetted through TSA’s security threat assessment process. The Program Manager stated, however, that certain individuals can receive exemptions from the vetting requirement as a result of a Department of Defense (DOD) attaché endorsement at a U.S. embassy or consulate overseas. TSA takes steps to help ensure that foreign nationals are obtaining security threat assessments prior to beginning flight training. Specifically, TSA regulations require flight training providers to maintain documentation on foreign nationals who receive AFSP approval to begin flight training as well as documentation on those who are taking flight training under DOD endorsements. Similarly, TSA standard operating procedures for inspectors indicate they should review documentation over the course of their inspections of the flight training provider, including documentation indicating the foreign national was approved for flight training under AFSP and, if available, the DOD endorsement letter identifying the foreign national in question as a DOD endorsee, which would exempt him or her from receiving a security threat assessment under AFSP. Our review of compliance data from TSA’s PARIS database for fiscal year 2011 found that TSA inspectors have encountered and documented instances where foreign nationals attending flight school presented DOD endorsement letters to the flight training provider, which would indicate they are exempt from security threat assessment requirements. Additional details are considered sensitive security information.
Flight School Compliance with Requirements TSA’s fiscal year 2011 Compliance Work Plan for Transportation Security Inspectors requires that a minimum of one comprehensive inspection per year be performed on each of the approximately 7,000 known flight training providers. The work plan was revised in 2011 to require a minimum of two comprehensive inspections per year for each of the 4,500 certified flight instructors who train foreign students, and TSA’s program manager stated that the agency was able to inspect all of these entities at least twice in 2011. In general, the inspection process requires inspectors to, among other things, review documents maintained by the flight training provider, including the flight training records of both U.S. citizens and alien flight students, and also ensure that foreign students have registered with TSA’s AFSP database and were granted permission from TSA to begin flight training. The results of the inspections are to be reported in TSA’s PARIS database consistent with the reporting requirements of the work plan and other TSA guidance. As warranted, any follow-up inspections are to be performed based on findings made during the inspection process. As of January 2012, inspection results show that the rate of compliance with AFSP requirements increased from 89 percent in fiscal year 2005 to 96 percent in fiscal year 2011. TSA officials attribute the increase in compliance to a better understanding of AFSP requirements by flight training providers, among other things. Agency data also illustrate that the reasons for noncompliance among providers varied. For example, in fiscal year 2011, the reasons for noncompliance included violations such as missing photographs of foreign students, which occurred in 9 percent of 1,800 inspections for this item. In 7 percent of about 2,800 inspections, providers did not document and retain employee records related to completion of the required Security Awareness Training.
When inspectors checked for retention of records of U.S. citizenship by the flight training provider, the provider was not in compliance in about 5 percent of the nearly 2,800 inspections performed in this area. Compliance violations detected by inspectors were sometimes resolved either by counseling the flight training provider or by initiating an investigation of the incident, which could result in civil penalties being assessed. As part of its compliance inspection process, TSA inspectors also review records of documentation provided by U.S. citizens applying for flight training, which are maintained by flight training providers. TSA regulations governing AFSP require individuals claiming U.S. citizenship to provide one of the following documents, among other information, to flight training providers before accessing flight training:

- a valid, unexpired U.S. passport;
- an original or government-issued birth certificate and government-issued picture identification;
- an original certificate of birth abroad and a government-issued picture identification;
- an original certificate of U.S. citizenship with raised seal and government-issued picture identification; or
- an original U.S. Naturalization Certificate with raised seal and government-issued picture identification.

Flight school personnel are required to review the credentials presented by individuals claiming U.S. citizenship and to maintain records, and TSA inspectors, as part of the inspection process, review these records to ensure flight training provider compliance with regulatory requirements. Additional details are considered sensitive security information. Use of Criminal History Information We have previously reported on the challenges TSA faces in ensuring it has the necessary information and appropriate staffing to effectively conduct security threat assessments for screening and credentialing programs, which include AFSP.
As we reported in December 2011, criminal history record checks are a key element of the security threat assessment process for TSA’s screening and credentialing programs, helping to ensure that the agency detects those applicants with potentially disqualifying criminal offenses. However, as we reported, the level of access that TSA credentialing programs have to the Department of Justice’s FBI criminal history records is the level of access accorded for noncriminal-justice purposes (i.e., equal to that of a private company doing an employment check on a new applicant, according to TSA), which limits TSA in accessing certain criminal history data related to charges and convictions. TSA said that it had been difficult to effectively and efficiently conduct security threat assessment adjudication of criminal history records because of the limited access it has as a noncriminal justice-purpose requestor of criminal history records—and that this limitation had increased the risk that the agency was not detecting potentially disqualifying criminal offenses. We reported that while TSA was seeking criminal justice-type access to FBI systems, the FBI reported that it is legally unable to provide this access. The FBI and TSA were collaborating on options, but had not identified the extent to which a potential security risk may exist under the current process, and the costs and benefits of pursuing alternatives to provide additional access. In December 2011, we recommended that TSA and the FBI conduct a joint risk assessment of TSA’s access to criminal history records. DHS concurred with this recommendation and indicated it would work with the Department of Justice to assess the extent of security risk, among other things, and evaluate the costs and benefits of each alternative. 
In response to our recommendations, the FBI reported that it was pursuing several strategies to provide TSA with access to the most complete criminal history information available for noncriminal justice-related purposes, including reaching out to states that do not provide criminal history records for noncriminal justice purposes as well as working to develop technical solutions. As of February 2012, TSA officials indicated that they are continuing to work with the FBI to address our recommendation. TSA officials responsible for overseeing security threat assessments stated that the process for conducting criminal history record checks for AFSP is substantively the same as that used for other TSA screening and credentialing programs. While there is no information indicating that any foreign nationals seeking flight training should not have been allowed to do so because of unidentified criminal offenses, we believe that TSA should continue to work with the FBI on joint risk assessments of TSA’s access to criminal history records for credentialing programs, including AFSP. Immigration Violations There have been instances of overstays or other immigration-related violations for foreign nationals taking flight training in the United States, most notably for three of the September 11 hijackers. Specifically, three of the six pilots and apparent leaders were out of status on or before September 11, including two in overstay status. AFSP was implemented to help address such security concerns. As previously discussed, as part of AFSP, TSA conducts security threat assessments for foreign nationals requesting flight training in the United States. 
According to TSA officials, the purpose of the security threat assessment, which includes a check of the Terrorist Screening Database and a criminal history records check, is to determine whether the foreign national requesting flight training presents a security threat; the checks are not designed to determine whether an applicant is in the country legally. As part of the security threat assessment, TSA also conducts reviews of DHS’s TECS database to determine if any negative immigration-related information is associated with the foreign national seeking flight training. However, TSA officials acknowledged that it is possible for a foreign national to be approved by TSA through AFSP and to complete flight training after entering the country illegally or overstaying his or her allotted time to be in the country legally. In 2010, ICE investigated a Boston-area flight school after local police stopped the flight school owner for a traffic violation and discovered that he was in the country illegally. Twenty-five of the foreign nationals at this flight school had applied to AFSP and had been approved by TSA to begin flight training after their security threat assessment was completed; however, the ICE investigation and our subsequent inquiries revealed the following issues:

- Eight of the 25 foreign nationals who received approval by TSA to begin flight training were in “entry without inspection” status, meaning they had entered the country illegally. Six of these foreign nationals were later arrested by ICE as a result of the investigation. TSA indicated that 1 individual had been approved to begin flight training at two other schools, although the flight schools indicated that he did not complete training.

- Three of the 8 foreign nationals in “entry without inspection” status obtained FAA airman certificates: 2 held FAA private pilot certificates and 1 held an FAA commercial pilot certificate.
- Seventeen of the 25 foreign nationals who received approval by TSA to begin flight training were in “overstay” status, meaning they had overstayed their authorized period of admission into the United States. Sixteen of these were arrested by ICE as a result of the investigation.

- Four of the 17 foreign nationals in “overstay” status obtained FAA airman certificates: 3 held FAA private pilot certificates and 1 held a commercial pilot certificate.

In addition, the flight school owner held two FAA airman certificates. Specifically, he was a certified Airline Transport Pilot (cargo pilot) and a Certified Flight Instructor. However, he had never received a TSA security threat assessment or been approved by TSA to obtain flight training. He had, however, registered with TSA as a flight training provider under AFSP. Further, TSA data indicated that an additional foreign national arrested for “entry without inspection” as a result of this flight school investigation had previously completed flight training through an airline. According to the AFSP program manager, TSA reviews TECS to determine if the student has prior immigration violations, including overstays. However, the program manager stated that this TECS review is not designed to determine how long the student is authorized to stay in the country or whether the student had entered the country legally. Rather, if the TECS review indicates that the foreign national has previous immigration-related violations, such as overstaying the authorized period of admission, TSA is to conduct additional TECS queries to determine if the individual is eligible to receive flight training. Further, according to TSA, prospective flight students may apply for AFSP before entering the United States, rendering moot the question of whether the foreign national had entered the country legally or overstayed.
The AFSP program manager stated that even though the foreign nationals were later found to be overstays, at the time of the review and adjudication of their security threat assessments, they were determined to be in legal status. According to TSA, none of the individuals that TSA processed and approved under AFSP had derogatory information within TECS, and visa overstay information is contained within TECS. However, ICE data we reviewed indicated that 16 of the 17 foreign nationals associated with the flight school who were found by ICE to be in overstay status at the time of the investigation had already been in overstay status at the time they received AFSP approval to begin flight training. This includes the 4 foreign nationals who were able to obtain FAA airman certificates. Further, the AFSP program manager stated that foreign nationals who may have entered the country illegally but who did not have prior immigration violations, did not have a criminal history, or were not on the terrorist watch list, could be successfully vetted through an AFSP security threat assessment and approved to receive flight training. The program manager added that under the current AFSP process, TSA cannot always determine at the time of application if an individual entered the United States “without inspection” (illegally) because applicants can apply to AFSP more than 180 days prior to the start date of training and applicants are not necessarily in the United States at the time of application. Senior officials from TSA and ICE stated that the agencies have initiated a process in which TSA and ICE check the names of AFSP applicants against the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program’s Arrival and Departure Information System (ADIS) to help address this gap, as well as to identify foreign nationals taking flight training who become overstays.
Specifically, in March 2011, TSA vetted a list of current alien flight students in TSA’s AFSP database against names in US-VISIT’s ADIS to determine if any were potential overstays. This review resulted in the identification of 142 possible overstays. In May 2011, TSA provided ICE with the results of its analysis, and ICE vetting further reduced the list of possible overstays to 22. In September and October of 2011, ICE initiated 22 investigations based on the results of this analysis, which resulted in three arrests. According to TSA and ICE officials, this initial matching of names in the AFSP database against ADIS was conducted once to give the agencies an indication of how many foreign nationals seeking flight training in the United States may be in violation of their immigration status and what the workload associated with conducting such matches would be. Information from this review could then be used to initiate investigations of individuals suspected of being in the country illegally, either because they overstayed their allotted time in the country or because they entered the country illegally. The TSA and ICE officials added, however, that such a process would have to be conducted more regularly to systematically identify foreign nationals taking flight training who may be in violation of their immigration status or who may have entered the country illegally. They stated that establishing a more regular process of matching names of foreign nationals in the AFSP database against ADIS would allow the agencies to better identify foreign nationals seeking flight training who have violated the terms of their admission as well as those who have entered the country illegally. However, several issues related to how a name matching program would work are being considered, such as which agency would vet names in the AFSP database against ADIS, and how frequently names associated with potential violations would be provided to ICE.
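The staged review described above (an initial match against ADIS that flagged 142 possible overstays, which ICE vetting narrowed to 22) can be sketched as two successive filters. The record layout and field names below are assumptions for illustration; the agencies’ actual ADIS matching criteria are not public.

```python
def identify_possible_overstays(afsp_students, adis_departures, ice_cleared):
    """Illustrative two-stage filter modeled on the 2011 review: match
    active AFSP students against ADIS departure records, then remove
    those that ICE vetting clears, leaving candidates for investigation."""
    # Stage 1: a student with no recorded ADIS departure, or one whose
    # departure postdates the authorized period of admission, is a
    # possible overstay. Dates are ISO strings, so string comparison works.
    possible = [
        s for s in afsp_students
        if adis_departures.get(s["id"]) is None
        or adis_departures[s["id"]] > s["admitted_until"]
    ]
    # Stage 2: ICE vetting removes individuals later found to be in status.
    return [s for s in possible if s["id"] not in ice_cleared]
```

Running such a match on a recurring schedule, rather than once, is essentially the more regular process the agencies were considering.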
ICE and TSA officials stated that they have not specified desired outcomes or time frames, or established performance measures to evaluate the success of the program. Standards for program management state that specific desired outcomes or results should be conceptualized, defined, and documented in the planning process as part of a road map, along with the appropriate steps and time frames needed to achieve those results. The standards also call for assigning responsibility and accountability for ensuring the results of program activities are carried out. Having a road map, with appropriate steps and time frames, and individuals assigned responsibility and accountability for assessing the pilot program, as well as for instituting that program if it is found to help identify foreign nationals taking flight training who may be in violation of their immigration status or who may have entered the country illegally, could help TSA and ICE account for flight students with potential immigration violations, and thus better position TSA to identify and prevent a potential risk. Conclusions Since our 2004 report on general aviation security, TSA has taken steps to enhance communications and interactions with general aviation industry stakeholders as well as improve the vetting of foreign nationals enrolling in U.S. flight schools. AFSP was implemented to help prevent future occurrences of foreign nationals obtaining flight training to commit terrorist attacks, as they did for the September 11, 2001, attacks. Key to the effectiveness of this effort is the ability of TSA to conduct meaningful security threat assessments on foreign nationals seeking flight training to help determine whether these individuals pose a security threat.
However, as shown in TSA’s analysis, there are discrepancies between the data found in FAA’s airmen registry and TSA’s AFSP database, raising questions about whether some foreign nationals with airman certificates (pilot’s licenses) have completed required security threat assessments. In addition, working with ICE to develop a plan that assigns responsibilities and accountability and time frames for assessing the joint TSA and ICE pilot program to identify foreign nationals who may have immigration violations—including those who entered the country illegally to obtain flight training—and instituting that program if it is found to be effective, could better position TSA and ICE to determine the benefits of checking TSA data on foreign nationals pursuing flight training in the United States. Recommendations for Executive Action To better ensure that TSA is able to develop effective and efficient security programs for general aviation operators, we recommend that the Administrator of TSA take the following action: Take steps to identify any instances where foreign nationals receive FAA airman certificates (pilot’s licenses) without first undergoing a TSA security threat assessment and examine those instances so that TSA can identify the reasons for these occurrences and strengthen controls to prevent future occurrences. 
To better ensure that TSA is able to identify foreign nationals with immigration violations who may be applying to the Alien Flight Student Program, we recommend the Secretary of Homeland Security direct the Administrator of TSA and the Director of ICE to collaborate to take the following action: Develop a plan, with time frames, and assign individuals with responsibility and accountability for assessing the results of a pilot program to check TSA AFSP data against information DHS has on applicants’ admissibility status to help detect and identify violations, such as overstays and entries without inspection, by foreign flight students, and institute that pilot program if it is found to be effective. Agency Comments and Our Evaluation We provided a draft of this report to the Departments of Homeland Security and Transportation for comment. DHS, in written comments received July 13, 2012, concurred with the recommendations and identified actions taken, planned, or under way to implement the recommendations. The Department of Transportation’s Deputy Director of Audit Relations stated in an e-mail received on June 4, 2012, that the department had no comments on the report. Written comments are summarized below and official DHS comments are reproduced in appendix III. In addition, DHS and DOT provided written technical comments, which we incorporated into the report, as appropriate. In response to our recommendation that TSA take steps to identify instances where foreign nationals receive FAA airman certificates (pilot’s licenses) without first undergoing a TSA security threat assessment, DHS stated that TSA receives a daily feed from FAA of all new FAA certificates issued, and that TSA vets these new certificate holders against the Terrorist Screening Database on a daily basis.
While this is a beneficial practice, we believe that it would be preferable for TSA to vet prospective flight students before they begin flight training, rather than after they have completed training and received a pilot’s certificate and are thus capable of flying an aircraft. In addition, while TSA vets the names of new certificate holders against the Terrorist Screening Database on a daily basis, the AFSP vetting process includes additional criminal history records checks and a check for derogatory immigration-related information. To help improve the AFSP vetting process, DHS also stated that TSA signed a memorandum of understanding with FAA in February 2012 to exchange data. The memorandum, which FAA signed in March 2012, outlines a process for FAA to provide certain data from its airmen registry on a monthly basis, via encrypted, password-protected e-mail, to a designated point of contact within TSA, and authorizes TSA to use the data to ensure flight training providers are providing TSA with applicant/candidate information in order to conduct the appropriate background check prior to flight instruction. This is an important first step toward addressing our recommendation, provided that TSA uses the data to identify instances where foreign nationals receive FAA airman certificates without first undergoing a TSA security threat assessment, identifies reasons for these occurrences, and strengthens controls to prevent future occurrences, as we recommended. In response to our recommendation that TSA and ICE collaborate and develop a plan with time frames for assessing the results of a pilot program to check TSA AFSP data against information DHS has on applicants’ admissibility status, and to institute that pilot program if it is found to be effective, DHS stated that TSA will prepare a plan by December 31, 2012, to assess the results of the pilot with ICE to determine the lawful status of the active AFSP population.
The plan is to include specific details on time frames and accountability and recommendations for next steps. We believe that these are positive actions that could help TSA address the weaknesses identified in this report and we will continue to work with TSA to monitor progress on the proposed solutions as the agency proceeds. In its comments, DHS also referred to additional recommendations related to TSA’s vetting of foreign nationals. Because DHS deemed the details of these recommendations and its response as sensitive security information, they are not included in the public version of this report. We are sending copies of this report to the Secretaries of Homeland Security and Transportation, the TSA Administrator, and appropriate congressional committees. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix IV. Appendix I: Scope and Methodology This report addresses the following questions: (1) What actions have the Transportation Security Administration (TSA) and general aviation aircraft operators taken to enhance security and how has TSA obtained information on the implementation of the operators’ actions? (2) To what extent has TSA ensured that foreign flight students seeking flight training in the United States do not pose a security threat? To address these questions, we examined laws and regulations— including provisions of the Aviation and Transportation Security Act (ATSA), Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act), and TSA regulations governing aircraft operators and the Alien Flight Student Program (AFSP)—related to the security of general aviation operations. 
We also interviewed representatives from six industry associations based on their participation in TSA’s Aviation Security Advisory Committee and on their focus on general aviation security issues: the American Association of Airport Executives, Aircraft Owners and Pilots Association, Experimental Aircraft Association, General Aviation Manufacturers Association, National Air Transportation Association, and National Business Aviation Association. We also interviewed officials from TSA’s Office of Security Operations, Office of Intelligence and Analysis, and Office of Security Policy and Industry Outreach, as well as U.S. Immigration and Customs Enforcement (ICE) and the Federal Aviation Administration (FAA). In addition, we conducted site visits and interviewed representatives from a nonprobability sample of 22 general aviation operators located at selected airports—including 5 private operators that operate at least one aircraft weighing more than 12,500 pounds, 7 private charter operators that also perform as private operators, and 10 flight schools—to observe and discuss security initiatives implemented. We selected these airports based on geographic dispersion (Southern California, North Texas, and Central Florida) as well as variation in the types of general aviation operations present (such as charter and private operations) and size of aircraft based at each airport. Because we selected a nonprobability sample of operators to interview, the information obtained cannot be generalized to all general aviation operators. However, the interviews provided important perspective to our analysis and corroborated information we gathered through other means. 
To identify actions TSA and general aviation aircraft operators have taken to enhance security and how TSA has obtained information on the implementation of the operators’ actions, we examined documentation on TSA’s inspection processes for monitoring aircraft operators’ implementation of security programs, including the Transportation Security Inspector Inspections Handbook, the National Investigations and Enforcement Manual, and the Compliance Work Plan for Transportation Security Inspectors. We also reviewed documentation related to aircraft operators’ implementation of voluntary security initiatives not covered by TSA security programs, such as guidance for TSA personnel who conduct outreach to general aviation operators. We reviewed a report prepared on behalf of DHS examining the potential damage that could be caused by different types of general aviation aircraft. We also reviewed the methodology and assumptions associated with this report and found them to be reasonable and well documented. Also, we reviewed National Safe Skies Alliance’s General Aviation Airport Vulnerability Assessment, which contains survey data on security measures implemented from a sample of general aviation airports, and TSA’s General Aviation Airport Vulnerability Briefing. We also interviewed TSA officials on efforts to interact with general aviation associations as a means to obtain information on security initiatives implemented by private general aviation operators, including the agency’s interaction with members of the Aviation Security Advisory Committee. We also interviewed TSA Federal Security Directors and Transportation Security Inspectors whose areas of operation encompass the airports we selected, as well as airport officials responsible for security at each airport.
Finally, we reviewed TSA data from fiscal years 2005 through 2011 on the compliance of general aviation operators and flight training providers covered by TSA security programs with those programs’ requirements. We chose these dates because they reflect the time frame after the publication of our previous report on general aviation security. For example, we obtained compliance data for general aviation operators covered under the Twelve-Five and Private Charter standard security programs stored in TSA’s Performance and Results Information System (PARIS) for fiscal years 2005 through 2011. We identified the frequency with which aircraft operators and flight training providers were reported to be in compliance with program requirements. As part of this work, we assessed the reliability of TSA data in PARIS by interviewing TSA officials and reviewing documentation on controls implemented to ensure the integrity of the data in the database and found the data to be sufficiently reliable for use in this report. To assess the extent to which TSA has ensured that foreign flight students seeking flight training in the United States do not pose a security threat, we reviewed our recent reports related to DHS security threat assessment processes, and TSA guidance related to procedures for conducting security threat assessments of several agency programs, including AFSP. We interviewed TSA officials who perform security threat assessments and inspections of flight training providers for AFSP to better understand program operations. To determine whether foreign nationals applying for FAA airman certificates had previously applied to AFSP and been vetted by TSA, we obtained data on foreign nationals from FAA’s Comprehensive Airmen Information System, also known as the airmen registry. 
Specifically, we obtained FAA airmen registry data, including names and dates of birth, on 25,599 foreign nationals applying for their first FAA airman private pilot, sport pilot, or recreational pilot certificate from January 2006 through September 2011. We selected these dates because 2006 was the first full year after TSA assumed responsibility for AFSP from the Department of Justice and September 2011 was the end of the fiscal year for our reporting period. The data did not include information on foreign nationals applying for FAA airman certificates based on an existing foreign airman certificate issued by another government, thus ensuring that the data we obtained were for foreign nationals who had obtained flight training in the United States and therefore would have been required to apply for vetting under AFSP. We provided the FAA airmen registry data to TSA so that the agency could conduct a matching process to determine whether the foreign nationals in the FAA airmen registry were in the AFSP database and the extent to which they had been successfully vetted through AFSP. As stated previously, TSA receives FAA airmen registry data on a daily basis; however, given the parameters we specified for matching FAA airmen registry data against the AFSP database, we provided TSA with the airmen registry data we had obtained from FAA to allow for easier review and analysis of TSA results. We found the FAA and TSA data and the approach, methodology, and results of the data matching process to be sufficiently reliable for our purposes. We used the results of TSA’s analysis to identify foreign nationals in the FAA airmen registry who were not in the AFSP database, and therefore not approved for flight training through AFSP, as well as foreign nationals who were in both the FAA airmen registry and the AFSP database but had not been successfully vetted through AFSP. 
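The matching process described above can be sketched as follows. This is a minimal illustration only: the match key (normalized name plus date of birth) and all field names are assumptions for the sake of the example, since TSA's actual matching logic is not described in this report.

```python
# Illustrative sketch of matching FAA airmen registry records against an
# AFSP database. Given airmen records (name, date of birth) and AFSP
# records, partition airmen into: never in AFSP, in AFSP but not
# successfully vetted, and successfully vetted. All field names and the
# match key are hypothetical.

def normalize(name):
    """Collapse case and extra whitespace so trivial variations still match."""
    return " ".join(name.upper().split())

def match_airmen(faa_records, afsp_records):
    # Index AFSP records by (normalized name, date of birth).
    afsp_index = {(normalize(r["name"]), r["dob"]): r for r in afsp_records}
    not_in_afsp, in_afsp_not_vetted, vetted = [], [], []
    for rec in faa_records:
        key = (normalize(rec["name"]), rec["dob"])
        afsp = afsp_index.get(key)
        if afsp is None:
            not_in_afsp.append(rec)          # never applied to AFSP
        elif not afsp.get("vetted", False):
            in_afsp_not_vetted.append(rec)   # applied but not successfully vetted
        else:
            vetted.append(rec)
    return not_in_afsp, in_afsp_not_vetted, vetted
```

In practice, name-plus-birthdate matching is sensitive to spelling variations and transliteration, which is one reason the report verifies the reliability of both data sets and of the matching approach separately.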
As part of this work, we also assessed the reliability of data in the FAA airmen registry as well as data in the AFSP database by interviewing FAA and TSA officials and reviewing documentation on controls implemented to ensure the integrity of the data in each database, and found both to be sufficiently reliable for use in this report. We also spoke to TSA inspection officials to discuss common issues associated with compliance inspections and efforts to address compliance deficiencies. We reviewed documentation on TSA compliance procedures for flight training providers participating in AFSP and reviewed TSA-compiled summary statistics on flight school compliance for fiscal years 2005 through 2011. We also performed an analysis of compliance data for flight training providers. We ascertained the reliability of AFSP inspection results derived from PARIS by interviewing TSA officials and reviewing documentation on controls implemented to ensure the integrity of the data in the database, and found the inspection data sufficiently reliable for use in this report. We also spoke with cognizant TSA and ICE officials to discuss the pre-pilot initiative under way with ICE to detect foreign nationals registered with AFSP who overstayed their period of admission in the country or entered the country illegally. We also reviewed documentation from an ICE investigation related to a Boston-area flight training provider. We compared the names of foreign nationals ICE identified in this investigation with the names of AFSP candidates assigned to the flight school to ascertain which of the AFSP candidates had undergone and passed a security threat assessment but were subsequently found via the ICE investigation to have either overstayed their admission period or entered the country without inspection. We also evaluated TSA’s efforts to assess risk for AFSP against Standards for Internal Control in the Federal Government. 
We conducted this performance audit from March 2011 through July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Examples of Federal, State, and Industry Efforts to Enhance General Aviation Security

Federal Efforts to Enhance General Aviation Security

Risk assessments: TSA has conducted or commissioned five assessments examining threats, vulnerabilities, and consequences associated with potential terrorist use of general aviation aircraft. For example, in May 2007, TSA and the Homeland Security Institute published an assessment of, among other things, the potential destructive capability of various sizes of general aviation aircraft. In November 2010, TSA released its assessment of vulnerabilities associated with general aviation airports.

In 2003 and 2004, TSA and the Aviation Security Advisory Committee developed guidelines or best practices designed to establish nonregulatory security standards for general aviation airport security. These guidelines are based on industry best practices and an airport characteristic measurement tool that allows airport operators to assess the level of risk associated with their airport to determine which security enhancements are most appropriate for their facility. According to the Acting General Manager for General Aviation, the committee is in the process of updating these guidelines, with an expected release in mid-2012.

TSA implemented a hotline (1-866-GA-SECURE, or 1-866-427-3287) in December 2002, which allows individuals to report suspicious activities to a central command structure. 
Pursuant to FAA regulations, general aviation operations are generally prohibited within a 15-mile area of the Washington, D.C., metropolitan area unless otherwise authorized by TSA. This limits access at Potomac Airpark, Washington Executive/Hyde Field, and College Park Airport (referred to as the “Maryland-3”) to only cleared and vetted pilots operating in compliance with specific flight planning and air traffic control procedures. TSA advises FAA to impose airspace restrictions at various locations throughout the United States to limit or prohibit aircraft operations in certain areas when intelligence officials report heightened security sensitivity. This includes the Air Defense Identification Zone around Washington, D.C., and restrictions that are put into effect when the President travels outside of Washington, D.C. FAA has used Flight Data Center NOTAMs to advertise temporary flight restrictions and warn of airport closures.

Aircraft weighing more than 12,500 pounds in scheduled or charter service that carry passengers or cargo or both, and that do not fall under another security program, must implement a “Twelve-Five” standard security program, which must include, among other elements, procedures for bomb or air piracy threats.

FAA, in July 2003, discontinued issuing paper airman certificates and began issuing certificates that incorporate a number of security features reducing the ability to create counterfeit certificates. The new certificates are made of high-quality plastic card stock and include micro printing, a hologram, and an ultraviolet-sensitive layer. An FAA rule, adopted in October 2002, requires a pilot to carry government-issued or other photo identification acceptable to the FAA Administrator along with the pilot certificate when operating an aircraft. 
Requirement to notify FAA of aircraft transfers: FAA, in February 2008, issued a final rule requiring those who transfer ownership of U.S.-registered aircraft to notify the FAA Aircraft Registry within 21 days of the transaction.

The National Business Aviation Association proposed a security protocol for Part 91 operators, enabling operators with a TSA Access Certificate to operate internationally without the need for a waiver. TSA launched a pilot project in cooperation with the National Business Aviation Association with Part 91 operators at Teterboro Airport in New Jersey and later expanded the pilot to two additional airports.

Education/outreach efforts

Airport Watch: The Aircraft Owners and Pilots Association implemented the Airport Watch program to help increase security awareness. The program includes warning signs for airports, informational literature, and a training videotape to educate pilots and airport employees on potential security enhancements for their airports and aircraft. It helped to increase awareness of TSA’s centralized toll-free 1-866-GA-SECURE (1-866-427-3287) hotline.

General aviation security educational materials: The Experimental Aircraft Association distributed Airport Watch videotapes and other educational materials concerning security practices and airspace restrictions. The National Agricultural Aircraft Association produced the Professional Aerial Applicators Support System, an annual education program that addresses security of aerial application operations. It is presented at state and regional agricultural aviation association meetings throughout the country. The National Air Transportation Association, on September 24, 2001, issued a series of recommended security procedures for all aviation businesses through its Business Aviation Security Task Force. The recommendations focused on immediate steps to be taken, plus longer-term actions. 
Examples included signage, appointing a single manager responsible for security at all locations, developing a “security mission statement,” methods to verify identification, seeking local law enforcement assistance to develop a security plan, and a host of others, including an advisory poster that was created and distributed free to all association members. FAA, in January 2002, issued a number of recommended actions addressing security for flight schools and those renting aircraft. These recommendations are designed to provide security against the unauthorized use of a flight school or rental aircraft. The National Association of State Aviation Officials, in December 2002, submitted to federal and state authorities a document outlining general aviation security recommendations. This included securing unattended aircraft, developing a security plan, and establishing a means to report suspicious activity. In addition, airports should establish a public awareness campaign, perform regular inspections of airport property, and control movement of persons and vehicles in the aircraft operating area. The U.S. Parachute Association disseminated security recommendations to its 219 skydiving clubs and centers across the United States, most of them based on general aviation airports. Some recommendations were aimed at ensuring security of jump aircraft during operations, as well as periods when aircraft are idle. The General Aviation Manufacturers Association, in conjunction with the U.S. Department of the Treasury, worked to help aircraft sellers identify unusual financial transactions. The publication entitled Guidelines for Establishing Anti-Money Laundering Procedures and Practices Related to the Purchase of General Aviation Aircraft was developed in consultation with manufacturers, aviation-finance companies, used-aircraft brokers, and fractional ownership companies. 
Examples of state efforts to improve general aviation security

Security plan for publicly owned airports (Alabama): All publicly owned general aviation airports in Alabama must prepare and implement a written security plan that is consistent with TSA’s May 2004 Security Guidelines for General Aviation Airports. The plan was to be submitted and on file by January 1, 2006, with the Aeronautics Bureau of the Alabama Department of Transportation in order for the airport to be eligible to receive a state-issued airport improvement grant. Florida requires that certain public-use general aviation airports implement a security plan consistent with guidelines published by the Florida Airports Council.

Airport security enhancements (New Jersey): New Jersey requires that all aircraft parked or stored more than 24 hours be secured by a two-lock system, that hangar doors have working locking devices and be closed and locked when unattended, that permanent signs providing emergency contact phone numbers be posted where specified, and that communications equipment provided by the Division of Aeronautics for emergency notification by the division or law enforcement agencies be available.

Background checks for flight students (New York): New York law requires flight students to complete a criminal background check and wait for written permission to be sent to his or her flight school before beginning flight training. Airports must also register with the state and supply contact information and a security plan consistent with TSA’s May 2004 Guidelines for General Aviation Airports.

State troopers provide airports with security audits (Virginia): Virginia trained selected state troopers to provide airports with security audits at no charge to the airport operator.

Security assessment of public-use general aviation airports (Washington): Washington contracted with a consultant to perform a security assessment of public-use general aviation airports. 
Appendix III: Comments from the Department of Homeland Security

Appendix IV: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Jessica Lucas-Judy, Assistant Director, and Robert Rivas, Analyst-in-Charge, managed this assignment. Erika D. Axelson, Orlando Copeland, Katherine Davis, Gloria Hernandez-Saunders, Adam Hoffman, Richard Hung, Mitchell Karpman, Stanley Kostyla, Thomas Lombardi, Marvin McGill, Jessica Orr, Anthony Pordes, Minette Richardson, and Robert Robinson made significant contributions to this report.
U.S. government threat assessments have discussed plans by terrorists to use general aviation aircraft—generally, aircraft not available to the public for transport—to conduct attacks. Also, the September 11, 2001, terrorists learned to fly at flight schools, which are within the general aviation community. TSA, within DHS, has responsibilities for general aviation security, and developed AFSP to ensure that foreign students enrolling at flight schools do not pose a security threat. GAO was asked to assess (1) TSA and general aviation industry actions to enhance security and TSA efforts to obtain information on these actions and (2) TSA efforts to ensure foreign flight students do not pose a security threat. GAO reviewed TSA analysis comparing FAA data from January 2006 to September 2011 on foreign nationals applying for airman certificates with AFSP data, and interviewed 22 general aviation operators at eight airports selected to reflect geographic diversity and variations in types of operators. This is a public version of a sensitive security report GAO issued in June 2012. Information TSA deemed sensitive has been omitted, including two recommendations on TSA’s vetting of foreign nationals.

The Transportation Security Administration (TSA) and aircraft operators have taken several important actions to enhance general aviation security, and TSA is gathering input from operators to develop additional requirements. For example, TSA requires that certain general aviation aircraft operators implement security programs. Aircraft operators under these programs must, among other things, develop and maintain TSA-approved security programs. TSA has also conducted outreach to the general aviation community to establish a cooperative relationship with general aviation stakeholders. 
In 2008, TSA developed a proposed rule that would have imposed security requirements on all aircraft over 12,500 pounds, including large aircraft that Department of Homeland Security (DHS) analysis has shown could cause significant damage in an attack. In response to industry concerns about the proposed rule’s costs and security benefits, TSA is developing a new proposed rule. Officials from all six industry associations GAO spoke with stated that TSA has reached out to gather industry’s input, and three of the six associations stated that TSA has improved its efforts to gather input since the 2008 notice of proposed rulemaking. TSA vets foreign flight student applicants through its Alien Flight Student Program (AFSP), but weaknesses exist in the vetting process and in DHS’s process for identifying flight students who may be in the country illegally. From January 2006 through September 2011, more than 25,000 foreign nationals had applied for Federal Aviation Administration (FAA) airman certificates (pilot’s licenses), indicating they had completed flight training. However, TSA’s computerized matching of FAA data determined that some of these foreign nationals did not match records in TSA’s database, raising questions as to whether they had been vetted. In addition, AFSP is not designed to determine whether a foreign flight student entered the country legally; thus, a foreign national can be approved for training through AFSP after entering the country illegally. A March 2010 U.S. Immigration and Customs Enforcement (ICE) flight school investigation led to the arrest of six such foreign nationals, including one who had a commercial pilot’s license. As a result, TSA and ICE jointly worked on vetting names of foreign students against immigration databases, but have not specified desired outcomes and time frames, or assigned individuals with responsibility for fully instituting the program. 
Having a road map, with steps and time frames, and assigning individuals the responsibility for fully instituting a pilot program could help TSA and ICE better identify and prevent potential risk. The sensitive security version of this report discussed additional information related to TSA’s vetting process for foreign nationals seeking flight training.
Background

Interagency contracting is designed to leverage the government’s aggregate buying power and simplify procurement of commonly used goods and services. This contracting method has allowed agencies to meet the demands for goods and services at a time when they face growing workloads, declines in the acquisition workforce, and the need for new skill sets. Interagency contracts are awarded under various authorities and can take many forms. They typically are used to provide agencies with common goods and services, such as office supplies or information technology services. In other cases, they may be used to fill specialized requirements, particularly if the other agency providing the contract support services has unique expertise in a particular type of procurement. Agencies that award and administer interagency contracts usually charge a fee to support their operations. There are two main methods of interagency contracting: direct and assisted. For direct acquisitions, rather than going through the process to award a new contract—soliciting offers, evaluating proposals, and awarding the contract—contracting officers at agencies can place orders directly on contracts already established by other agencies. With assisted acquisitions, customer agencies can obtain contracting services from other agencies, whose contracting officers place and administer orders on the customer agencies’ behalf. Assisted acquisitions can use interagency acquisition agreements (IAA) to document and establish general terms and conditions governing relationships between the customer agencies, which need the goods or services, and the servicing agencies, which provide the contracting services. Responsibility for acquisition policy and management at State is shared by two offices within the Bureau of Administration—the Office of the Procurement Executive (OPE) and the Office of Acquisitions Management (AQM), as shown in figure 1. 
OPE is responsible for establishing acquisition policy at State. This responsibility includes prescribing and implementing acquisition policies, regulations, and procedures; managing State’s procurement reporting system; appointing contracting officers; and establishing a system for measuring the performance of State contracting offices. AQM is responsible for providing a full range of contracting services to support activities across State, including acquisition planning, contract negotiations, cost and price analysis, and contract administration. Acquisition officials in OPE and AQM stated that they work closely on many acquisition activities, but there is no direct reporting relationship between the two. While AQM is by far the largest contracting office within State, other domestic bureaus and offices have varying degrees of contracting authority. Additionally, 277 of State’s overseas posts have limited authority to conduct contracting activities in support of the bureaus and program office activities carried out at each location. Finally, two additional contracting offices, known as Regional Procurement Support Offices, report to AQM and provide contracting services to the overseas posts. These offices operate as working capital funds, charging a fee to the overseas posts and other organizations in exchange for providing contracting services. In addition to AQM and its regional support offices, only those bureaus and posts with contracting authority can conduct direct interagency contracting. However, all bureaus and posts can use assisted interagency contracting, relying on contracting officers at other agencies to conduct procurements. In response to an increase in the amount of acquisition dollars going to contract servicing agencies, the Under Secretary of State for Management issued a memorandum in May 2002 describing the State First policy. 
The policy was incorporated into the Department of State Acquisition Regulations (DOSAR) and clarified later by implementing guidance. This policy directs domestic bureaus and offices to first use the services of AQM or another appropriate State contracting activity before transferring funds to another agency to conduct an acquisition. The policy states further that domestic bureaus or offices may only transfer funds to another agency for contracting services after obtaining a waiver from AQM. Application of this policy is limited to assisted interagency contracting actions. Instances in which a State contracting officer directly places an order on another agency’s contract are not subject to the policy. Additionally, the State First policy does not apply to assisted interagency contracting activities conducted by overseas posts. The State First policy instructs requesting bureaus to provide information about the proposed interagency contract action, including a description of the requirement and contracting services to be provided by the other agency, the estimated dollar value, the number of option years, the reason for using the other agency, and the amount of any surcharge or fee to be charged by the other agency for its contracting services. AQM, in consultation with OPE, is to review a bureau’s request and either issue a waiver allowing it to proceed with the proposed interagency contracting activity or decline the request and direct the bureau to the appropriate State contracting office for assistance, as described in figure 2. The State First policy also provides AQM with the authority to grant blanket waivers for future acquisitions involving the same item so that bureaus do not need to request an individual waiver each time they need to procure that item. For instance, the policy cites the acquisition of ammunition through DOD as an example of this type of recurring need that could be covered by a blanket waiver. 
State Has Limited Insight into Its Use of Interagency Contracting

The Department of State has limited insight into the extent to which it uses interagency contracting. A key governmentwide data system does not fully capture information on interagency contracting, and State’s internal systems do not comprehensively track its use of these contracts. While State reported to us over $800 million in direct and assisted interagency contract actions in fiscal year 2006, these data were incomplete, and reported data were missing basic information in many cases. We have previously reported that the lack of reliable information on interagency contracts inhibits agencies from making sound contracting decisions and engaging in good management practices.

State Cannot Rely on Governmentwide or Internal Agency Data for a Comprehensive View of Its Use of Interagency Contracts

The Federal Procurement Data System-Next Generation (FPDS-NG), the federal government’s primary database for procurement actions, is not a reliable source of information on interagency contracts. We have reported in the past on difficulties in obtaining data and generating reports on interagency contracting using FPDS-NG. Similarly, the State Procurement Executive explained to us that it is difficult to extract interagency contracting data from FPDS-NG and that there is no single report that comprehensively identifies uses of interagency contracting. For assisted interagency actions, the servicing agency is responsible for entering data into FPDS-NG, but such entries do not always indicate that actions involve interagency contracts. If a contracting officer at another agency placed an order for State, that agency—not State—would be responsible for recording the order in FPDS-NG, and the fact that the order was done for State would not necessarily be recorded. While the servicing agency can enter a funding agency in FPDS-NG, it may identify itself as the funding agency instead. 
For example, we identified records in FPDS-NG for certain contract actions entered into by DOD for State that listed DOD as the funding agency. A DOD official told us that once funds are transferred to DOD, they lose their association with the funding agency. For direct contract actions, in which State contracting officers placed the orders and recorded the transactions in FPDS-NG, there is no data field that reliably indicates that these actions involved an interagency contract. In addition, State cannot rely on the data systems used by its central procurement and financial offices to provide complete information on its use of interagency contracting. AQM maintains a procurement data system; however, bureau officials told us that not all bureaus with contracting authority use this system and that assisted acquisitions where the contracting officer is at another agency are not recorded in this system. For example, a State official noted that a bureau that reported to us significant use of assisted interagency contracting does not use this system. The State Procurement Executive acknowledged the limitations of this system, noting it would be difficult to use it to identify interagency contracts. Further, State’s accounting system cannot be used to identify many interagency contracting actions. State officials explained that for direct actions, the accounting system does not record whether an interagency contract was used. Similarly, the officials said that for assisted actions, a “miscellaneous” data field that captures a variety of information may, but does not always, indicate that the transfer of funds to another agency is for a contract. While State officials told us that the most reliable way to identify interagency contract actions would be to request a list of these actions from each bureau and overseas post, several bureaus and posts had difficulty responding to our request for such information. 
For example, one bureau, which has used assisted interagency contracts, noted that the bureau had no reasonable means of obtaining information on its assisted interagency contract actions. In some cases, bureaus did not have a central point of contact responsible for tracking interagency contracts and many bureaus reported reviewing paper files to assemble the requested information on their assisted actions. Additionally, a procurement official expressed concern about another bureau’s lack of information on interagency contracts, noting that when she needed basic information, such as the amounts obligated by the bureau on these contracts, she was directed to the servicing agencies. Similar challenges were experienced in 2005 when State’s Office of Inspector General conducted a related review and sought to identify bureaus’ use of interagency contracts. The official who led that review told us he found that it was generally difficult for bureaus to compile data on interagency contracts and that a number of bureaus continually identified new contract actions throughout the course of the review.

State-Reported Data on Interagency Contract Actions Were Incomplete, and Reported Actions Were Missing Information in Many Cases

In the absence of a data system that reliably identifies State’s interagency contracts, we requested information on all interagency contract actions of at least $25,000 conducted in fiscal year 2006 from 53 State bureaus and overseas posts. Fifty-two of these bureaus and posts reported a total of over $800 million in interagency actions—$577.2 million for direct actions and $234.3 million for assisted actions (see app. II for more details on the data reported to us by State). However, we found that at least 13 of these bureaus provided incomplete data. 
In these cases, data from a servicing agency or FPDS-NG indicated that a particular servicing agency assisted a State bureau with interagency contracting in fiscal year 2006, but that bureau did not report any actions with that servicing agency. Based on our comparison of data State reported with data obtained from five servicing agencies and FPDS-NG, we identified at least $186 million in assisted interagency contracting that State did not report. Most notably, DOD reported assisting State’s Bureau of Near Eastern Affairs in performing nearly $144 million in contracting for logistics support in Iraq that was not included in State’s data. Furthermore, in many cases the interagency actions that were reported by State were missing basic information that would be needed for managing contracts and achieving good acquisition outcomes. For example, bureaus were not always able to identify the contractor for particular actions, and one bureau that reported over $26 million in assisted interagency contracting was not able to provide us with the contract or order numbers for many of the actions. Also, in some cases, obligation amounts reported by bureaus differed widely from those reported by servicing agencies or in FPDS-NG. For example, in one case, a State bureau reported placing over $15 million on an assisted action, while the servicing agency reported actions totaling $9.8 million on the same contract and order number. In another case, a State bureau reported a lower dollar value than the servicing agency, with State reporting a single action of $25,000 and the servicing agency reporting multiple actions totaling $471,000 for the same order. Because of such discrepancies, we were unable to verify the accuracy of a significant portion of State’s reported data, particularly for assisted actions. 
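The kind of reconciliation described above can be sketched as a simple comparison of obligations keyed by contract or order number, flagging both actions a bureau did not report at all and actions whose reported totals diverge from servicing-agency records. The data structures and field names here are illustrative assumptions; GAO's actual comparison was performed against FPDS-NG records and data supplied by five servicing agencies.

```python
# Illustrative sketch: reconcile obligations reported by a bureau against
# amounts reported by servicing agencies for the same contract/order
# number. Both inputs are assumed to be dicts mapping an order identifier
# to a total obligation in dollars (hypothetical shape, for illustration).

def reconcile(bureau_reported, servicing_reported, tolerance=0.0):
    unreported = {}   # orders the servicing agency reported but the bureau did not
    mismatched = {}   # orders where the two reported totals diverge
    for order, svc_total in servicing_reported.items():
        bureau_total = bureau_reported.get(order)
        if bureau_total is None:
            unreported[order] = svc_total
        elif abs(bureau_total - svc_total) > tolerance:
            mismatched[order] = (bureau_total, svc_total)
    return unreported, mismatched
```

Run against the figures cited in this section (a $15 million bureau entry versus $9.8 million from the servicing agency, $25,000 versus $471,000, and $144 million of unreported logistics support), such a comparison surfaces both categories of discrepancy at once.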
A Lack of Comprehensive and Reliable Information Inhibits Agencies from Making Sound Contracting Decisions and Engaging in Good Management Practices We have previously reported that agencies may not be able to make sound contracting decisions or engage in good management practices without comprehensive and reliable data on interagency contracting and the related costs and fees. Without such data, agencies cannot conduct analyses to determine if the use of such contracts is in their best interests or if there are opportunities for savings. For example, we reported in 2005 that DOD had difficulty making informed decisions about the use of other agencies’ contracting services because its financial systems did not collect data on interagency contracting. In 2006, we also found that the Department of Homeland Security (DHS) did not systematically monitor its spending on interagency contracts. As a result, it did not know what fees it was paying to other agencies to award contracts on its behalf and whether it could achieve savings through alternative contracting methods. Similarly, without access to complete and reliable data on its use of interagency contracting, State does not have the information needed to manage its use of interagency contracts to achieve good outcomes and ensure that it is receiving value for fees it pays to other agencies. State Cannot Ensure That Decisions to Use Assisted Interagency Contracting Are Being Made by the Appropriate Acquisition Officials Due to the way the State First policy has been implemented, State cannot ensure that decisions to use assisted interagency contracts are being made by OPE and AQM officials as called for by the policy. These acquisition officials often lack awareness of or involvement in decisions to use assisted interagency contracts for three main reasons. First, these officials have broadly exempted a number of assisted interagency contracting actions from the requirement to seek a State First waiver. 
Second, State’s bureaus have varying interpretations of when they need to obtain waivers for proposed assisted interagency contracting activities. Third, State acquisition officials have no mechanism to ensure that bureaus comply with the State First policy, relying primarily on the bureaus to voluntarily submit requests for State First waivers. Broad Exemptions Limit Ability to Evaluate the Use of Assisted Interagency Contracts State acquisition officials have broadly exempted a number of assisted interagency contracting actions from the State First waiver process. By creating these broad exemptions, acquisition officials are not fully aware of bureaus’ use of assisted interagency contracting. The exemptions apply to bureaus that are among the largest users of assisted interagency contracting. OPE issued guidance in 2005 stating that the State First policy does not apply to proposed funds transfers conducted under the Foreign Assistance Act. The Procurement Executive explained to us that this exemption from needing a waiver was intended to apply only to transfers of funds under the Foreign Assistance Act where another agency was responsible for carrying out the program. He said that bureaus should still seek State First waivers when transferring funds under the Foreign Assistance Act if the transfer is so the other agency can purchase goods or services for State. However, AQM and some bureau officials have interpreted and applied the guidance in a different way. The Director of AQM told us that the exemption from needing a waiver applies to all actions—including assisted interagency contracting—funded under the Foreign Assistance Act. Officials in the bureaus of Diplomatic Security (DS) and International Narcotics and Law Enforcement Affairs (INL) informed us that because of this exemption they do not seek State First waivers for assisted contract actions conducted under this Act. 
For example, officials in INL did not seek a waiver for an order for aviation support, issued in 2006 by DOD on their behalf and valued at approximately $51 million. Both DS and INL reported using assisted interagency contracting extensively compared to other bureaus, and DS and INL officials stated that the Foreign Assistance Act is one of the chief authorities under which they transfer funds to another agency for contracting services. As a result of a series of decisions, acquisition officials have also exempted a potentially large amount of DS’s assisted interagency contracting activity from review under the State First policy. Following the initial establishment of the State First policy, AQM exempted much of DS’s assisted interagency contracting activity from the policy. Then in January 2006, acquisition officials met with bureau officials to clarify application of the State First policy. Acquisition officials agreed to exempt assisted interagency contracting activities carried out under existing interagency acquisition agreements from review under the State First policy but stipulated that new IAAs would need to be reviewed. A bureau official told us that, at this meeting, she informed the acquisition officials that many of the bureau’s IAAs with servicing agencies did not have expiration dates. As a result, new requirements could continue to be fulfilled under existing IAAs without State First review. For example, DS placed a new task order in 2006 through another agency under an IAA signed in 2001—this order was not reviewed under State First. While aware of DS’s exemption, State’s Procurement Executive noted that the State First policy was designed to review such task orders to ensure that using another agency’s contracting services was in State’s best interest. 
Bureaus Differ in When They Seek a State First Waiver Bureaus within State have different interpretations of when they should seek the approval of the appropriate acquisition officials to initiate assisted interagency contracting activities. Some bureaus request State First waivers for individual contract actions related to specific requirements, such as issuing a new task order. In one case study we reviewed involving the Bureau of Population, Refugees, and Migration, a program official sought a waiver under the State First policy to have another agency issue a new contract action to continue fulfilling the program’s requirements. Similarly, an INL program official sought a State First waiver to use DOD’s contracting services to fulfill a new requirement, prior to the 2005 exemption for Foreign Assistance Act activities. DS, however, does not typically seek waivers under the State First policy for individual task orders or requirements initiated under IAAs. Instead, it is DS officials’ understanding that the overarching IAA with the servicing agency, rather than the individual requirement, requires approval under the State First policy. DS has used IAAs broadly to establish relationships with other agencies, and these IAAs can encompass many requirements, multiple contract actions, and several fiscal years. This practice, compounded by the exemption for DS’s IAAs entered into prior to 2006, has precluded much of DS’s interagency contracting activity from review under State First. Neither bureau officials nor acquisition officials identified a process to review long-standing agreements over time to determine whether changes have occurred or whether it is still appropriate for State to continue paying another agency for contracting support. 
For example, an IAA with one servicing agency was signed in 2001, and the servicing agency reported that it issued 128 new task orders under this IAA between December 2001 and February 2008, none of which was reviewed under State First. Because this IAA was never reassessed, DS officials thought they were paying a 2.3 percent fee for all actions under this agreement, but the actual fee charged had been raised since 2001. Based on our analysis of servicing agency data, since October 2004, the average fee paid across all contract actions under this IAA was 3.3 percent—meaning State paid almost $160,000 more in fees than DS officials thought they were paying. Acquisition Officials Lack Mechanisms to Monitor Compliance with the State First Policy State acquisition officials do not have mechanisms in place to ensure that bureaus are complying with the State First policy. According to the acquisition officials, they do not monitor compliance and are reliant on bureaus to voluntarily request waivers before using assisted interagency contracts. In the absence of such requests, they have no other way to obtain reliable information about bureaus’ use of assisted interagency contracts. For instance, because State does not comprehensively track its use of interagency contracting, acquisition officials cannot conduct queries to identify actions that should have been reviewed under State First. Further, they have no way to determine the extent to which bureaus have conducted procurements under various exemptions or whether bureaus have applied the exemptions appropriately. As a result, acquisition officials cannot independently determine whether the five waivers requested in fiscal year 2006 were an accurate reflection of assisted interagency contracting for that year. Problems with State First compliance have previously been reported. In 2005, the State Inspector General found that 16 of the 19 domestic bureaus and offices included in its review did not comply with the policy. 
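The fee discrepancy described above can be illustrated with a short calculation. This is only a sketch: the total obligation figure below is hypothetical, back-calculated so that the one-percentage-point gap between the assumed 2.3 percent fee and the 3.3 percent average actually charged yields roughly the $160,000 difference the report identifies.

```python
# Hedged illustration of the IAA fee discrepancy described above.
# The total obligation amount is hypothetical; it is chosen so that a
# one-percentage-point fee difference produces roughly $160,000.

expected_rate = 0.023        # fee DS officials believed applied (2.3%)
actual_rate = 0.033          # average fee actually charged since Oct 2004 (3.3%)
total_obligations = 16_000_000  # hypothetical obligations under the IAA

expected_fees = total_obligations * expected_rate
actual_fees = total_obligations * actual_rate
difference = actual_fees - expected_fees

print(f"Expected fees: ${expected_fees:,.0f}")  # $368,000
print(f"Actual fees:   ${actual_fees:,.0f}")    # $528,000
print(f"Difference:    ${difference:,.0f}")     # $160,000
```

The point of the sketch is that even a seemingly small, unnoticed fee increase compounds into a substantial cost when applied across all actions under a long-running agreement, which is why periodic reassessment of standing IAAs matters.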
The State Inspector General also reported that budget and financial officers in 9 of the 19 bureaus and offices indicated that they had no knowledge of the State First policy or its requirements. The State Inspector General noted that better compliance with State First could result in lower contract costs and reduced administrative costs associated with the contracts. The Procurement Executive issued additional guidance on the State First policy as a result of the Inspector General’s findings but informed us that acquisition officials have not reviewed compliance since then to determine whether it has improved. Officials in AQM said they believe the State Inspector General reviews compliance with the policy as part of its regular bureau inspections. However, an official from the State Inspector General’s office said that this is not part of the office’s routine monitoring activities. State’s Policies Do Not Ensure Contract Oversight for Assisted Interagency Contracts State’s policies do not ensure that responsibilities for overseeing contractor performance on its assisted interagency contracts are assigned to appropriately trained individuals. Contracting officers’ representatives (COR) play a key role at State in overseeing contractor performance, although the decision of whether to appoint a COR is at the contracting officer’s discretion. When CORs are appointed by State contracting officers, State acquisition regulations require contracting officers to outline the scope of the COR’s authority in an appointment memorandum to be maintained in the contract file. These regulations further specify that only State employees with adequate training and experience may serve as CORs on contract actions awarded by State contracting officers, a stipulation that would include actions under direct interagency contracting. 
According to State guidance, a COR is responsible for several functions related to oversight of contractor performance, including monitoring technical progress and the expenditures of resources related to the contract; informing the contracting officer, in writing, of any performance or schedule failure by the contractor or of any needed changes to the performance work statement or specifications; and performing inspection and accepting work on behalf of the U.S. government and reviewing and approving the contractor’s vouchers or invoices. State acquisition regulations, however, do not contain requirements or guidance regarding the assignment or training of CORs when using assisted interagency contracting. For assisted contracting actions, State’s acquisition officials view it as solely the servicing agency’s duty to ensure contractor oversight, rather than a responsibility that all involved parties share. Because State does not have requirements in place to ensure the assignment of appropriately trained oversight personnel, effective oversight depends on factors outside of State’s direct control. These factors include the rigor of a particular servicing agency’s policies and procedures and the involvement of State personnel who happen to be experienced and knowledgeable. In most of the seven cases of assisted interagency contracting we reviewed, the State personnel who performed oversight duties had programmatic knowledge and experience related to the requirements being fulfilled. However, servicing agency practices differed regarding COR designation and training. The State personnel assigned by the servicing agencies to oversee contractor performance had not always received training related to contract oversight or had their roles clearly designated. In three cases we reviewed, the servicing agencies took steps to ensure that oversight personnel were aware of their roles and responsibilities and had obtained the requisite training. 
In two other cases, the servicing agencies did not designate CORs, although State program officials were assigned some oversight responsibilities. In the first of these instances, the State program official told us she had already taken contract-related training. In the other instance, however, the State official had not received training related to contract oversight and explained to us that she often did not understand the documents the servicing agency asked her to sign, particularly with regard to the contracting terminology. Finally, in the last two cases, the servicing agencies designated State personnel as CORs but did not ensure that they received required training associated with these oversight responsibilities. In one of the cases we reviewed where the designated COR had not received required training, the servicing agency also did not keep COR designations up to date. The contracting officer at the servicing agency was not aware that the COR and other oversight personnel were no longer employed at State. In addition, the COR stated that he did not play a role in monitoring the time sheets or attendance of contract personnel, or providing performance feedback to the contracting officer. The official explained that, while he worked with the contractor to address any deficiencies related to performance, another official, who told us she had received COR training but who was not designated as the COR for the order, verified the accuracy of invoices. Ensuring the designation of appropriately trained CORs was particularly important for this order because it was a time-and-materials type contract, as were five of the six other assisted actions we reviewed. Time-and-materials contracts are considered high risk for the government because they offer no profit incentive to the contractor for cost control or labor efficiency. 
Therefore, it is important for the government to monitor contractor performance to ensure that the contractor is efficiently performing the work and effectively controlling costs. We and others have previously reported on problems with oversight of interagency contracting, including the risks of not clearly designating individuals responsible for providing oversight of contractor performance and of not ensuring that these individuals are properly trained to perform their duties. For instance, we reported that when the Army purchased interrogation support services through the Department of the Interior (Interior), Army personnel in Iraq responsible for overseeing contractor performance were not adequately trained to exercise their responsibilities. In this case, an Army investigative report concluded that the lack of training for the CORs assigned to monitor contractor performance at Abu Ghraib prison, as well as an inadequate number of assigned CORs, put the Army at risk of being unable to control poor performance or become aware of possible misconduct by contractor personnel. In 2007, the DOD Inspector General reported that DOD organizations were deficient in contract administration, including the surveillance of contractor performance and assignment of CORs when they made purchases through the Department of Veterans Affairs. The DOD Inspector General noted that interagency contracting requires strong internal controls, clear definition of roles and responsibilities, and sufficient training of both servicing and requesting activities personnel. In 2005, the State Inspector General identified problems associated with the oversight of assisted interagency contracts, noting the lack of documentation of activities to determine whether the contractor provided the specified deliverables. 
Conclusions Lacking information about the extent to which it uses interagency contracts, State is not positioned to make informed decisions about whether and when additional scrutiny, oversight, or other actions are necessary to ensure State’s interests are protected. The State First policy, put in place before other agencies widely reported the risks of interagency contracting, provided State with an opportunity to gain increased insight and control over when and how the department uses and pays for other agencies’ contracts and contracting support. However, subsequent exemptions, varying interpretations, and a lack of compliance monitoring of the State First policy have significantly limited that opportunity and restricted the ability of acquisition officials to manage State’s use of interagency contracts and the associated risks. Properly trained personnel are best positioned to oversee the delivery of goods and services, regardless of what agency placed the order. Yet State has not taken steps to ensure that such personnel are in place, which has exposed State to the same risks faced by other agencies. Due to the critical nature of State’s mission and the importance of contract support to fulfilling this mission, State cannot afford to abdicate responsibility for ensuring good acquisition outcomes, even when the contracting officer is at another agency. Recommendations for Executive Action To enable State to improve its management of interagency contracting, we recommend that the Secretary of State direct the Office of the Procurement Executive to take the following three actions: Develop, in consultation with the bureaus, a reliable means for tracking the use of interagency contracts so that the bureaus and acquisition officials can readily and reliably access data, such as the costs and associated fees. 
Analysis of such data could also be used to assess whether the State First process provides an accurate reflection of bureaus’ use of assisted interagency contracting. Work with the Office of Acquisitions Management, in coordination with the bureaus that make the most use of assisted interagency contracts, to clarify and refine the State First policy, including existing exemptions, and provide additional guidance as needed regarding which actions need review under the policy. Require bureaus seeking a State First waiver to identify in their request individual(s) who will be responsible for contract oversight and ensure they are trained to perform this key role. Agency Comments We provided a draft of this report to State for review and comment. In its written comments, State noted that the report captures the challenges posed by interagency contracting and agreed to implement the three recommendations. State’s comments are reprinted in their entirety in appendix III. State officials also provided technical comments that were incorporated where appropriate. We are sending copies of this report to interested congressional committees as well as the Secretary of State and the Director, Office of Management and Budget. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Appendix I: Scope and Methodology Our objectives were to evaluate (1) the extent to which State has insight into its use of interagency contracting; (2) State’s policies and procedures for deciding when to use assisted interagency contracting; and (3) State’s ability to ensure oversight of assisted interagency contracting. For the purposes of this review, we defined interagency contracting as including both direct actions (orders placed by one agency’s contracting officers on another agency’s contracts) and assisted actions (obtaining contract support services from other agencies). To evaluate the extent to which State has insight into its use of interagency contracting, we initially attempted to identify data systems that would provide reliable information on State’s use of interagency contracting. In consultation with senior acquisition officials at State, we determined that such information could not be obtained from existing data systems. We then requested data from 35 bureaus, as well as 18 of State’s 277 overseas posts with authority to conduct contracting activities, on fiscal year 2006 purchases of at least $25,000 made through both types of interagency contracts. We received responses from 34 of the 35 bureaus and all 18 posts. Because data submitted by State bureaus and posts were compiled by staff in various positions, we requested that the executive directors of the bureaus and the general services officers (GSO) of the overseas posts confirm that the data submitted on behalf of their bureaus or posts were complete and accurate. We received confirmations from 46 executive directors and GSOs; the remaining 6 did not respond to our request for confirmation. 
To assess the reliability of the assisted actions reported to us by State, we compared State’s data with similar data we requested and received from five servicing agencies—the General Services Administration (GSA), Interior, the Department of the Treasury (Treasury), the National Institutes of Health (NIH), and two Army commands. These five servicing agencies represented 86 percent of the dollar value of assisted actions reported to us by State. In addition, we compared both direct and assisted actions reported by State with data maintained in FPDS-NG. We considered a State reported action to be verified if an action with the same contract and order number, and a dollar value difference within 7 percent, could be found in either FPDS-NG or in data reported by a servicing agency. In addition, actions that were reported by State, but not within the scope of our work, were removed from the final data. Duplicate actions—such as those reported by both the Office of Acquisitions Management and the requiring bureau—were also deleted from the final data. After conducting extensive work to ensure the consistency of the data, we determined our final data set to be sufficient for our purposes. Because this was not an audit of the servicing agencies or FPDS-NG, we used data from these sources only as a point of comparison with State-reported data and did not attempt to verify these data. To evaluate State’s policies and procedures for deciding when to use assisted interagency contracting and State’s ability to ensure oversight of assisted interagency contracting, we conducted 10 case studies of interagency contracting at State. Using the fiscal year 2006 data reported to us by State and the servicing agencies, as well as our preliminary research, we selected 10 cases to represent a range of characteristics, as shown in table 1. 
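The verification rule described above (a match on contract and order number, with obligation amounts differing by no more than 7 percent) can be sketched as follows. The field names and sample records are hypothetical and do not reflect the actual formats of State, servicing-agency, or FPDS-NG data.

```python
# Sketch of the verification rule described above: a State-reported action
# counts as verified if an action with the same contract and order number,
# and a dollar value within 7 percent, appears in FPDS-NG or in data
# reported by a servicing agency. Field names and records are hypothetical.

def is_verified(state_action, comparison_records, tolerance=0.07):
    """Return True if any comparison record matches on contract and order
    number with an obligation amount within the given relative tolerance."""
    for rec in comparison_records:
        if (rec["contract"] != state_action["contract"]
                or rec["order"] != state_action["order"]):
            continue
        state_amount = state_action["amount"]
        if state_amount and abs(rec["amount"] - state_amount) / state_amount <= tolerance:
            return True
    return False

# Hypothetical data: the second record matches and differs by 4.5 percent.
state_action = {"contract": "HYPO-001", "order": "0007", "amount": 100_000}
agency_data = [
    {"contract": "HYPO-001", "order": "0003", "amount": 100_000},  # wrong order
    {"contract": "HYPO-001", "order": "0007", "amount": 104_500},  # within 7%
]
print(is_verified(state_action, agency_data))  # True
```

A rule of this kind deliberately trades precision for coverage: the 7 percent tolerance allows matches despite timing and rounding differences between systems, while identifier matching keeps unrelated actions from verifying one another.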
Three of the 10 cases were direct interagency actions, where contracting officers at State’s Office of Acquisitions Management placed orders on other agencies’ contracts on behalf of State bureaus. The other seven consisted of assisted interagency actions, where State used contracting officers at the servicing agencies to place and administer orders on State’s behalf. In addition, cases were selected to examine a variety of bureaus within State as well as a variety of servicing agencies. Our 10 cases represented 8 State bureaus and 5 servicing agencies. We did not include interagency contracting at overseas posts because the State First policy does not apply to overseas posts. For each case, we reviewed contract documentation from State, the servicing agency, or both. We also interviewed relevant officials including contracting officers, individuals performing contract oversight, and other program officials as necessary. Finally, we reviewed State acquisition regulations, policies, and guidance and interviewed agency officials to understand their implementation. We also reviewed relevant GAO and Inspectors General reports. We conducted this performance audit from June 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: State’s Use of Interagency Contracting in Fiscal Year 2006 We requested data on direct and assisted interagency contract actions of at least $25,000 in fiscal year 2006 from 35 State bureaus and 18 overseas posts. All but one bureau and all 18 posts responded to our data request. 
According to data these State bureaus and posts reported to us, State conducted over $800 million in interagency contracting in fiscal year 2006—$577.2 million for direct actions and $234.3 million for assisted actions. For direct actions, State reported the following: 94 percent of the dollar value of these actions was conducted by the Office of Acquisitions Management on behalf of 35 bureaus and several overseas posts. 98 percent of the reported dollars for direct actions in fiscal year 2006 were placed on GSA contracts, including schedule contracts. Other actions included orders placed through NIH, National Aeronautics and Space Administration, and DOD contracts, among others. For assisted actions, State reported the following: Assisted actions were concentrated in less than a third of the bureaus and overseas posts that responded to our data request—of the 52 bureaus and posts that submitted data, only 16 reported assisted actions in fiscal year 2006. The most extensive users of assisted interagency contracting in fiscal year 2006 included the bureaus of International Narcotics and Law Enforcement Affairs, which reported $95.3 million; Information Resource Management, which reported $72.6 million; Diplomatic Security, which reported $26.5 million; and Consular Affairs, which reported $12 million. The 16 bureaus that made use of assisted interagency contracting conducted these actions through several different servicing agencies, including GSA, DHS, Interior, Treasury, NIH, and others. Approximately 47 percent of the dollar value of assisted actions reported by these bureaus was placed by DOD on State’s behalf (see fig. 3). Appendix III: Comments from the Department of State Appendix IV: GAO Contacts and Staff Acknowledgments Staff Acknowledgments In addition to the contact named above, Johana R. 
Ayers, Assistant Director; Noah Bleicher; Greg Campbell; Theresa Chen; Alexandra Dew; Timothy DiNapoli; Kathryn Edelman; Arthur James, Jr.; Julia Kennon; and Winnie Tsen made key contributions to this report.
Interagency contracting--using another agency's contracts or contracting services--can provide agencies with opportunities to streamline the procurement process and achieve savings. However, GAO designated the management of interagency contracting a high-risk area in 2005 due, in part, to a lack of reliable data on its use and of clarity regarding contract management responsibilities. In 2002, the Department of State (State) issued the State First policy, requiring domestic bureaus to obtain approval from State acquisition officials before paying other agencies for contract support services. Under the Comptroller General's authority to conduct evaluations on his own initiative, GAO evaluated State's (1) insight into its use of interagency contracts, (2) policies on deciding when to use assisted interagency contracts, and (3) ability to ensure oversight. GAO's work included reviewing regulations, analyzing interagency contracting data, and conducting 10 case studies of direct and assisted interagency contracts that represented a range of State bureaus and servicing agencies. State officials have limited insight into the extent to which the department has used both methods of interagency contracting--direct, by placing its own orders on another agency's contract, and assisted, by obtaining contract support services from another agency. State officials cannot rely on the federal government's primary data system for tracking procurements to readily identify instances when State has used interagency contracts. Further, State's central procurement and accounting systems do not reliably and comprehensively identify when interagency contracts have been used. While State officials told GAO the most reliable way to identify interagency contract actions would be to request data on these actions from bureaus and overseas posts, several bureaus and posts had difficulty responding to such a request. 
State reported to GAO over $800 million in interagency contract actions in fiscal year 2006, but these data were incomplete. For example, State did not report $144 million in assisted contracting performed on its behalf by the Department of Defense. GAO has previously reported that the lack of reliable information on interagency contracts inhibits agencies from making sound contracting decisions and engaging in good management practices. Due to the way the State First policy has been implemented, State cannot ensure that decisions to use assisted interagency contracting are made by the appropriate acquisition officials. These officials often lack awareness of or involvement in decisions to use assisted interagency contracts. First, State acquisition officials have created exemptions limiting the assisted contract actions subject to their review under the policy. For example, State's guidance exempts funds transfers under the Foreign Assistance Act, under which bureaus conducting large amounts of interagency contracting operate. Second, bureaus have varying interpretations of when approvals are needed under the policy. Some bureaus seek approvals for individual contract actions related to specific requirements. Another bureau interprets the policy as only requiring approval for a new overarching interagency acquisition agreement, which can encompass multiple contract actions and fiscal years. Third, State acquisition officials do not monitor State First compliance, so they are not positioned to know whether the five approval requests received in fiscal year 2006 fully reflected the extent of that year's assisted interagency contracting. State's policies do not ensure that responsibilities for overseeing contractor performance on assisted interagency contracts are assigned to appropriately trained individuals. State acquisition regulations do not require trained oversight personnel to be assigned when using assisted interagency contracting. 
As a result, effective oversight depends on factors outside of State's control, such as the rigor of servicing agencies' oversight requirements, which vary. GAO identified cases where State personnel were given responsibility for overseeing contractor performance but had not received related training. GAO and others have reported that agencies' interests are put at risk when the individuals responsible for overseeing contractor performance are not clearly designated and have not been properly trained.
Background

Social Security is the largest source of retirement income for most American workers and their families. Since the program began paying benefits in 1940, Social Security has served as a publicly provided source of retirement income for workers. The program also provides benefits for dependents, survivors, and the disabled and covers about 96 percent of all workers. Social Security’s benefit structure is based on a formula that replaces specified percentages of lifetime average indexed earnings. The basic benefit formula is redistributive in that the percentage of lifetime earnings replaced (replacement rate) is higher for lower earners than it is for higher earners. Benefits for dependents and survivors are generally based on the earnings record of the worker from whom benefits are claimed. When Social Security was instituted, the age of eligibility for full benefits, or normal retirement age, was set at age 65. The Congress later enacted an early retirement age of 62 at which any worker could retire with actuarially reduced benefits. The normal retirement age is set to rise according to a phased-in schedule to age 67 by the year 2027.

Numerous Proposals Address Social Security’s Long-Term Solvency

Social Security is financed mainly through payroll taxes paid by workers and employers on covered earnings up to a maximum annual earnings level. The program is generally financed on a “pay-as-you-go” basis with the payroll taxes of current workers used to pay the benefits of current beneficiaries. Periodic surpluses of revenues over expenditures are credited to the Social Security Trust Funds, which represent future financial commitments by the government to the program. Current Trust Fund projections show that future revenues, including the amounts credited to the Trust Funds, will not be sufficient to finance full benefits in the year 2037 and thereafter.
The Congress has addressed Social Security’s solvency in previous reform efforts, notably the 1977 and 1983 Amendments to the Social Security Act. These reforms focused on modifying the program’s existing benefit and financing structures without introducing major changes in the program. They tended toward traditional options such as increasing the payroll tax rate or covered earnings, altering the benefit formula, and increasing the age of retirement. For example, the 1977 Amendments made technical changes to the benefit formula, lowered benefits, and set higher future payroll tax rates. The 1983 Amendments made a number of changes, including advancing the payroll tax rate increases enacted in 1977, increasing the number of workers covered under Social Security, and enacting a gradual rise in the normal retirement age to 67, which began to be effective this year. Despite the importance of these earlier reforms, there is relatively little evidence regarding their effects that is directly applicable to understanding the implications of current reform efforts on private pensions. Part of the reason for the lack of evidence is that the effects of Social Security reforms on pensions are intertwined with broader economic trends and coincident changes in tax and regulatory policies. The nature of the current reform debate changed when the 1994-1996 Social Security Advisory Council discussed a broader range of reforms. In addition to debating traditional reform options, the Advisory Council considered changing the basis for financing the program to include private investment. One option would involve government investment of Trust Fund assets in marketable financial securities. Another option would create an account for each worker, who could then invest in marketable securities.
While both of these options might reduce the future cost of Social Security to employers and workers, the individual account option would have greater potential implications for Social Security’s benefit structure. This report will focus on individual accounts rather than collective investment because of the implications for the benefit structure and because numerous proposals incorporating individual accounts with more traditional options have been put forth to address the program’s long-term solvency.

Role of Private Pensions Has Evolved Over Time

Before the creation of Social Security, private pensions played a modest role in providing retirement income. However, from 1940 to 1970, the percentage of private wage and salary workers participating in private pension plans increased from 15 to 45 percent due to a variety of factors, including changes in tax and labor policies. This growth in pension participation has slowed, however, and has stabilized since 1970 at about one-half of the workforce. Historically, the pension system developed with defined benefit pension plans as the predominant form of coverage. Defined benefit plans generally provide benefits using a specific formula based on the earnings and tenure of the worker. Typically, defined benefit plans are funded completely by the employer, who bears the investment risk of such an arrangement. The other major type of pension plan is the defined contribution plan, which generally involves contributions by the employer to an individual account held for the worker, with the worker bearing the investment risk. Some defined contribution plans are structured to allow contributions by the employee. Often, defined contribution plans are provided by employers as a supplement to defined benefit plans. The current framework of the private pension system was shaped largely by the Employee Retirement Income Security Act (ERISA) of 1974.
ERISA imposed specific vesting requirements, that is, the years of service after which participants are entitled to benefits, to protect pension promises and workers’ benefits. It established guidelines for the operation and funding of pension plans. ERISA also required plan termination insurance to protect workers’ benefits under private sector defined benefit pension plans. In part to encourage plan sponsorship, pension plans have long been accorded favorable income tax treatment under the tax code. Employer contributions to pension funds, up to certain limits, are deductible expenses, and employee contributions and the contributions’ investment earnings are deferred from taxation. The 1980s saw the institution of certain employee deferral defined contribution arrangements under section 401(k) of the Tax Code, which generally allow tax-deferred worker contributions in addition to employer contributions. The growth of 401(k) plans has been cited by experts as a major factor underlying the recent trend toward the greater availability of defined contribution plans, in addition to other factors, such as the increased costs of defined benefit plans partly associated with increased regulation and changes in income tax laws, which reduced the tax advantages of pensions.

Social Security and Pensions Are Key Sources of Retirement Income

Social Security provides about 38 percent of the aggregate cash income of the elderly (see fig. 1). Private pensions are a voluntary, employer-provided source of retirement income that comprises about 10 percent of aggregate elderly income. Also, some pensions are provided by public employers such as federal, state, and local governments. These comprise about 8 percent of aggregate elderly income. Pensions generally supplement Social Security benefits. For all but the lowest and highest income quintiles, pensions are the second most important source of retirement income (see fig. 2).
In contrast, the benefits provided by Social Security are most important to workers and households in the middle ranges of the income distribution. Social Security comprises over 80 percent of the retirement income for households in the first (lowest) and second quintiles of the distribution. For the third (middle) and fourth quintiles, Social Security still serves as the most important source of retirement income. For the highest quintile, pensions are a more significant source of retirement income than is Social Security (20.5 percent compared with 18.3 percent), but pensions represent a smaller share for this group than either personal savings or earnings. One factor underlying these data is that pensions are not a universal source of retirement income as is Social Security. As of 1999, about half of the working population was covered by a pension. Although a larger number of workers may obtain some pension coverage over an entire career, it is unlikely that pension coverage will ever match the nearly universal coverage provided by Social Security.

Linkages Between Social Security and Private Pensions Are Explicit and Implicit

Social Security and private pensions are key sources of retirement income that are linked through the employer costs associated with the compensation provided to workers. Employers consider Social Security provisions in designing pensions that complement their human resource and other business strategies. Some of the interactions between Social Security and pensions are explicit, insofar as pension laws and regulations permit employers to formally “integrate” or take account of Social Security benefits and contributions in designing their pension plans. In addition, many pension plans may be indirectly or implicitly linked with other Social Security features, such as the normal retirement age or eligibility criteria for disability benefits.
Pensions Are Designed to Address Employer Needs and Foster Adequate Income Replacement for Workers

Many employers choose to offer a pension plan to further their business strategies or objectives. Although employers are motivated to offer a pension plan for many reasons, the most important involve (1) the employer’s need to attract and retain a workforce in a competitive labor market and (2) the tax advantages, or preferences, associated with pensions. The employer’s pension plan decision also will be shaped by the nature and characteristics of the workforce available to the employer and the employer’s size and type of industry. These factors will enter into the decision about the type of plan the employer will choose to sponsor and the benefits it provides to individuals at different income levels. Employers typically want to attract workers based on their productivity, motivate them to perform efficiently in pursuit of the firm’s goals, and retain them to reduce the costs associated with turnover. Pensions provide a tool for accomplishing these objectives. For example, pensions are a means of providing deferred compensation that may encourage workers to make long-term commitments to employers. This may have the benefit of reducing turnover, making for a more stable, productive workforce. At the same time, employers also want to manage the retirement of their workforce, and pensions are a means of offering incentives for workers to retire sooner or later than they otherwise would. Employers also choose to sponsor pension plans because of the favorable federal tax treatment of pension contributions and asset returns. This favorable tax treatment lowers the cost of a dollar of pension compensation to workers relative to an additional dollar of cash earnings. Business owners and more highly paid employees find this tax treatment attractive.
Also, the tax advantages of pensions have traditionally played a role in the financial management of the corporation, allowing firms some flexibility in minimizing their tax liability and funding plans less expensively. For example, a firm may contribute, subject to certain conditions and limitations, more to the plan during profitable years, thus lowering its tax liability, and less during times when profitability is poor. In addition to motivations involving the labor market or tax preferences, workforce characteristics and other business-related factors enter into an employer’s decision to sponsor a plan and the form that plan takes. For example, the workforce characteristics of a small employer may differ from those of a large employer. Small employers may tend to employ workers for whom nonwage compensation is less important than wages. Such workers may be younger, less experienced, lower paid, and exhibit higher turnover and less attachment to full-time work than do workers in larger firms. An employer’s industry and occupational structure are also a consideration. Firms that use highly skilled labor may be more motivated to sponsor pensions than those using less skilled labor. After deciding to sponsor a plan, employers must determine the design of the plan and the benefits to be provided for workers. One of the most basic decisions is whether the plan is based on a defined benefit or a defined contribution, a decision that determines whether the employer or the worker bears the investment risk associated with funding the plan. Employers, in designing plans and setting benefit levels, will also consider a variety of factors, including the total retirement income that is considered adequate. One common measure of retirement income adequacy is the replacement rate, which represents the benefit amount in retirement for a single worker or household in relation to a measure of pre-retirement earnings, such as earnings in the year before retirement.
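The replacement-rate measure described above is a simple ratio. The following sketch uses hypothetical earnings and benefit figures (not drawn from this report) purely to illustrate the calculation:

```python
def replacement_rate(retirement_income, final_year_earnings):
    """Retirement income as a share of earnings in the year before retirement."""
    return retirement_income / final_year_earnings

# Hypothetical single worker: $50,000 final-year earnings; an $18,000
# Social Security benefit plus a $17,000 pension in the first retirement year.
print(f"{replacement_rate(18_000 + 17_000, 50_000):.0%}")  # 70%
```

On these illustrative numbers, Social Security and the pension together replace 70 percent of final earnings, with neither source sufficient on its own.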
Currently, many benefit professionals consider a 70 to 80 percent replacement rate as adequate to preserve the pre-retirement living standard. Social Security and pensions play complementary roles in helping workers attain an adequate retirement income. Because the Social Security benefit level provides a proportionately higher benefit for earners at lower levels of the distribution, some employers may balance this feature by designing plans to provide proportionately higher benefits to middle and higher income workers.

Social Security and Pensions Are Explicitly Linked Through Integration Provisions

Social Security reform could affect the employer’s pension plan decisions in cases where Social Security is explicitly linked to the pension plan’s provisions. One way that Social Security and pensions are explicitly related is through “integration,” or the consideration of Social Security benefits or contributions in calculating a private pension benefit. The concept of integration relates to the employers’ responsibility to provide Social Security contributions on behalf of workers and receive credit for these contributions in relation to their contributions or the benefits provided under their pension plans. In defined benefit plans, integration pertains to the benefits paid to participants; in defined contribution plans, it relates to the contributions made on behalf of workers by employers. Defined benefit plans commonly involve two methods of integration: In the offset method, the employer designs the plan to provide a given benefit based on the employee’s total compensation. A percentage of any Social Security benefit received is then deducted from the calculated pension benefit.
In the excess or step-rate method, one layer of benefits is generally based on the employee’s total compensation, and a second layer is based on compensation in excess of a specified dollar level termed the “integration level.” This method is analogous to the way defined contribution plans are integrated with Social Security on the basis of contributions. Explicit integration provisions remain a common feature of many pension plans even though their form has changed over time. The Congress substantially revised integration provisions in the Tax Reform Act of 1986, and there was a subsequent decline in the prevalence of integrated plans and a strong shift toward the use of the excess method. From 1986 to 1997, the percentage of all defined benefit plan participants in medium to large firms with an integrated plan declined from 62 to 49 percent. Moreover, the percentage of participants in defined benefit plans using the offset method declined from about 43 to 13 percent of all participants, while the percentage of participants in plans using the excess method increased from 24 to 36 percent (see fig. 3).

Pensions Are Implicitly Linked in Various Ways With Social Security

Because Social Security has a central role in providing retirement income, almost all pension plans are implicitly linked to Social Security, insofar as their design takes into account the provisions of and benefits provided by Social Security. As pension designs have evolved over time, they have incorporated specific features related to Social Security. Examples of such implicit linkage involve the specification of the age of benefit eligibility (retirement age) and benefit provisions for survivors and the disabled. An important implicit linkage with Social Security is pension plans’ specification of a normal retirement age. The retirement ages provided by Social Security form a basic framework around which plans design their provisions.
Private defined benefit pensions generally include age and service provisions that determine when an employee becomes eligible for benefits. While some plans have age-only or service-only retirement requirements, most plans base retirement benefit eligibility on a combination of age and service. These age and service provisions allow employers to structure plans in ways that permit eligible workers to retire earlier than the ages set by Social Security. However, most plans allow workers who meet minimum service (vesting) requirements to claim pension benefits by age 62 or age 65. Another example of an implicit linkage with Social Security occurs when plans provide “bridge benefits,” which give workers who retire under a private pension a supplement until they become eligible for a Social Security benefit at age 62. Social Security and pensions are also implicitly linked through provisions concerning survivor and disability benefits. Employer-sponsored pension plans offer benefits for surviving spouses of retired workers, referred to as joint and survivor benefits. Some pension plans also provide disability benefits. Joint and survivor benefits are provided in addition to the survivor benefits received under Social Security. Private disability benefits often supplement Social Security disability benefits. In 1995, more than 60 percent of full-time employees were covered under long-term disability plans. Although not required by law, many plans calculate the private disability benefit by offsetting the amount received from Social Security disability benefits in a manner somewhat analogous to the integration of pension benefits with Social Security old age benefits.

Traditional Social Security Reforms Could Affect Pension Plans by Changing the Incentives Facing Employers and Workers

Traditional reform options, such as reducing benefits or increasing payroll taxes, will likely affect the provision of employer-sponsored pensions.
The effects on pensions will depend on the nature (e.g., benefit cut or payroll tax increase), magnitude (e.g., cut benefits 10 percent or raise payroll taxes 5 percent), and timing (e.g., raise taxes or the retirement age immediately or in 2015) of the reforms. Effects will also depend on whether Social Security and pensions are explicitly linked, such as through integration provisions, or implicitly linked. For any of the reforms under consideration, the ultimate effects on pensions will depend on employers’ and workers’ responses. Employers will likely respond to reforms that change their compensation costs or reasons for sponsoring plans. For example, Social Security reforms that reduce benefits could raise plan costs for employers with “offset” integration features, or employers could redesign their plans to eliminate that feature, absorb the costs, or take other actions. Workers will likely respond to reforms that change their Social Security contributions, their expected benefits, or their incentives to save, work, or retire. The interactions between workers and employers in response to Social Security reform will determine the form that pensions take and may affect other sources of retirement income.

Traditional Social Security Reforms Will Affect Integrated Pension Plans

Social Security reform proposals that incorporate traditional options such as reducing benefits or increasing payroll taxes will directly affect private pension plans that are integrated with Social Security. Benefit reductions could raise compensation costs for employers with plans integrated using the offset method. Payroll tax increases implemented by changing the maximum taxable earnings level could affect the incentives present in plans using the excess method of integration.
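The excess (step-rate) mechanics at issue can be sketched with hypothetical accrual rates and an assumed integration level; none of the figures below come from the report, and actual plan formulas vary:

```python
def excess_pension(final_pay, years, integration_level,
                   base_rate=0.01, excess_rate=0.015):
    """Excess (step-rate) method: one benefit layer accrues on total
    compensation, and a second layer accrues on compensation above the
    integration level (often set at the Social Security taxable maximum).
    All rates here are hypothetical."""
    base = base_rate * years * final_pay
    excess = excess_rate * years * max(final_pay - integration_level, 0)
    return base + excess

# A 30-year worker earning $120,000, with an $80,000 integration level:
print(excess_pension(120_000, 30, 80_000))   # 36000 + 18000 = 54000.0
# If reform raised the taxable maximum and the plan's integration level
# followed it to $100,000, the second (excess) layer would shrink:
print(excess_pension(120_000, 30, 100_000))  # 36000 + 9000 = 45000.0
```

The arithmetic shows why raising the taxable earnings base can lower employer costs while weakening the plan's extra accrual for higher earners, the tension discussed below.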
Such changes could cause employers to consider redesigning their plans to eliminate integration features, but they might also consider supplementing benefits to employees at higher earning levels through nontax-qualified plans. Reform proposals that reduce benefits will most likely increase employer costs for plans using the offset integration method. Defined benefit plans that use the offset integration method generally reduce the accrued pension benefit by a portion of the benefit earned from Social Security. Thus, reform proposals that reduce benefits will automatically reduce the offset amount. As a result, the portion of the total benefit that will be provided by the pension will increase, thus increasing employer plan costs. Because the offset is partial, a reduction in Social Security benefits raises the required pension portion by only part of the reduction. For offset plan participants, a Social Security benefit reduction may therefore still reduce overall retirement income, because the increase in the amount coming from the pension only partially compensates for the Social Security reduction. Employers may respond to these changes in a variety of ways. For example, their responses could range from modifying the offset provision to absorbing the cost, presumably through reduced profitability, or shifting the cost through increased product prices or reduced employment. Alternatively, the employer could alter other forms of compensation, such as wage rates or health benefits, in addition to, or in lieu of, changing the pension plan. One factor that would mitigate the overall effect of higher employer costs in offset plans is that the prevalence of this method of integration in defined benefit plans has declined substantially since the Tax Reform Act of 1986 (see fig. 3). However, increasing the costs of using this method could reduce its prevalence even further.
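The partial-offset arithmetic described above can be traced with a small sketch. All dollar amounts and the 50 percent offset share are hypothetical, chosen only to show the direction of the effects:

```python
def offset_plan_split(formula_benefit, ss_benefit, offset_pct=0.5):
    """Partial-offset integration: the plan pays a formula benefit minus a
    share of the worker's Social Security benefit. Returns the employer-paid
    pension and the worker's combined retirement income."""
    pension = formula_benefit - offset_pct * ss_benefit
    return pension, pension + ss_benefit

# Before reform: $40,000 formula benefit, $20,000 Social Security benefit.
print(offset_plan_split(40_000, 20_000))  # pension 30000.0, total 50000.0
# After a 10 percent Social Security benefit cut ($20,000 -> $18,000):
print(offset_plan_split(40_000, 18_000))  # pension 31000.0, total 49000.0
```

On these illustrative numbers, the employer-paid pension rises by $1,000 while the worker's combined income still falls by $1,000: the benefit cut shifts part of its cost to the employer, and the worker absorbs the rest.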
Social Security reform proposals that increase revenues by increasing the taxable wage base could affect pension plans that are integrated with Social Security using the excess method. This could reduce plan costs but could also erode the effectiveness of the provision. This is because the excess method permits pension plans to have a higher contribution or accrual rate for employees above the “integration level,” which in most such plans is the maximum taxable ceiling on earnings covered by Social Security. As a result, if the level of maximum taxable earnings is raised, employers might adjust upwards the integration level of plans, thus reducing the number of covered workers eligible for higher contributions or accruals. This could reduce employer costs but could make the plan a less attractive incentive device both for the higher-earning employee and the employer interested in providing higher compensation for certain employees. Plans might be restructured with implications for benefit amounts, and some employers might reevaluate their motivations for plan sponsorship. One possible scenario is that employers might redesign their plans to provide more equal accruals across the earnings distribution and then supplement the benefits of their highly paid employees through the use of nontax-qualified plans, which is a current trend in the pension field. Such a development could result in lower benefits for lower and middle earners and could make higher earners and employers less interested in maintaining qualified pension plans.

Traditional Social Security Reforms Will Also Affect Pensions Through Other Implicit Linkages

Traditional Social Security reforms will also have implications for private pensions through implicit linkages between Social Security and pensions. Two implicit linkages involve increases in Social Security ages for normal and early retirement and increases in payroll tax rates.
Proposals to raise Social Security’s retirement ages and payroll tax rates could increase employer costs, with employers possibly responding by reevaluating their plan designs. Although little evidence on the effects of changes in the retirement age is available, increases in the retirement age would likely lead employers to review existing early retirement incentives in light of current labor market conditions and their long-term human resource objectives. Many employers have used defined benefit pensions as a tool for workforce management, especially in reducing turnover or encouraging early retirement. The ability to offer early retirement incentives through a pension allows employers to choose when to induce turnover if the firm views this goal as beneficial. Employers can then hire newer workers at lower compensation levels, or they can motivate midcareer employees with greater opportunities for advancement. Data suggest that plans with age-only retirement provisions tend to peg these provisions more closely to the Social Security age for normal retirement (65) and around 55 for early retirement. However, more typically, plans have age and service requirements; over time these have tended toward age 62 as the normal retirement age, with provisions allowing earlier retirement, such as at age 55. If Social Security retirement ages are raised, it is unclear whether or how employers might adjust retirement ages in private pensions. For example, while the 1983 Amendments enacted an increase in the retirement age that has begun to be phased in, little is known about how this has affected the retirement ages used in pensions. Employers will likely continue to determine pension retirement ages according to their workforce management objectives.
One factor that may induce employers to adjust pension retirement ages is the presence of “bridge benefits,” which provide a pension supplement to early retirees until they become eligible for Social Security retirement at early or normal ages. In such cases, higher Social Security retirement ages with no compensating adjustment in the plan’s retirement age could result in substantially higher pension costs to the employer. This could create an incentive to change the plan’s provisions for retirement age toward any higher Social Security retirement ages or to make other plan changes. Raising Social Security retirement ages could potentially create a larger gap between Social Security and private pension retirement age provisions and cause employers to rethink their retirement incentives in the context of current labor market trends. Some evidence indicates that employers might respond by seeking to retain the effectiveness of their retirement incentives. Although some employers will want to continue to use pensions as a tool to reduce or adjust the composition of their workforces, recent evidence indicates that some employers may now want to retain older workers. For example, some firms have been developing pension structures in which workers accumulate benefits more evenly over their careers, in contrast to the “back-loading” typical of traditional defined benefit plans. The introduction of “hybrid” arrangements such as “cash balance plans” could have the effect of reducing early retirement subsidies found in traditional defined benefit plans and could make it more attractive for firms to retain older workers in the future. Thus, to the extent that employers are implementing pension plan provisions that encourage workers to retire later, the potential effects of raising the Social Security retirement age on private pensions may be mitigated.
In contrast, defined contribution plans do not have the same incentive properties for early retirement found in many defined benefit plans. Defined contribution plans generally allow workers to withdraw their accumulations or benefits as a lump sum, without penalty, beginning at age 59½. Thus, changing Social Security retirement ages may have implications for the age at which individual workers choose to take their benefits. It is likely that some workers would delay retirement as they try to meet retirement income goals. Raising payroll taxes could lead employers to reduce plan benefits or even terminate some plans, which could reduce worker pension coverage. Higher payroll taxes would directly raise the compensation costs of employers in the short term and would likely trigger a series of economic adjustments to wages, prices, or employment. Pensions or other elements of compensation, such as health benefits or wages, could be reduced, leaving workers’ overall compensation and the firm’s profitability unchanged. Employer responses such as reducing plan benefits or terminating a plan could depend in part on the size of the firm. Because smaller employers are generally more sensitive to changes in their compensation costs, payroll tax increases could make them more likely to terminate their plans, compared with larger firms. If this is the case, then overall pension coverage might show a modest decrease, and the existing disparity in coverage and compensation between large and small firms might be exacerbated. In general, even though payroll taxes have been increased in the past, it is difficult to disentangle the effects of payroll taxes on employers and pensions from other influences.

Workers May Respond to Traditional Social Security Reforms That Affect Contributions or Future Benefits

A key element in evaluating the effect of traditional Social Security reforms on pensions involves workers’ responses to such reforms.
While employers play a primary role concerning the decision to sponsor and design a pension plan, worker demands can play a substantive role in influencing these decisions. In addition, workers make individual decisions that have implications for their future retirement income. These include decisions about consumption and saving, how much to work, and when to retire. Social Security reform could affect the private pension system through its effect on workers’ saving and consumption decisions. Most traditional reforms involve workers paying more for promised future benefits or accepting lower benefits. Workers could experience either a reduction in their current income available for consumption, if payroll taxes are increased, for example, or a lower anticipated retirement income, which would occur if benefit levels were reduced. Workers’ response to traditional Social Security reforms could take different forms. They might act to offset the effect of reforms by saving more, or working more and retiring later. Alternatively, they might not change their savings or employment levels but experience a reduced living standard or draw down other assets. If Social Security reform reduces anticipated retirement income, many analysts would expect that workers might, to some degree, want to offset this effect by increasing their saving outside the Social Security program. Workers could demand higher pension compensation or save individually by contributing to 401(k)s or individual retirement accounts (IRAs). Past research has considered whether the existence of Social Security and workers’ anticipation of future benefits reduced their savings in other assets. While some analysts found evidence of reduced saving, others disputed such an effect. More recent research has examined whether the creation of IRAs or 401(k)s has led to a net increase in individual saving.
Some economists contend that the contributions to these savings vehicles are offset by a reduction in other forms of saving, while others find that contributions are not completely offset and hence yield a net increase in savings by workers. The conflicting nature of these empirical debates illustrates the difficulty in drawing conclusions about the effects of proposed Social Security reforms on private pensions or national saving overall. Another related area in which Social Security reform could have implications for private pensions concerns workers’ decisions about how much to work and when to retire. Some analysts believe that if workers perceive a current or future decrease in income due to lower future benefits or higher payroll taxes, most would work more to make up for lower anticipated income. These reactions can also apply to their choice of retirement age. Workers anticipating less retirement income may choose to stay in the workforce longer than they might otherwise. Thus, benefit cuts or tax increases may create incentives for workers to work more in the current period or work more years to offset the effect of these changes. Such effects could imply that over time workers might tend toward higher labor force participation, more hours of work, or delayed or phased-in retirement. In turn, employers may redesign pensions to address such worker preferences. The movement toward defined contribution plan designs, which tend to reduce early retirement incentives, may be a trend consistent with these effects.

Structural Reforms to Social Security Could Have Implications for the Private Pension System

Including individual accounts as a reform feature raises key issues for the private pension system.
Implications for the private pension system will depend on how the individual account is structured (e.g., how it is financed and administered), its scope (e.g., whether it has voluntary or universal participation), and its interaction with other reform provisions (e.g., whether other benefits are reduced). Like more traditional reforms, the effects of an individual account reform feature on the pension system will occur through explicit and implicit linkages between Social Security and pensions and employer and worker responses to specific reforms. Because individual accounts have generally been proposed as a part of more comprehensive reform packages that include traditional reforms such as cutting benefits, it is difficult to disentangle their possible effects on private pensions.

Proposals for Individual Accounts Vary in Form and Raise Many Issues

The implications of individual accounts for the private pension system will depend on how the accounts are structured and administered. These issues include the magnitude and nature of the accounts’ funding, how the accounts are administered, whether participation is voluntary or universal, the degree of choice and control accorded to workers in regard to investment of account funds and the form in which benefits are received, and the interaction of individual account features with other reform provisions. The most basic structural issues concern the magnitude of the accounts’ financing (that is, the amount or the percentage of payroll devoted to the accounts) and the nature of that financing. While proposals vary, a number of them focus on creating accounts with a contribution of 2 percent of Social Security taxable payroll. This feature determines the future role accounts will play in Social Security financing and whether investment returns might alleviate the need for traditional reforms. The amount devoted to the account will also determine workers’ retirement income, contingent on investment performance.
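The way a contribution rate of this size compounds into retirement income can be illustrated with a minimal sketch. Only the 2 percent contribution rate comes from the proposals discussed above; the earnings level, career length, and rate of return are invented for illustration:

```python
# Hypothetical accumulation of a 2%-of-payroll individual account.
# The 2% rate follows the proposals discussed in the text; the earnings,
# career length, and return assumptions are invented for illustration.

def account_balance(annual_earnings, years, contribution_rate=0.02,
                    annual_return=0.05):
    """Future value of end-of-year contributions at a constant return."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + annual_earnings * contribution_rate
    return round(balance, 2)

# A $40,000 earner contributing over a 40-year career:
print(account_balance(40_000, years=40))                     # roughly $96,600 at a 5% return
print(account_balance(40_000, years=40, annual_return=0.0))  # 32000.0 (contributions alone)
```

As the text notes, the ending balance is contingent on investment performance: in this sketch, the difference between a 0 percent and a 5 percent annual return roughly triples the accumulation.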
The nature of the individual accounts’ financing also has implications for the private pension system to the extent that it affects the contributions employers make on behalf of workers. Some proposals implement individual accounts through a “carve-out,” which generally maintains the current level of payroll tax rates but devotes a portion of payroll taxes (e.g., 2 percent) to the individual account. Other proposals implement the individual accounts by means of an “add-on,” which generally creates accounts that supplement the current Social Security program and increases overall contributions to the system. In general, while add-on accounts would appear more likely to directly increase employers’ costs, assessing the implications of these different structures for private pensions is complicated because it is necessary to consider the entire reform package, which may include benefit cuts or other revenue measures such as general revenue financing or government borrowing. Also, the degree to which Social Security benefits are reduced or offset against the account is an important design issue. The accounts can be integrated into the Social Security benefit structure in a way that preserves all currently legislated benefits as a floor. This would limit the risk borne by the worker, while allowing the worker to share in the rewards if the account exceeds the returns implicit in the current Social Security benefit structure. This feature will also affect the overall cost of a Social Security reform package to workers and employers. Another key issue concerns whether the accounts would be structured to allow workers to hold the account entirely outside the Social Security program or whether the accounts would be set up through government institutions that would play a role in administering and channeling funds to investors. Employers are concerned about the resources and administrative costs they would have to devote to managing the accounts and financial flows.
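The arithmetic behind the carve-out and add-on structures described above can be shown with a minimal numeric sketch. It assumes the current-law 12.4 percent combined employer-employee OASDI payroll tax rate and the 2 percent account contribution discussed in the text; the worker's earnings are invented for illustration:

```python
# Illustrative "carve-out" vs. "add-on" individual account financing.
# The 12.4% figure is the combined employer-employee OASDI payroll tax
# rate under current law; the earnings level is hypothetical.

PAYROLL_TAX_RATE = 0.124   # combined OASDI rate
ACCOUNT_RATE = 0.02        # portion of payroll devoted to the account

def carve_out(earnings):
    """Total contributions are unchanged; 2% is redirected to the account."""
    account = round(earnings * ACCOUNT_RATE, 2)
    traditional = round(earnings * (PAYROLL_TAX_RATE - ACCOUNT_RATE), 2)
    return traditional, account, round(traditional + account, 2)

def add_on(earnings):
    """The account supplements the current program, raising total contributions."""
    account = round(earnings * ACCOUNT_RATE, 2)
    traditional = round(earnings * PAYROLL_TAX_RATE, 2)
    return traditional, account, round(traditional + account, 2)

print(carve_out(40_000))  # (4160.0, 800.0, 4960.0) -- total cost unchanged
print(add_on(40_000))     # (4960.0, 800.0, 5760.0) -- total cost rises
```

The sketch illustrates why add-on accounts appear more likely to raise employers' costs directly, while a carve-out leaves the total contribution unchanged but diverts revenue from the traditional program.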
Proposals differ in the degree to which the administration of the accounts is either centralized through government institutions or decentralized through employers and the financial industry. Some believe that retaining some government role could reduce administrative burdens on employers and workers, but others emphasize the advantages of expanded choice that could be made available to workers. The scope of an individual account reform (that is, whether the account is mandatory or voluntary) also could have implications for the private pension system. Some proposals include mandatory account provisions because they would appear to be more directly linked to Social Security, and in particular, to the universal nature of the program. Mandatory accounts thus provide a degree of certainty about the structure of Social Security that employers can take into account in responding to reform and in possibly redesigning their plans. The linkage between voluntary accounts and Social Security would appear to be more tenuous and more complicated to analyze. For example, some voluntary account designs are structured to supplement Social Security and would be hard to distinguish from retirement saving vehicles such as IRAs. Such similarities could result in low participation and minimal impact on most workers’ retirement income and on pension plans generally. Alternatively, voluntary accounts could be targeted to specific groups, such as young workers, lower income workers, or those who are not currently covered by a private pension. Such designs could address some of the concerns about the adequacy of benefits and gaps in coverage. Another important individual account design feature concerns the degree of choice and control that workers would have over their funds and the degree of flexibility that workers might have in accessing the funds in their accounts. Providing workers more options in which to invest their funds allows them to diversify risk and perhaps earn higher returns.
However, this can increase the costs of the system. Allowing greater flexibility in accessing funds could give workers greater control over the decision to retire. They could also have greater control over the form in which they receive their retirement income over their lifetime because they could choose annuities or keep a portion of their funds invested. However, allowing greater access to funds before retirement and greater choice in the form in which retirement income is received could complicate administration, increase costs, and possibly reduce future retirement income. The interaction of an individual account feature with the other provisions of a comprehensive reform also has consequences for the private pension system. In this instance, the net effects on the private pension system would depend on the other provisions included in the reform and the structure of the individual account feature. Individual accounts could either moderate or exacerbate these effects, depending on the exact features of the broader reform.

Individual Accounts Could Have Broad Implications for Integrated Plans and Employer-Worker Decisions

Structural reforms that create individual accounts could have an array of implications for private pensions. Some arise from the explicit and implicit linkages between Social Security and pensions and will depend on the responses of employers and workers. For example, individual accounts could affect private pensions if the employer chooses to change the type of integration provisions used. To the extent that individual accounts affect employer costs or workers’ reactions to risk, both employers’ incentives to provide pension coverage and workers’ incentives to participate in pensions could be affected.
One expert suggests that integrated pension plans may be affected through the definition of the Social Security benefit. The issue could arise where a portion of the individual’s total benefit comes from both the individual account and from Social Security. Plans currently must estimate the participant’s Social Security benefit to calculate the appropriate offset. With an individual account, estimating the benefit amount to determine the appropriate benefit offset could become more complicated and perhaps costly for the employer. This might compel employers to abandon the offset method in favor of the excess method of integration. For this method, they need only satisfy the rules regarding permitted disparity and do not need to calculate the total Social Security benefit accurately. Depending on their structure, individual accounts could increase employer costs by increasing contributions and imposing an administrative burden to maintain the accounts. Employers may respond to the higher costs associated with contributions to an add-on account in ways similar to those described for payroll taxes; worker behavior may also be affected. While large employers appear to be better able to handle the costs and administrative demands of an individual account system, smaller employers may face greater difficulties, such as reduced profitability, that could reduce their willingness to provide pensions. Worker reactions to the introduction of individual accounts will be shaped by the way in which they assess risk in relation to their retirement income.
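The contrast between the offset and excess integration methods described above can be made concrete with a minimal sketch. All of the formulas, accrual rates, and dollar amounts below are hypothetical and do not model the actual permitted-disparity rules; the point is only that the offset method requires an estimate of the Social Security benefit while the excess method does not:

```python
# Hypothetical sketch of the two pension integration methods discussed
# above. Accrual rates, the offset share, and the integration level are
# all invented; real plans must satisfy detailed permitted-disparity rules.

def offset_method(final_pay, estimated_ss_benefit, offset_share=0.5):
    """Offset method: the plan benefit is reduced by a share of the
    worker's estimated Social Security benefit, so the sponsor must
    estimate that benefit."""
    gross = 0.50 * final_pay  # hypothetical 50%-of-pay benefit formula
    return round(gross - offset_share * estimated_ss_benefit, 2)

def excess_method(final_pay, integration_level=35_000):
    """Excess method: a higher accrual rate applies only to pay above an
    integration level; no Social Security benefit estimate is needed."""
    base = 0.30 * min(final_pay, integration_level)
    excess = 0.45 * max(0.0, final_pay - integration_level)
    return round(base + excess, 2)

print(offset_method(60_000, estimated_ss_benefit=15_000))  # 22500.0
print(excess_method(60_000))                               # 21750.0
```

If individual accounts made the total Social Security benefit harder to estimate, the offset calculation, but not the excess calculation, would inherit that complication, which is the substitution the expert suggests.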
If individual accounts achieve higher rates of return than beneficiaries receive under Social Security, and these returns are captured in a way that improves program financing, the accounts could reduce or mitigate the benefit cuts and tax increases otherwise needed to pay promised levels of Social Security benefits. Therefore, individual accounts might reduce workers’ need to adjust to lower anticipated benefit levels or the higher taxes included in some reform proposals. However, individual accounts would also likely increase the level of risk or uncertainty associated with anticipated retirement income. Under individual account proposals that provide for a broad range of investment choices, workers might be able to choose an appropriate level of risk by adjusting investments within the account, or they might reallocate their pension-related assets to readjust the level of risk of their overall portfolios. For example, workers could offset an increase in risk from the individual account by adjusting the allocation between fixed income assets and equities in their 401(k) accounts. Workers might also exhibit increased demand for annuity-type products or perhaps even defined benefit pension arrangements. The ability to adjust to the introduction of individual accounts could be more problematic for workers who have limited knowledge of investment principles or who have few or no alternative assets, a sizable portion of the population. For example, data suggest that about one-quarter of all families nearing retirement have less than $25,000 in assets such as stocks, bonds, home equity, and bank accounts.

Implementing Individual Accounts Presents Challenges and Opportunities

Individual accounts raise a broader set of issues for private pensions compared with traditional reforms.
The design and implementation of individual accounts will not only affect employer costs but could also present employers and workers with substantial challenges in coordinating existing defined benefit and defined contribution pension plans with individual accounts under the current regulatory framework for pensions. At the same time, an individual account structure could provide an opportunity to expand access to private retirement income while increasing the choice and flexibility available to workers. Employers might seek to offset any higher costs arising from individual accounts by reducing or restructuring their existing pension plans. If the accounts achieve investment returns that are below historical trends or the implicit returns to Social Security, the adjustments that employers and workers may have to make to maintain expected retirement income could be exacerbated. For example, if accounts perform below expectations, workers may desire to delay retirement, which could make it more difficult for employers who may want to offer early retirement incentives to older workers. A related issue concerns the degree to which employers would want to, or have to, coordinate their existing pensions with individual accounts. For example, pensions and individual accounts could have very different rules for distributing benefits. A 401(k) account currently permits workers to take lump-sum distributions without penalty at age 59½, while Social Security individual accounts might restrict distributions until the age of eligibility for Social Security (62 or later). Existing regulations for receiving lump-sum distributions and the annuity options available to the worker might also be different for pensions and individual accounts, complicating administration by the employer. Addressing these challenges could result in substantial changes in the structure of the pension system, with possible implications for worker coverage and workers’ overall retirement income.
At the same time, individual accounts could present an opportunity to improve the nation’s retirement income system by providing workers with new opportunities to save for retirement and by expanding workers’ flexibility in managing their retirement assets. One of the most frequently cited problems with the private pension system is that it does not cover all workers. A mandatory, universal system of accounts could provide a defined contribution account to all workers, which would give them access to private investment returns. While workers may offset their individual account saving by saving less elsewhere or borrowing more, the ability to do so may be limited for some, and therefore an increase in total saving is possible. It may be argued that individual accounts could increase workers’ awareness of the need to save for retirement or help them become more knowledgeable about investment options. Individual accounts might also increase employers’ incentive to develop new pension vehicles to coordinate with individual accounts. While a voluntary system of individual accounts might draw low participation, the system could be structured to foster the goal of increased coverage if it is targeted to those groups with low pension coverage. Such an account design might also affect the ability and propensity of workers to save if such a voluntary supplemental saving option were linked with Social Security. While many employers are concerned about the potential administrative costs of an individual account system, there may be compensating advantages.
Some experts have noted that while individual accounts, either as a supplement to or as a partial substitute for Social Security, might entail higher administrative costs, these costs might allow employers to provide workers with greater options and services compared with Social Security. Given a range of investment options, individual accounts could also provide workers a means of spreading market risk and perhaps give them more flexibility in terms of retirement choice later in their careers by allowing them to phase into retirement.

Concluding Observations

While Social Security and private pensions are linked in many ways, they remain distinct institutional frameworks for providing retirement income. Given that little is known about the effects of changes in either of these systems alone, determining the possible effects and interactions of Social Security reform across these very different structures is a difficult task. Furthermore, with the proposed introduction of individual accounts, the current Social Security debate has introduced the possibility of a major structural change in the program. Such accounts, often proposed in combination with other more traditional reforms, raise an even broader set of possibilities and questions. Our retirement income institutions operate in a dynamic environment where workers, employers, and policymakers interact to pursue the goal of retirement income security. Numerous concurrent influences, such as changes in tax and regulatory policies, also affect how these institutions develop and evolve, and the reform debate should explicitly consider these other issues. The limited knowledge we have of these influences and the complexity of instituting policy change suggest that any reform should be taken with caution and careful deliberation. At the same time, establishing agreement on the fundamental principles underlying the reform should be emphasized.
These principles should include ensuring retirement income to those who most need it and encouraging the development of new opportunities to secure and expand the retirement income of future generations.

Agency and Other Comments

We provided draft copies of this report to the Social Security Administration, the Department of Labor’s Pension and Welfare Benefits Administration, and an external reviewer who is a university-based pension expert. They provided us with technical comments, which we incorporated where appropriate. In general, they concurred with our treatment of the issues as appropriate for an overview of the topic and noted that many issues discussed in the report could benefit from further research. We are sending copies of this report to the Honorable Charles B. Rangel, Ranking Minority Member, House Ways and Means Committee; other interested congressional committees; the Honorable Alexis M. Herman, Secretary of Labor; the Honorable Lawrence Summers, Secretary of Treasury; and the Honorable Kenneth S. Apfel, Commissioner of Social Security. We will also make copies available to others on request. If you have any questions concerning this report, please contact me at (202) 512-5491 or Charles Jeszeck at (202) 512-7036. Other major contributors include Kenneth J. Bombara and Gene Kuehneman.

References

Aaron, Henry J. Economic Effects of Social Security. Washington, D.C.: The Brookings Institution, 1982.

Advisory Council on Social Security. Report of the 1994-1996 Advisory Council on Social Security, Vols. I and II. Washington, D.C.: Advisory Council on Social Security, 1997.

Association of Private Pension and Welfare Plans. Looking to the Future: A New Perspective on the Social Security Problem. Washington, D.C.: APPWP, Mar. 2000.

Bender, Keith A. “Characteristics of Individuals With Integrated Pensions.” Social Security Bulletin, Vol. 62, No. 3 (1999), pp. 28-40.

Board of Trustees, Federal Old-Age and Survivors Insurance and Disability Insurance Trust Funds.
The 2000 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and Disability Insurance Trust Funds. Washington D.C.: U.S. Government Printing Office, Mar. 30, 2000.

Bureau of Labor Statistics. Employee Benefits in Medium and Large Establishments, 1997, Bulletin No. 2517. Washington, D.C.: U.S. Government Printing Office, Sept. 1999.

Bone, Christopher. “An Actuarial Perspective on How Social Security Reform Could Influence Employer-Sponsored Pensions,” in Prospects for Social Security Reform, O. Mitchell, R. J. Myers, and H. Young, eds. Philadelphia, Penn.: University of Pennsylvania Press, 1997, pp. 333-48.

Clark, Robert L., and Sylvester J. Schieber. “Taking the Subsidy Out of Early Retirement: The Story Behind the Conversion to Hybrid Pensions,” paper prepared for the Conference on Financial Innovations for Retirement Income, The Wharton School, University of Pennsylvania, Philadelphia, Penn., May 1-2, 2000.

Compensation Resource Group, Inc. Executive Benefits: A Survey of Current Trends, 1999 Survey Results. Pasadena, Calif.: CRG, 1999.

Employee Benefit Research Institute. The Future of Private Retirement Plans, Dallas L. Salisbury, ed. Washington, D.C.: EBRI, 2000.

_____. Beyond Ideology: Are Individual Social Security Accounts Feasible? Dallas L. Salisbury, ed. Washington, D.C.: EBRI, 1999.

_____. EBRI Databook, 3rd ed. Washington, D.C.: EBRI, 1995.

Engen, Eric M., and William G. Gale. The Effects of 401(k) Plans on Household Wealth: Differences Across Earnings Groups. Washington, D.C.: The Brookings Institution, August 2000.

ERISA Industry Committee. The Vital Connection: An Analysis of the Impact of Social Security Reform on Employer-Sponsored Retirement Plans. Washington, D.C.: ERISA Industry Committee, July 1998.

Feldstein, Martin, and Andrew Samwick. “The Transition Path in Privatizing Social Security,” in Privatizing Social Security, Martin Feldstein, ed. Chicago, Ill.: The University of Chicago Press, 1998, pp. 215-64.

Fields, Gary S., and Olivia S. Mitchell.
“The Effects of Social Security Reforms on Retirement Ages and Retirement Incomes.” Journal of Public Economics (Winter 1984).

Gale, William. “The Impact of Pensions and 401(k)s on Saving: A Critical Assessment of the State of the Literature,” paper prepared for the conference “ERISA After 25 Years: A Framework for Evaluating Pension Reform,” Washington, D.C., Sept. 17, 1999.

Gebhardtsbauer, Ron. Social Security Options and Their Effects on Different Demographic Groups. Washington D.C.: American Academy of Actuaries, June 21, 1999.

Gregory, Janice. “Possible Employer Responses to Social Security Reform,” in Prospects for Social Security Reform, O. Mitchell, R. J. Myers, and H. Young, eds. Philadelphia, Penn.: University of Pennsylvania Press, 1997, pp. 311-32.

Gustman, Alan, O. Mitchell, and T. Steinmeier. “The Role of Pensions in the Labor Market: A Survey of the Literature.” Industrial and Labor Relations Review, Vol. 47, No. 3 (Apr. 1994), pp. 417-38.

Ippolito, Richard A. “The New Pension Economics: Defined Contribution Plans and Sorting,” in The Future of Private Retirement Plans. Washington, D.C.: Employee Benefit Research Institute, 2000.

Kollmann, Geoffrey, Ray Schmitt, and Michelle Harman. Effect of Pension Integration on Retirement Benefits, report to the Congress (94-974 EPW). Washington, D.C.: Congressional Research Service, Dec. 6, 1994.

Lichtenstein, Jules. Social Security Reform: Pension Plan Integration With Social Security. Washington D.C.: American Association of Retired Persons, Oct. 1998.

Luzadis, Rebecca A., and Olivia S. Mitchell. “Explaining Pension Dynamics,” The Journal of Human Resources, Vol. 26, No. 4 (1991), pp. 679-703.

McGill, Dan M., Kyle N. Brown, John J. Haley, and Sylvester J. Schieber. Fundamentals of Private Pensions, 7th ed. Philadelphia, Penn.: University of Pennsylvania Press, 1996.

Mitchell, Olivia S. “New Trends in Pension Benefit and Retirement Provisions,” Working Paper No. 7381. Cambridge, Mass.: National Bureau of Economic Research, Oct. 1999.

_____.
“Administrative Costs in Public and Private Retirement Systems,” in Privatizing Social Security, Martin Feldstein, ed. Chicago, Ill.: The University of Chicago Press, 1998, pp. 403-52.

Olsen, Kelly A., and Jack L. VanDerhei. “Potential Consequences for Employers of Social Security ‘Privatization’: Policy Research Implications.” Risk Management and Insurance Review (Summer 1997), pp. 32-50.

Palmer, Bruce A. “Retirement Income Replacement Ratios: An Update.” Benefits Quarterly (Second Quarter, 1994), pp. 59-75.

Schieber, Sylvester J. “The Employee Retirement Income Security Act: Motivations, Provisions, and Implications for Retirement Income Security,” paper presented at the conference “ERISA After 25 Years: A Framework for Evaluating Pension Reform,” Washington D.C., Sept. 17, 1999.

Slusher, Chuck. “Pension Integration and Social Security Reform,” Social Security Bulletin, Vol. 61, No. 3 (1999), pp. 20-27.

Social Security Administration. Income of the Population, 55 or Older, 1998. Washington, D.C.: U.S. Government Printing Office, Mar. 2000.

_____. Social Security Bulletin, Annual Statistical Supplement. Washington, D.C.: U.S. Government Printing Office, 1999.

Myers, Robert J. Social Security, 4th ed. Philadelphia, Penn.: University of Pennsylvania Press, 1993.

Sass, Steven A. The Promise of Private Pensions: The First Hundred Years. Cambridge, Mass.: Harvard University Press, 1997.

Uccello, Cori E. “401(k) Investment Decisions and Social Security Reforms,” Working Paper No. 2000-04. Center for Retirement Research, Mar. 2000.

U.S. General Accounting Office. Pension Plans: Characteristics of Persons in the Labor Force Without Pension Coverage (GAO/HEHS-00-131, Aug. 30, 2000).

_____. Social Security: Evaluating Reform Proposals (GAO/AIMD/HEHS-00-29, Nov. 4, 1999).

_____. Social Security Reform: Implications of Private Annuities for Individual Accounts (GAO/HEHS-99-160, July 30, 1999).

_____. Social Security Reform: Implementation Issues for Individual Accounts (GAO/HEHS-99-122, June 18, 1999).

_____.
Social Security: Different Approaches for Addressing Program Solvency (GAO/HEHS-98-33, July 22, 1998).

_____. Integrating Pensions and Social Security: Trends Since 1986 Tax Law (GAO/HEHS-98-191R, July 6, 1998).

_____. Social Security Financing: Implications of Government Stock Investing for the Trust Fund, the Federal Budget, and the Economy (GAO/AIMD/HEHS-98-74, Apr. 22, 1998).

_____. 401(k) Pension Plans: Loan Provisions Enhance Participation But May Affect Income Security for Some (GAO/HEHS-98-5, Oct. 1, 1997).

_____. Retirement Income: Implications of Demographic Trends for Social Security and Pension Reform (GAO/HEHS-97-81, July 11, 1997).

Watson Wyatt Worldwide. The Unfolding of a Predictable Surprise: A Comprehensive Analysis of the Shift From Traditional to Hybrid Plans. Bethesda, Md.: Watson Wyatt Worldwide, 2000.
Pursuant to a congressional request, GAO provided information on the interactions between Social Security and private pensions, focusing on the: (1) primary linkages between Social Security and private pensions and the way they interact to provide retirement income for workers and families; (2) effects of traditional Social Security reforms on the structure of employer-sponsored pension plans through changes in the costs and incentives faced by employers and workers; and (3) effects of nontraditional reforms, such as individual accounts, on the structure of the private pension system. GAO noted that: (1) Social Security and private pensions are key sources of retirement income that are linked through the employer costs associated with the compensation provided to workers; (2) because pension plans serve as a supplement to Social Security, many plans are integrated--that is, they explicitly incorporate Social Security benefits or contributions into their plan design; (3) employers also implicitly consider Social Security provisions in designing pensions that complement their human resource and other business strategies; (4) traditional reforms in the Social Security program, such as changing benefits or taxes or raising the normal retirement age, may alter the incentives of workers and employers, which could prompt adjustments in private pension plans; (5) the effect of any specific reform will depend on the nature of the change, its magnitude, its time horizon for implementation, and its interaction with other provisions that comprise a comprehensive reform proposal; (6) employers' and workers' responses to reform will be shaped by a variety of factors, including the firm's size, the type of pension plan offered, and the economic status of the worker; (7) employers will respond to reforms that affect compensation costs or the incentives for sponsoring a plan; (8) the introduction of individual accounts raises a broad set of issues for private pensions, depending on
how such a reform is structured, its scope--whether it is voluntary or universal--and its interaction with other reforms as part of a broader reform proposal; (9) like more traditional reforms, the effects of an individual account feature on the pension system will depend on the explicit and implicit linkages between Social Security and pensions and employers' and workers' responses to specific reforms; (10) the nation's retirement income institutions operate in a dynamic environment where workers, employers, and policymakers interact to pursue the goal of retirement income security; (11) the complexity of making policy change suggests that any reform should be taken with careful deliberation; and (12) at the same time, ensuring retirement income for those who most need it and encouraging the development of new opportunities to secure and expand the retirement income of future generations should be emphasized.
Background

US-VISIT is a large, complex governmentwide program intended to collect, maintain, and share information on certain foreign nationals who enter and exit the United States; identify foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detect fraudulent travel documents, verify visitor identity, and determine visitor admissibility through the use of biometrics (digital fingerprints and a digital photograph); and facilitate information sharing and coordination within the immigration and border management community. The US-VISIT Program Office has responsibility for managing the acquisition, deployment, operation, and sustainment of US-VISIT and has been delivering US-VISIT capability incrementally based, in part, on statutory deadlines for implementing specific portions of US-VISIT. For example, the statutory deadline for implementing US-VISIT at the 50 busiest land POEs was December 31, 2004, and at the remaining POEs, December 31, 2005. From fiscal year 2003 through fiscal year 2007, total funding for the US-VISIT program has been about $1.7 billion. In reports on US-VISIT over the last 3 years, we have identified numerous challenges that DHS faces in delivering program capabilities and benefits on time and within budget. In September 2003, we reported that the US-VISIT program is a risky endeavor, both because of the type of program it is (large, complex, and potentially costly) and because of the way that it was being managed. We reported, for example, that the program’s acquisition management process had not been established, and that US-VISIT lacked a governance structure. In March 2004, we testified that DHS faces a major challenge maintaining border security while still welcoming visitors.
Preventing the entry of persons who pose a threat to the United States cannot be guaranteed, and the missed entry of just one can have severe consequences. Also, US-VISIT is to achieve the important law enforcement goal of identifying those who overstay or otherwise violate the terms of their visas. Complicating the achievement of these security and law enforcement goals are other key US-VISIT goals: facilitating trade and travel through POEs and providing for enforcement of U.S. privacy laws and regulations. Subsequently, in May 2004, we reported that DHS had not employed the kind of rigorous and disciplined management controls typically associated with successful programs. Moreover, in February 2006, we reported that while DHS had taken steps to implement most of the recommendations from our 2003 and 2004 reports, progress in critical areas had been slow. As of February 2006, of 18 recommendations we made since 2003, only 2 had been fully implemented, 11 had been partially implemented, and 5 were in the process of being implemented, although the extent to which they would be fully carried out was not yet known.

US-VISIT Scope, Operations, and Processing at Land POEs

Currently, US-VISIT's scope includes the pre-entry, entry, status, and exit of hundreds of millions of foreign national travelers who enter and leave the United States at over 300 air, sea, and land POEs. However, most land border crossers—including U.S. citizens, lawful permanent residents, and most Canadian and Mexican citizens—are, by regulation or statute, not required to enroll in US-VISIT. In fiscal year 2004, for example, U.S. citizens and lawful permanent residents constituted about 57 percent of land border crossers; Canadian and Mexican citizens constituted about 41 percent; and less than 2 percent were US-VISIT enrollees. Figure 1 shows the number and percentage of persons processed under US-VISIT as a percentage of all border crossings at land, air, and sea POEs in fiscal year 2004.
Foreign nationals subject to US-VISIT who intend to enter the country encounter different inspection processes at different types of POEs depending on their mode of travel. Those who intend to enter the United States at an air or sea POE are to be processed, for purposes of US-VISIT, in the primary inspection area upon arrival. Generally, these visitors are subject to prescreening, before they arrive, via passenger manifests, which are forwarded to CBP by commercial air or sea carrier in advance of arrival. By contrast, foreign nationals intending to enter the United States at land POEs are generally not subject to prescreening because they arrive in private vehicles or on foot and there is no manifest to record their pending arrival. Thus, when foreign nationals subject to US-VISIT arrive at a land POE in vehicles, they initially enter the primary inspection area where CBP officers, often located in booths, are to visually inspect travel documents and query the visitors about such matters as their place of birth and proposed destination. Visitors arriving as pedestrians enter an equivalent primary inspection area, generally inside a CBP building. If the CBP officer believes a more detailed inspection is needed or if the visitors are required to be processed under US-VISIT, the visitors are to be referred to the secondary inspection area—an area away from the primary inspection area—which is generally inside a facility. The secondary inspection area inside the facility generally contains office space, waiting areas, and space to process visitors, including US-VISIT enrollees. Equipment used for US-VISIT processing includes a computer, printer, digital camera, and a two-fingerprint scanner. Figure 2 shows how U.S. citizens and most Mexicans, Canadians, and foreign nationals subject to US-VISIT are to be processed at land POEs. 
As of August 2006, there were 170 land POEs geographically dispersed along the nation's more than 7,500 miles of borders with Canada and Mexico. Some are located in rural areas (such as Alexandria Bay, New York, and Blaine-Pacific Highway, Washington) and others in cities (such as Detroit) or in U.S. cities across from Mexican cities, such as Laredo and El Paso, Texas. The volume of visitor traffic at these POEs varied widely, with the busiest four POEs characterized by CBP, in fiscal year 2005, as San Ysidro, Calexico, and Otay Mesa, California, and Bridge of the Americas in El Paso, Texas.

DHS Had Installed US-VISIT Biometric Entry Capability at Nearly All Land POEs, but Faces Challenges Identifying and Monitoring the Operational Impacts on POE Facilities

My statement will now focus on what the US-VISIT Program Office had done to implement US-VISIT entry capabilities at land POEs and what impact US-VISIT has had on these facilities. At the time of our review, DHS had installed the entry portion of US-VISIT at 154 of the nation's 170 land POEs, usually with minimal new construction or changes to existing facilities. As required by law, the US-VISIT entry capability includes biometric features—such as digital scans of 2 fingerprints—to help verify the identity of visitors. CBP officials at all 21 land POEs we visited told us that US-VISIT's entry capability has generally enhanced their ability to process visitors subject to US-VISIT by providing assurance that visitors' identities can be confirmed through biometric identifiers and by automating the paperwork associated with processing I-94 arrival/departure forms. Going forward, DHS plans to introduce changes and enhancements to US-VISIT at land POEs intended to further bolster CBP's ability to verify the identity of individuals entering the country, including a transition from digitally scanning 2 fingerprints to scanning 10.
While such changes are intended to further enhance border security, deploying them may have an impact on aging and space-constrained land POE facilities because they could increase inspection times and adversely affect POE operations. Our site visits, interviews with US-VISIT and CBP officials, and the work of others suggest that both before and after US-VISIT entry capability was installed at land POEs, these facilities faced a number of challenges—operational and physical—including space constraints complicated by the logistics of processing high volumes of visitors and associated traffic congestion. Moreover, our work over the past 3 years showed that the US-VISIT Program Office had not taken necessary steps to help ensure that US-VISIT entry capability operates as intended. For example, in February 2006 we reported that the approach taken by the US-VISIT Program Office to evaluate the impact of US-VISIT on land POE facilities focused on changes in I-94 processing time at 5 POEs and did not examine other operational factors, such as US-VISIT's impact on physical facilities or work force requirements. As a result, program officials did not always have the information they needed to anticipate problems that occurred, such as problems processing high volumes of visitors in space-constrained facilities. Turning to another aspect of our work on US-VISIT entry capability, our December 2006 report stated that management controls did not always alert US-VISIT and CBP to operational problems. Our standards for internal controls in the federal government state that it is important for agencies to have controls in place to help ensure that policies and procedures are applied and that managers be made aware of problems so that they can be addressed and resolved in a timely fashion.
CBP officials at 12 of 21 land POE sites we visited told us about US-VISIT-related computer slowdowns and freezes that adversely affected visitor processing and inspection times, and at 9 of the 12 sites, computer processing problems were not always reported to CBP's computer help desk, as required by CBP guidelines. Although various controls are in place to alert US-VISIT and CBP officials to problems as they occur, these controls did not alert officials to all problems; officials had been unaware of the problems we identified until we brought them to their attention. These computer processing problems have the potential not only to inconvenience travelers because of the increased time needed to complete the inspection process, but also to compromise security, particularly if CBP officers are unable to perform biometric checks—one of the critical reasons US-VISIT was installed at POEs. Our internal control standards also call for agencies to establish performance measures throughout the organization so that actual performance can be compared to expected results. While the US-VISIT Program Office established performance measures for fiscal years 2005 and 2006 intended to gauge performance of various aspects of US-VISIT at air, sea, and land POEs in the aggregate, performance measures specifically for land POEs had not been developed. It is important to do so, given that there are significant operational and facility differences among these different types of POEs. Additional performance measures that consider operational and facility differences at land POEs would put US-VISIT program officials in a better position to identify problems, trends, and areas needing improvements.

DHS Cannot Currently Implement a Biometric US-VISIT Exit Capability at Land POEs and Faces Uncertainties as Testing of an Alternative Exit Strategy Continues

My statement will now focus on the challenges facing DHS as it attempts to implement a biometric exit capability at land POEs.
Various Factors Have Prevented US-VISIT from Implementing a Biometric Exit Capability

Various factors have prevented US-VISIT from implementing a biometric exit capability. Federal laws require the creation of a US-VISIT exit capability using biometric verification methods to ensure that the identity of visitors leaving the country can be matched biometrically against their entry records. However, according to officials at the US-VISIT Program Office and CBP and US-VISIT program documentation, there are interrelated logistical, technological, and infrastructure constraints that have precluded DHS from achieving this mandate, as well as cost factors affecting the feasibility of implementing such a solution. The major constraint to performing biometric verification upon exit at this time, in the US-VISIT Program Office's view, is that the only proven technology available would necessitate mirroring the processes currently in use for US-VISIT at entry. A mirror image system for exit would, like one for entry, require CBP officers at land POEs to examine the travel documents of those leaving the country, take fingerprints, compare visitors' facial features to photographs, and, if questions about identity arise, direct the departing visitor to secondary inspection for additional questioning. These steps would be carried out for exiting pedestrians as well as for persons exiting in vehicles. The US-VISIT Program Office concluded in January 2005 that the mirror-imaging solution was "an infeasible alternative for numerous reasons, including but not limited to, the additional staffing demands, new infrastructure requirements, and potential trade and commerce impacts." US-VISIT officials told us that they anticipated that a biometric exit process mirroring that used for entry could result in delays at land POEs with heavy daily volumes of visitors.
They also stated that in order to implement a mirror image biometric exit capability, additional lanes for exiting vehicles and additional inspection booths and staff would be needed, though they had not determined precisely how many. According to these officials, it is unclear how new traffic lanes and new facilities could be built at land POEs where space constraints already exist, such as those in congested urban areas. (For example, San Ysidro, California, currently has 24 entry lanes, each with its own staffed booth, and 6 unstaffed exit lanes. Thus, if full biometric exit capability were implemented using a mirror image approach, San Ysidro's current capacity of 6 exit lanes would have to be expanded to 24 exit lanes.) As shown in figure 3, based on observations during our site visit to the San Ysidro POE, the facility is surrounded by dense urban infrastructure, leaving little, if any, room to expand in place. Some of the 24 entry lanes for vehicle traffic heading northward from Mexico into the United States appear in the bottom left portion of the photograph, where vehicles are shown waiting to approach primary inspection at the facility; the six exit lanes (traffic toward Mexico), which do not have fixed inspection facilities, are at the upper left. Other POE facilities are similarly space-constrained. At the POE at Nogales-DeConcini, Arizona, for example, we observed that the facility is bordered by railroad tracks, a parking lot, and industrial or commercial buildings. In addition, CBP has identified space constraints at some rural POEs. For example, the Thousand Islands Bridge POE at Alexandria Bay, New York, is situated in what POE officials described as a "geological bowl," with tall rock outcroppings potentially hindering the ability to expand facilities at the current location.
Officials told us that in order to accommodate existing and anticipated traffic volume upon entry, they are in the early stages of planning to build an entirely new POE on a hill about a half-mile south of the present facility. CBP officials at the Blaine-Peace Arch POE in Washington state said that CBP also is considering whether to relocate and expand the POE facility, within the next 5 to 10 years, to better handle existing and projected traffic volume. According to US-VISIT program officials, none of the plans for any expanded, renovated, or relocated POE include a mirror image addition of exit lanes or facilities comparable to those existing for entry. In 2003, the US-VISIT Program Office estimated that it would cost approximately $3 billion to implement US-VISIT entry and exit capability at land POEs where US-VISIT was likely to be installed and that such an effort would have a major impact on facility infrastructure at land POEs. We did not assess the reliability of the 2003 estimate. The cost estimate did not separately break out costs for entry and exit construction, but did factor in the cost of building additional exit vehicle lanes and booths, as well as buildings and other infrastructure, that would be required to accommodate, at exit, a mirror image of the capabilities required for entry processing. US-VISIT program officials told us that they provided this estimate to congressional staff during a briefing, but that the reaction to this projected cost was negative and that they therefore did not move ahead with this option. No subsequent cost estimate updates had been prepared, and DHS's annual budget requests have not included funds to build the infrastructure that would be associated with the required facilities.
US-VISIT officials stated that they believe that technological advances over the next 5 to 10 years will make it possible to utilize alternative technologies that provide biometric verification of persons exiting the country without major changes to facility infrastructure and without requiring those exiting to stop and/or exit their vehicles, thereby precluding traffic backup, congestion, and resulting delays. US-VISIT's report assessing biometric alternatives noted that although limitations in technology currently preclude the use of biometric identification because visitors would have to be stopped, the use of as-yet-undeveloped biometric verification technology supports the long-term vision of the US-VISIT program. However, no such technology or device currently exists that would not have a major impact on facilities. The prospects for its development, manufacture, deployment, and reliable utilization are currently uncertain or unknown, although a prototype device that would permit a fingerprint to be read remotely without requiring the visitor to come to a full stop is under development. While logistical, technical, and cost constraints may prevent implementation of a biometrically based exit technology for US-VISIT at this time, it is important to note that there currently is no legislatively mandated date for implementation of such a solution. The Intelligence Reform and Terrorism Prevention Act of 2004 requires US-VISIT to collect biometric exit data from all individuals who are required to provide biometric entry data. The act did not, however, set a deadline for doing so. Although US-VISIT had set a December 2007 deadline for implementing exit capability at the 50 busiest land POEs, it has since determined that implementing exit capability by this date is no longer feasible, and a new date for doing so has not been set.
The US-VISIT Program Office Tested Nonbiometric Technology to Record Travelers' Departure, but Identified Numerous Performance and Reliability Problems

US-VISIT has tested nonbiometric technology to record travelers' departure, but testing showed numerous performance and reliability problems. Because there is at present no biometric technology that can be used to verify a traveler's exit from the country at land POEs without also making major and costly changes to POE infrastructure and facilities, US-VISIT tested radio frequency identification (RFID) technology as a nonbiometric means of recording visitors as they exit. RFID technology can be used to electronically identify and gather information contained on a tag—in this case, a unique identifying number embedded in a tag on a visitor's arrival/departure form—which an electronic reader at the POE is intended to detect. While RFID technology required few facility and infrastructure changes, US-VISIT's testing and analysis at five land POEs at the northern and southern borders identified numerous performance and reliability problems, such as the failure of RFID readers to detect a majority of travelers' tags during testing. For example, according to US-VISIT, at the Blaine-Pacific Highway test site, of 166 vehicles tested during a 1-week period, RFID readers correctly identified 14 percent—a sizable departure from the target read rate of 70 percent. Another problem that arose was that of cross-reads, in which multiple RFID readers installed on poles or structures over roads, called gantries, picked up information from the same visitor, regardless of whether the individual was entering or exiting in a vehicle or on foot. Thus, cross-reads resulted in inaccurate record keeping.
According to a January 2006 US-VISIT corrective-action report, remedying cross-reads would require changes to equipment and infrastructure on a case-by-case basis at each land POE, because each has a different physical configuration of buildings, roadways, roofs, gantries, poles, and other surfaces against which the signals can bounce and cause cross-reads. Each would therefore require a different physical solution to avoid the signal interference that triggers cross-reads. Although cost estimates or time lines had not been developed for such alterations to facilities and equipment, it is possible that having to alter the physical configuration at each land POE in some regard and then test each separately to ensure that cross-reads had been eliminated would be both time-consuming and potentially costly, in terms of changes to infrastructure and equipment. However, even if RFID deficiencies were to be fully addressed and deadlines set, questions remain about DHS's intentions going forward. For example, the RFID solution did not meet the congressional requirement for a biometric exit capability because the technology that had been tested cannot meet a key goal of US-VISIT—ensuring that visitors who enter the country are the same ones who leave. By design, an RFID tag embedded in an I-94 arrival/departure form cannot provide the biometric identity-matching capability that is envisioned as part of a comprehensive entry/exit border security system using biometric identifiers for tracking overstays and others entering, exiting, and re-entering the country. Specifically, the RFID tag in the I-94 form cannot be physically tied to an individual. This situation means that while a document may be detected as leaving the country, the person to whom it was issued at time of entry may be somewhere else. Our report also noted that DHS was to have reported to Congress by June 2005 on how the agency intended to fully implement a biometric entry/exit program.
As of October 2006, this plan was still under review in the Office of the Secretary, according to US-VISIT officials. According to statute, this plan is to include, among other things, a description of the manner in which the US-VISIT program meets the goals of a comprehensive entry and exit screening system—including both biometric entry and exit—and fulfills statutory obligations imposed on the program by several laws enacted between 1996 and 2002. Until such a plan is finalized and issued, DHS is not able to articulate how entry/exit concepts will fit together—including any interim nonbiometric solutions—and neither DHS nor Congress is positioned to prioritize and allocate resources for a US-VISIT exit capability or plan for the program's future.

DHS Had Not Articulated How US-VISIT Strategically Fits with Other Land Border Security Initiatives

My statement will now focus on DHS efforts to define how US-VISIT fits with other emerging border security initiatives. DHS had not articulated how US-VISIT strategically fits with other land border security initiatives. In recent years, DHS has planned or implemented a number of initiatives aimed at securing the nation's borders. In September 2003, we reported that agency programs need to properly fit within a common strategic context governing key aspects of program operations—e.g., what functions are to be performed by whom; when and where they are to be performed; what information is to be used to perform them; what rules and standards will govern the application of technology to support them; and what facility or infrastructure changes will be needed to ensure that they operate in harmony and as intended. We further stated that DHS had not defined key aspects of the larger homeland security environment in which US-VISIT would need to operate.
For example, certain policy and standards decisions had not been made, such as whether official travel documents would be required for all persons who enter and exit the country, including U.S. and Canadian citizens, and how many fingerprints would be collected—factors that could potentially increase inspection times and ultimately increase traveler wait times at some of the higher volume land POE facilities. To minimize the impact of these changes, we recommended that DHS clarify the context in which US-VISIT is to operate. Our December 2006 report noted that, 3 years later, defining this strategic context remained a work in progress. Thus, the program's relationships and dependencies with other closely allied initiatives and programs were still unclear. According to the US-VISIT Chief Strategist, the Program Office drafted a strategic plan in March 2005 that showed how US-VISIT would be strategically aligned with DHS's organizational mission and also defined an overall vision for immigration and border management. According to this official, the draft plan provided for an immigration and border management enterprise that unified multiple internal departmental and other external stakeholders with common objectives, strategies, processes, and infrastructures. As of October 2006, we were told that DHS had not approved this strategic plan. This draft plan was not available to us, and it is unclear how it could provide an overarching vision and road map for all of these component elements, given that critical elements of other emerging border security initiatives had yet to be finalized. For example, under the Intelligence Reform and Terrorism Prevention Act of 2004, DHS and the Department of State are to develop and implement a plan, no later than June 2009, that requires U.S.
citizens and foreign nationals of Canada, Bermuda, and Mexico to present a passport or other document or combination of documents deemed sufficient to show identity and citizenship to enter the United States (this is currently not a requirement for these individuals entering the United States via land POEs from within the Western Hemisphere). This effort, known as the Western Hemisphere Travel Initiative (WHTI), was first announced in 2005, and some members of Congress and others have raised questions about agencies' progress carrying out WHTI. In May 2006, we issued a report that provided our observations on efforts to implement WHTI along the U.S. border with Canada. We stated that DHS and the Department of State had taken some steps to carry out the Travel Initiative, but they had a long way to go to implement their proposed plans, and time was slipping by. Among other things, we found that key decisions had yet to be made about what documents other than a passport would be acceptable when U.S. citizens and citizens of Canada enter or return to the United States—a decision critical to determining how DHS is to inspect individuals entering the country, including what common facilities or infrastructure might be needed to perform these inspections at land POEs. We also found that a DHS and Department of State proposal to develop an alternative form of passport, called a PASS card, would rely on RFID technology to help DHS process U.S. citizens re-entering the country, but DHS had not made decisions involving a broad set of considerations that included (1) utilizing security features to protect personal information, (2) ensuring that proper equipment and facilities are in place to facilitate crossings at land borders, and (3) enhancing compatibility with other border crossing technology already in use. As of September 2006, DHS had still not finalized plans for changing the inspection process and using technology to process U.S.
citizens and foreign nationals of Canada, Bermuda, and Mexico re-entering or entering the country at land POEs. In the absence of decisions about the strategic direction of both programs, it was unclear (1) how the technology used to facilitate border crossings under the Travel Initiative would be integrated with US-VISIT technology, if at all, and (2) how land POE facilities would have to be modified to accommodate both programs to ensure efficient inspections that do not seriously affect wait times. This raises the possibility that CBP would be faced with managing differing technology platforms and border inspection processes at high-volume land POE facilities that, according to DHS, already face space constraints and congestion. Similarly, our December 2006 report noted that it is not clear how US-VISIT is to operate in relation to another emerging border security effort, the Secure Border Initiative (SBI)—a comprehensive DHS initiative, announced last year, to secure the country's borders and reduce illegal migration. Under SBI and its CBP component, called SBInet, DHS plans to use a systems approach to integrate personnel, infrastructures, technologies, and rapid response capability into a comprehensive border protection system. DHS reports that, among other things, SBInet is to encompass both the northern and southern land borders, including the Great Lakes, under a unified border control strategy whereby CBP is to focus on the interdiction of cross-border violations between the ports and at the official land POEs and funnel traffic to the land POEs. As part of SBI, DHS also plans to focus on interior enforcement—disrupting and dismantling cross-border crime into the interior of the United States while locating and removing aliens who are present in the United States in violation of law.
Although DHS has published some information on SBI and SBInet, it remains unclear how SBInet will be linked, if at all, to US-VISIT so that the two systems can share technology, infrastructure, and data across programs. Also, given the absence of a comprehensive entry and exit system, questions remain about what meaningful data US-VISIT may be able to provide other DHS components, such as Immigration and Customs Enforcement (ICE), to ensure that DHS can, from an interior enforcement perspective, identify and remove foreign nationals covered by US-VISIT who may have overstayed their visas. In a May 2004 report, we stated that although no firm estimates were available, the extent of overstaying is significant. We stated that most long-term overstays appeared to be motivated by economic opportunities, but a few had been identified as terrorists or involved in terrorist-related activities. Notably, some of the September 11 hijackers had overstayed their visas. We further reported that US-VISIT held promise for identifying and tracking overstays as long as it could overcome weaknesses in matching visitors' entry and exit.

Conclusions, Recommendations, and Agency Response

Developing and deploying complex technology that records the entry and exit of millions of visitors to the United States, verifies their identities to mitigate the likelihood that terrorists or criminals can enter or exit at will, and tracks persons who remain in the country longer than authorized is a worthy goal in our nation's effort to enhance border security in a post-9/11 era. But doing so also poses significant challenges; foremost among them is striking a reasonable balance between US-VISIT's goals of providing security to U.S. citizens and visitors while facilitating legitimate trade and travel.
DHS has made considerable progress making the entry portion of the US-VISIT program at land POEs operational, but our work raised questions about whether DHS has adequately assessed how US-VISIT has affected operations at land POEs. Because US-VISIT will likely continue to have an impact on land POE facilities as it evolves—especially as new technology and equipment are introduced—it is important for US-VISIT and CBP officials to have sufficient management controls for identifying and reporting potential computer and other operational problems that could affect the ability of US-VISIT entry capability to operate as intended. For example, if disruptions to US-VISIT computer operations are not consistently and promptly reported and resolved, it is possible that a critical US-VISIT function—notably, the ability to use biometric information to confirm visitors' identities through various databases—could be disrupted, as has occurred in the past. The need to avoid disruptions to biometric verification is important given that one of the primary goals of US-VISIT is to enhance the security of U.S. citizens and visitors, and in light of the substantial investment DHS has made in US-VISIT technology and equipment. To help DHS achieve benefits commensurate with its investment in US-VISIT at land POEs and with its security goals and objectives, we recommended that DHS (1) improve existing controls for identifying and reporting computer processing and other operational problems to help ensure that these controls are consistently administered and (2) develop performance measures specifically for assessing the impact of US-VISIT operations at land POEs. With respect to DHS's effort to create an exit verification capability, developing and deploying this capability at land POEs has posed a set of challenges that are distinct from those associated with entry.
US-VISIT has not determined whether it can achieve, in a realistic time frame or at an acceptable cost, the legislatively mandated capability to record the exit of travelers at land POEs using biometric technology. Even apart from the estimated billions of dollars needed to acquire new facilities and infrastructure, US-VISIT officials have acknowledged that no technology now exists to reliably record travelers' exit from the country, and to ensure that the person leaving the country is the same person who entered, without requiring that person to stop upon exit—potentially imposing a substantial burden on travelers and commerce. US-VISIT officials stated that they believe a biometrically based solution that does not require those exiting the country to stop for processing, that minimizes the need for major facility changes, and that can be used to definitively match a visitor's entry and exit will be available in 5 to 10 years. In the interim, it remains unclear how DHS plans to proceed. According to statute, DHS was required to report more than a year ago on its plans for developing a comprehensive biometric entry and exit system, but DHS has yet to finalize this road map for Congress. Until DHS finalizes such a plan, neither Congress nor DHS is likely to have sufficient information as a basis for decisions about various factors relevant to the success of US-VISIT, ranging from funding needed for any land POE facility modifications in support of the installation of exit technology to the trade-offs associated with ensuring traveler convenience while providing verification of travelers' departure consistent with US-VISIT's national security and law enforcement goals. We recommended that as DHS finalizes the mandated report, the Secretary take steps to ensure that the report includes, among other things, information on the costs, benefits, and feasibility of deploying biometric and nonbiometric exit capabilities at land POEs.
Our recommendation also stated that DHS’s report should include a description of how DHS plans to align US-VISIT with other emerging land border security initiatives and what facilities or facility modifications would be needed at land POEs to ensure that different technologies and processes work in harmony. By showing how these initiatives are to be aligned, Congress, DHS, and others would be in a better position to understand what resources and tools are needed for success and to ensure that land POE facilities are positioned to accommodate them. DHS generally agreed with our recommendations and stated that it either had begun to take or was planning to take actions to implement them. It acknowledged that the exit technology tested by DHS would not satisfy statutory requirements for a biometric exit system and said that it would perform research and industry outreach to satisfy the mandate. DHS, however, disagreed with our finding that the US-VISIT Program Office did not fully consider the impact of US-VISIT on the overall operations at POEs. It said that US-VISIT’s impacts are limited to changes in Form I-94 processing time, which, according to officials, improved, and that issues related to capacity, staffing, and other factors are “arguably” beyond the scope of US-VISIT. We agree that the operational assessments of US-VISIT’s impact on land POE facilities focused on changes to I-94 processing time. Our concern is that the assessments did not examine other operational factors, such as US-VISIT’s impact on physical facilities, to help ensure that US-VISIT operates as intended. We believe more complete assessments of the impact of US-VISIT on land POE operations would better position DHS to anticipate potential problems and develop solutions, especially as additional US-VISIT capabilities, such as 10-fingerprint scanning, are introduced at these facilities. This concludes my prepared testimony.
I would be happy to respond to any questions that Members of the Subcommittee may have.

GAO Contact and Staff Acknowledgments

For further information about this testimony, please contact me at (202) 512-8816. John Mortin, Assistant Director; Amy Bernstein; Frances Cook; Odi Cuero; Richard Hung; Amanda Miller; James R. Russell; and Jonathan Tumin made key contributions to this testimony.

Appendix I: Legislative Overview of the US-VISIT Program

The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 originally required the development of an automated entry and exit control system that would collect a record of departure for every alien departing the United States and match the record of departure with the record of the alien’s arrival in the United States; make it possible to identify nonimmigrants who remain in the country beyond the authorized period; and not significantly disrupt trade, tourism, or other legitimate cross-border traffic at land border ports of entry. It also required the integration of overstay information into appropriate databases of the Immigration and Naturalization Service and the Department of State, including those used at ports of entry and at consular offices. The system was originally to be developed by September 30, 1998; this deadline was changed to October 15, 1998, and then, for land border ports of entry and seaports, to March 30, 2001. The Immigration and Naturalization Service Data Management Improvement Act (DMIA) of 2000 replaced the 1996 statute in its entirety, requiring instead an electronic system that would provide access to and integrate alien arrival and departure data that are authorized or required to be created or collected under law, are in an electronic format, and are in a database of the Department of Justice or the Department of State, including those created or used at ports of entry and at consular offices.
The act specifically provided that it not be construed to permit the imposition of any new documentary or data collection requirements on any person for the purpose of satisfying its provisions, but it further provided that it also not be construed to reduce or curtail any authority of the Attorney General (now Secretary of Homeland Security) or Secretary of State under any other provision of law. The integrated entry and exit data system was to be implemented at airports and seaports by December 31, 2003, at the 50 busiest land ports of entry by December 31, 2004, and at all remaining ports of entry by December 31, 2005. The DMIA also required that the system use available data to produce a report of arriving and departing aliens by country of nationality, classification as an immigrant or nonimmigrant, and date of arrival in and departure from the United States. The system was to match an alien’s available arrival data with the alien’s available departure data, assist in the identification of possible overstays, and use available alien arrival and departure data for annual reports to Congress. These reports were to include the number of aliens for whom departure data were collected during the reporting period, with an accounting by country of nationality; the number of departing aliens whose departure data were successfully matched to the alien’s arrival data, with an accounting by country of nationality and classification as an immigrant or nonimmigrant; the number of aliens who arrived pursuant to a nonimmigrant visa, or as a visitor under the visa waiver program, for whom no matching departure data have been obtained as of the end of the alien’s authorized period of stay, with an accounting by country of nationality and date of arrival in the United States; and the number of identified overstays, with an accounting by country of nationality. 
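The arrival/departure matching and overstay reporting that the DMIA describes amounts, in data terms, to joining arrival records with departure records and filtering the unmatched ones. The following Python sketch is purely illustrative of that matching logic; the record fields, names, and thresholds are hypothetical and are not drawn from the statute or from US-VISIT's actual design:

```python
from dataclasses import dataclass
from datetime import date
from collections import Counter

# Hypothetical record types; field names are illustrative only.
@dataclass
class Arrival:
    traveler_id: str
    nationality: str
    arrived: date
    authorized_until: date

@dataclass
class Departure:
    traveler_id: str
    departed: date

def match_and_flag(arrivals, departures, as_of):
    """Match each arrival to a departure record; flag possible overstays.

    Returns matched (arrival, departure) pairs, unmatched arrivals past
    their authorized stay, and a count of those overstays by nationality.
    """
    departed = {d.traveler_id: d for d in departures}
    matched, overstays = [], []
    for a in arrivals:
        d = departed.get(a.traveler_id)
        if d is not None:
            matched.append((a, d))
        elif as_of > a.authorized_until:
            # No departure record and the authorized period has lapsed.
            overstays.append(a)
    by_nationality = Counter(a.nationality for a in overstays)
    return matched, overstays, by_nationality
```

The `by_nationality` tally mirrors the act's requirement that annual reports account for overstays by country of nationality; a production system would of course also handle re-entries, data-quality gaps, and partial matches, which this sketch omits.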
In 2001, the USA PATRIOT Act provided that, in developing the integrated entry and exit data system under the DMIA, the Attorney General (now Secretary of Homeland Security) and Secretary of State were to focus particularly on the utilization of biometric technology and the development of tamper-resistant documents readable at ports of entry. It also required that the system be able to interface with law enforcement databases for use by federal law enforcement to identify and detain individuals who pose a threat to the national security of the United States. The PATRIOT Act also required, by January 26, 2003, the development and certification of a technology standard, including appropriate biometric identifier standards, that can be used to verify the identity of persons applying for a U.S. visa or persons seeking to enter the United States pursuant to a visa for the purposes of conducting background checks, confirming identity, and ensuring that a person has not received a visa under a different name. This technology standard was to be the technological basis for a cross-agency, cross-platform electronic system that is a cost-effective, efficient, fully interoperable means to share law enforcement and intelligence information necessary to confirm the identity of persons applying for a U.S. visa or persons seeking to enter the United States pursuant to a visa. This electronic system was to be readily and easily accessible to consular officers, border inspection agents, and law enforcement and intelligence officers responsible for investigation or identification of aliens admitted to the United States pursuant to a visa. Every 2 years, beginning on October 26, 2002, the Attorney General (now Secretary of Homeland Security) and the Secretary of State were to jointly report to Congress on the development, implementation, efficacy, and privacy implications of the technology standard and electronic database system.
The Enhanced Border Security and Visa Entry Reform Act of 2002 required that, in developing the integrated entry and exit data system for the ports of entry under the DMIA, the Attorney General (now Secretary of Homeland Security) and Secretary of State implement, fund, and use the technology standard required by the USA PATRIOT Act at U.S. ports of entry and at consular posts abroad. The act also required the establishment of a database containing the arrival and departure data from machine-readable visas, passports, and other travel and entry documents possessed by aliens and the interoperability of all security databases relevant to making determinations of admissibility under section 212 of the Immigration and Nationality Act. In implementing these requirements, the INS (now the Department of Homeland Security) and the Department of State were to utilize technologies that facilitate the lawful and efficient cross-border movement of commerce and persons without compromising the safety and security of the United States and were to consider implementing a North American National Security Program, for which other provisions in the act called for a feasibility study. The act, as amended, also established a number of requirements regarding biometric travel and entry documents. It required that, not later than October 26, 2004, the Attorney General (now Secretary of Homeland Security) and the Secretary of State issue to aliens only machine-readable, tamper-resistant visas and other travel and entry documents that use biometric identifiers and that they jointly establish document authentication standards and biometric identifier standards to be employed on such visas and other travel and entry documents from among those biometric identifiers recognized by domestic and international standards organizations.
It also required, by October 26, 2005, the installation at all U.S. ports of entry of equipment and software to allow biometric comparison and authentication of all U.S. visas and other travel and entry documents issued to aliens and passports issued by visa waiver participants. Such biometric data readers and scanners were to be those that domestic and international standards organizations determine to be highly accurate when used to verify identity, that can read the biometric identifiers used under the act, and that can authenticate the document presented to verify identity. These systems also were to utilize the technology standard established pursuant to the PATRIOT Act. The Intelligence Reform and Terrorism Prevention Act of 2004 did not amend the existing statutory provisions governing US-VISIT, but it did establish additional statutory requirements concerning the program. It described the program as an “automated biometric entry and exit data system” and required DHS to develop a plan to accelerate the full implementation of the program and to report to Congress on this plan by June 15, 2005. The report was to provide several types of information about the implementation of US-VISIT, including a “listing of ports of entry and other DHS and Department of State locations with biometric exit data systems in use.” The report also was to provide a description of the manner in which the US-VISIT program meets the goals of a comprehensive entry and exit screening system, “including both entry and exit biometric;” and fulfills the statutory obligations imposed on the program by several laws enacted between 1996 and 2002.
The act provided that US-VISIT “shall include a requirement for the collection of biometric exit data for all categories of individuals who are required to provide biometric entry data, regardless of the port of entry where such categories of individuals entered the United States.” The new provisions in the 2004 act also addressed integration and interoperability of databases and data systems that process or contain information on aliens and federal law enforcement and intelligence information relevant to visa issuance and admissibility of aliens; maintaining the accuracy and integrity of the US-VISIT data system; using the system to track and facilitate the processing of immigration benefits using biometric identifiers; the goals of the program (e.g., serving as a vital counterterrorism tool, screening visitors efficiently and in a welcoming manner, integrating relevant databases and plans for database modifications to address volume increase and database usage, and providing inspectors and related personnel with adequate real time information); training, education, and outreach on US-VISIT, low-risk visitor programs, and immigration law; annual compliance reports by DHS, State, the Department of Justice, and any other department or agency subject to the requirements of the new provisions; and development and implementation of a registered traveler program.
This testimony summarizes a December 2006 GAO report on the Department of Homeland Security's (DHS) efforts to implement the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program at land ports of entry (POE). US-VISIT is designed to collect, maintain, and share data on selected foreign nationals entering and exiting the United States at air, sea, and land POEs. These data, including biometric identifiers like digital fingerprints, are to be used to screen persons against watch lists, verify identities, and record arrival and departure. This testimony addresses DHS's efforts to (1) implement US-VISIT entry capability, (2) implement US-VISIT exit capability, and (3) define how US-VISIT fits with other emerging border security initiatives. GAO analyzed DHS and US-VISIT documents, interviewed program officials, and visited 21 land POEs with varied traffic levels on both borders. US-VISIT entry capability had been installed at 154 of the 170 land POEs. Officials at all 21 sites GAO visited reported that US-VISIT had improved their ability to process visitors and verify identities. DHS plans to further enhance US-VISIT's capabilities by, among other things, requiring new technology and equipment for scanning all 10 fingerprints. While this may aid border security, installation could increase processing times and adversely affect operations at land POEs where space constraints, traffic congestion, and processing delays already exist. GAO's work indicated that management controls in place to identify such problems and evaluate operations were insufficient and inconsistently administered. For example, GAO identified computer processing problems at 12 sites visited; at 9 of these, the problems were not always reported. US-VISIT has developed performance measures, but measures to gauge factors that uniquely affect land POE operations were not developed; these would put US-VISIT officials in a better position to identify areas for improvement. 
US-VISIT officials concluded that, for various reasons, a biometric US-VISIT exit capability cannot now be implemented without incurring a major impact on land POE facilities. An interim nonbiometric exit technology tested did not meet the statutory requirement for a biometric exit capability and thus cannot ensure that visitors who enter the country are those who leave. DHS had not yet reported to Congress on a required plan describing how it intended to fully implement a biometric entry/exit program or use nonbiometric solutions. Until this plan is finalized, neither DHS nor Congress is in a good position to prioritize and allocate program resources or plan for POE facilities modifications. DHS had not articulated how US-VISIT is to align with other emerging land border security initiatives and mandates, and thus could not ensure that the program would meet strategic program goals and operate cost effectively at land POEs. Knowing how US-VISIT is to work with these initiatives, such as one requiring U.S. citizens, Canadians, and others to present passports or other documents at land POEs in 2009, is important for understanding the broader strategic context for US-VISIT and identifying resources, tools, and potential facility modifications needed to ensure success.